Machine Epsilon and ULP in IEEE 754

Understand machine epsilon (the gap between 1.0 and the next representable float) and ULP (unit in the last place). Essential for numerical analysis and error bounds.

Precision

Decimal Value: 2.220446049250313e-16
Float32 Hex: 0x34000000
Float64 Hex: 0x3CB0000000000000

Detailed Explanation

Machine epsilon is a fundamental constant in floating-point arithmetic that quantifies the finest granularity of precision available. It is defined as the gap between 1.0 and the next larger representable value. (The common informal definition, "the smallest e such that 1.0 + e > 1.0," actually yields a value just over epsilon/2 under round-to-nearest, so the gap definition is the precise one.)

Machine epsilon values:

Precision   Machine Epsilon       Hex
Float32     2^-23 ≈ 1.192e-7      0x34000000
Float64     2^-52 ≈ 2.220e-16     0x3CB0000000000000

Why these specific values?

Float64 has 52 mantissa bits. The value 1.0 is stored as 1.000...0 (52 zeros after the binary point). The next representable number is 1.000...01 (51 zeros then a 1), which equals 1 + 2^-52. Therefore, the gap between 1.0 and the next float is 2^-52.
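This gap can be observed directly in JavaScript, where Number.EPSILON is the built-in name for 2^-52 (a minimal sketch):

```javascript
// Number.EPSILON is exactly 2^-52, the gap between 1.0 and the next float64.
console.log(Number.EPSILON === Math.pow(2, -52)); // true

// Adding the full gap produces the next representable value...
console.log(1 + Math.pow(2, -52) > 1); // true

// ...but adding half the gap ties to even and rounds back down to 1.0.
console.log(1 + Math.pow(2, -53) === 1); // true
```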

ULP — Unit in the Last Place:

While machine epsilon measures the gap at 1.0, the ULP measures the gap at any value. The ULP of a number x is the value of the least significant mantissa bit at x's magnitude.

For a float64 value x with exponent e: ULP(x) = 2^(e - 52)

Examples:

  • At x = 1.0 (exponent 0): ULP = 2^-52 ≈ 2.22e-16
  • At x = 1024.0 (exponent 10): ULP = 2^-42 ≈ 2.27e-13
  • At x = 1e15 (exponent ~49): ULP = 2^-3 = 0.125
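These values can be checked with a small helper that reads the exponent bits out of the raw float64 representation and applies the formula above (a sketch assuming x is finite and normal; ulp is a hypothetical helper name):

```javascript
// ULP of a finite, normal float64: 2^(e - 52), where e is the unbiased exponent.
function ulp(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian: sign and exponent land in the first bytes
  const biased = (view.getUint16(0) >> 4) & 0x7ff; // top 11 bits after the sign
  return Math.pow(2, biased - 1023 - 52);          // 2^(e - 52)
}

console.log(ulp(1.0));    // 2.220446049250313e-16  (2^-52)
console.log(ulp(1024.0)); // 2.2737367544323206e-13 (2^-42)
console.log(ulp(1e15));   // 0.125                  (2^-3)
```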

The practical implication:

As numbers get larger, the gap between adjacent representable values grows. At 1e15 the gap between neighbors is 0.125: integers are still exact, but any fractional part is rounded to the nearest multiple of 0.125. From 2^53 upward the gap reaches 2, so consecutive integers can no longer all be represented. This is why Number.MAX_SAFE_INTEGER = 2^53 - 1 in JavaScript.
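A quick sketch of both effects in JavaScript:

```javascript
// At 1e15 the spacing is 0.125, so additions snap to the nearest multiple of 0.125.
console.log(1e15 + 0.06 === 1e15);         // true: 0.06 rounds away entirely
console.log(1e15 + 0.07 === 1e15 + 0.125); // true: 0.07 rounds up a full step

// Above 2^53 the spacing is 2, so odd integers cannot be represented.
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53));          // true
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1);  // true
```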

Using epsilon for comparisons:

A simple epsilon comparison works near 1.0:

Math.abs(a - b) < Number.EPSILON

But for larger values, you need a scaled epsilon:

Math.abs(a - b) < Math.max(Math.abs(a), Math.abs(b)) * Number.EPSILON * factor

The factor depends on how many floating-point operations accumulated error. A typical factor for well-conditioned algorithms is 4-16.
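A sketch of such a scaled comparison (approxEqual is a hypothetical helper; the factor of 8 is an assumed error budget for a handful of rounding steps):

```javascript
// Relative-tolerance comparison, scaled by the larger magnitude of the inputs.
function approxEqual(a, b, factor = 8) {
  const scale = Math.max(Math.abs(a), Math.abs(b));
  return Math.abs(a - b) <= scale * Number.EPSILON * factor;
}

console.log(0.1 + 0.2 === 0.3);           // false: classic rounding mismatch
console.log(approxEqual(0.1 + 0.2, 0.3)); // true: within scaled tolerance
```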

Kahan summation and error:

In numerical analysis, machine epsilon is used to bound accumulated error. Kahan summation, for example, achieves a relative error bound of about 2*epsilon (plus an O(n * epsilon^2) term) regardless of the number of additions, whereas naive summation's worst-case error grows like O(n * epsilon).
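A sketch of Kahan compensated summation, compared against a naive loop (kahanSum is a hypothetical helper name):

```javascript
// Kahan summation: carries the low-order rounding error in `c`
// so it can be folded back into the next addition.
function kahanSum(values) {
  let sum = 0;
  let c = 0; // running compensation for lost low-order bits
  for (const v of values) {
    const y = v - c;   // subtract the previously lost error
    const t = sum + y; // low-order bits of y are lost here...
    c = (t - sum) - y; // ...and recovered algebraically
    sum = t;
  }
  return sum;
}

// Summing ten million copies of 0.1: the naive loop drifts measurably,
// while the compensated sum stays within about one ULP of the true total.
const xs = new Float64Array(10000000).fill(0.1);
let naive = 0;
for (const v of xs) naive += v;
console.log(Math.abs(kahanSum(xs) - 1e6) <= Math.abs(naive - 1e6)); // true
```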

Use Case

Machine epsilon is essential in numerical analysis for computing error bounds, implementing convergence criteria for iterative algorithms, designing robust floating-point comparisons, and understanding the limitations of double-precision arithmetic in financial and scientific applications.
