Positive and Negative Zero in IEEE 754
Understand why IEEE 754 has both +0 and -0. Learn their bit patterns, when they arise in computations, and how to detect them in JavaScript and other languages.
| Decimal Value | Float32 Hex | Float64 Hex |
|---|---|---|
| 0 | 0x00000000 | 0x0000000000000000 |
Detailed Explanation
IEEE 754 defines two distinct representations for zero: positive zero (+0) and negative zero (-0). While they compare as equal in standard comparisons, they are stored differently and can produce different results in certain operations.
Bit patterns:
| Value | Float32 | Float64 |
|---|---|---|
| +0 | 0x00000000 (all 32 bits zero) | 0x0000000000000000 |
| -0 | 0x80000000 (sign bit = 1, rest zero) | 0x8000000000000000 |
The only difference is the sign bit. Both have all-zero exponent and all-zero mantissa fields.
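You can verify these bit patterns directly. The sketch below (plain JavaScript; `floatBits` is a helper name chosen for this example) writes a Float64 into a buffer and reads the same bytes back as an unsigned 64-bit integer:

```javascript
// Inspect the raw Float64 bit pattern of a number.
// Writing and reading through the same buffer uses the same byte order,
// so the recovered integer is the IEEE 754 bit pattern regardless of platform.
function floatBits(x) {
  const buf = new ArrayBuffer(8);
  new Float64Array(buf)[0] = x;
  return new BigUint64Array(buf)[0];
}

console.log(floatBits(0).toString(16));  // "0"
console.log(floatBits(-0).toString(16)); // "8000000000000000"
```

Only the topmost (sign) bit differs between the two results.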
When does -0 arise?
Negative zero is produced by:
- Multiplying a positive number by `-0`: `5 * (-0)` gives `-0`
- Dividing a negative number by positive infinity: `-1 / Infinity` gives `-0`
- Negating zero: `-(0)` gives `-0` (as does the literal `-0` in code)
- Rounding very small negative numbers toward zero
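Each of the cases above can be checked with `Object.is`, which distinguishes the two zeros (a quick sketch in plain JavaScript):

```javascript
// Operations that produce negative zero, verified with Object.is.
console.log(Object.is(5 * (-0), -0));        // true: positive * -0
console.log(Object.is(-1 / Infinity, -0));   // true: negative / +Infinity
console.log(Object.is(-(0), -0));            // true: negating +0
console.log(Object.is(-1e-400, -0));         // true: underflows toward zero
```

The last line works because `1e-400` is below the smallest representable Float64 subnormal (about 5e-324), so it rounds to zero and the sign is preserved.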
Equality and comparison:
In most languages, +0 === -0 evaluates to true. In JavaScript, you can distinguish them using:
- `Object.is(+0, -0)` returns `false`
- `1 / +0 === Infinity` but `1 / -0 === -Infinity`
- `Math.sign(-0)` returns `-0` (not `-1`), which can be surprising
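These checks can be wrapped into a small detection helper. This is a sketch; `isNegativeZero` and `isNegativeZeroLegacy` are names invented for this example:

```javascript
// ES2015+: Object.is treats +0 and -0 as distinct values.
function isNegativeZero(x) {
  return Object.is(x, -0);
}

// Pre-ES2015 fallback: division by -0 yields -Infinity.
function isNegativeZeroLegacy(x) {
  return x === 0 && 1 / x === -Infinity;
}

console.log(isNegativeZero(-0)); // true
console.log(isNegativeZero(0));  // false
```

The `x === 0` guard in the legacy version filters out non-zero inputs before applying the division trick.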
Why two zeros exist:
The sign-magnitude representation naturally produces two zeros. Rather than adding special hardware logic to collapse them into one, the IEEE 754 committee decided to keep both because they carry useful information about the direction from which a computation approached zero. This is important in complex analysis and certain numerical algorithms where the sign of zero affects branch cuts.
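`Math.atan2` is a concrete case where the sign of zero changes the result, because it sits on the branch cut along the negative real axis:

```javascript
// The sign of a zero y-coordinate selects which side of the
// branch cut atan2 lands on for points on the negative x-axis.
console.log(Math.atan2(0, -1));  // π  (approaching from above)
console.log(Math.atan2(-0, -1)); // -π (approaching from below)
```

If the two zeros were collapsed into one, both calls would have to return the same angle and this directional information would be lost.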
Practical implications:
Most application code can safely treat +0 and -0 as identical. But if you are implementing mathematical libraries, serialization routines, or debugging tools, being aware of signed zeros prevents subtle bugs.
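One common serialization pitfall illustrates this: JSON does not round-trip the sign of zero, while a byte-level copy does. A minimal sketch:

```javascript
// JSON stringifies -0 as "0", silently dropping the sign.
console.log(JSON.stringify(-0));                             // "0"
console.log(Object.is(JSON.parse(JSON.stringify(-0)), -0));  // false

// Copying the raw 8 bytes preserves the bit pattern exactly.
const buf = new ArrayBuffer(8);
new DataView(buf).setFloat64(0, -0);
console.log(Object.is(new DataView(buf).getFloat64(0), -0)); // true
```

Bit-exact formats (or an explicit sign check before encoding) are needed when signed zeros must survive a round trip.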
Use Case
Understanding signed zeros is important when implementing mathematical functions with branch cuts (like atan2), serializing floating-point data where bit-exact roundtripping matters, or debugging unexpected behavior in numerical code.