IEEE 754 Double Precision Format

Explore the 64-bit double precision IEEE 754 format used by JavaScript, Python, and most modern languages. 1 sign bit, 11 exponent bits, 52 mantissa bits.

Fundamentals

Decimal Value:  3.141592653589793
Float32 Hex:    0x40490FDB
Float64 Hex:    0x400921FB54442D18

Detailed Explanation

IEEE 754 double precision, known as double in C/C++/Java and the default number type in JavaScript and Python, uses 64 bits to represent floating-point values with much greater range and precision than single precision.

The 64-bit layout:

  Field      Bits      Position
  Sign       1 bit     Bit 63
  Exponent   11 bits   Bits 62-52
  Mantissa   52 bits   Bits 51-0
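The layout above can be inspected directly in Python, whose float is a 64-bit IEEE 754 double. A minimal sketch using the standard struct module (the helper name decompose is illustrative, not a library function):

```python
import struct

def decompose(x: float):
    """Split a Python float (IEEE 754 double) into its three bit fields."""
    # '>d' packs a big-endian 64-bit double; '>Q' reads the same
    # 8 bytes back as an unsigned 64-bit integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63                    # bit 63
    exponent = (bits >> 52) & 0x7FF      # bits 62-52 (biased)
    mantissa = bits & ((1 << 52) - 1)    # bits 51-0
    return sign, exponent, mantissa

# 1.0 = (+1) * 1.0 * 2^0, so the biased exponent is 0 + 1023 = 1023
print(decompose(1.0))   # (0, 1023, 0)
```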

How Pi (3.14159...) is stored:

The mathematical constant Pi cannot be represented exactly in any finite binary format. The closest double-precision representation is:

  1. Sign = 0 (positive)
  2. Biased exponent = 1024 (actual exponent = 1024 - 1023 = 1)
  3. Mantissa = the 52-bit fraction closest to Pi/2 - 1 (the implicit leading 1 makes the significand 1.fraction, and 1.fraction × 2^1 ≈ Pi)

The stored value is 3.141592653589793, which differs from true Pi by about 1.22e-16. The hex representation is 0x400921FB54442D18.
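The decomposition of Pi can be checked from Python's math and struct modules; a short sketch that recovers the hex value and reconstructs the number from its fields:

```python
import math
import struct

# Reinterpret the 8 bytes of pi as a 64-bit unsigned integer.
bits = struct.unpack('>Q', struct.pack('>d', math.pi))[0]
print(hex(bits))                      # 0x400921fb54442d18

sign = bits >> 63                     # 0 -> positive
biased_exp = (bits >> 52) & 0x7FF     # 1024
exponent = biased_exp - 1023          # 1
mantissa = bits & ((1 << 52) - 1)

# Reconstruct the value: (1 + mantissa/2^52) * 2^exponent.
# Every step here is exact in double precision, so the round trip
# recovers math.pi bit-for-bit.
value = (1 + mantissa / 2**52) * 2**exponent
print(value == math.pi)               # True
```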

Why double precision matters:

  • 15-16 decimal digits of precision (vs. 7 for float32)
  • Exponent range from -1022 to +1023 (vs. -126 to +127)
  • Largest value approximately 1.80e+308 (vs. 3.40e+38)
  • Smallest normalized approximately 2.23e-308
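The precision gap in the first bullet is easy to see by round-tripping Pi through a 32-bit float. A sketch using struct's 'f' (float32) format:

```python
import math
import struct

# Round-trip pi through a 32-bit float: pack as float32, unpack back
# to a Python double. The extra digits are lost in the packing step.
pi32 = struct.unpack('f', struct.pack('f', math.pi))[0]

print(math.pi)               # 3.141592653589793
print(pi32)                  # 3.1415927410125732
# float32 error (~8.7e-8) dwarfs the double error (~1.2e-16)
print(abs(pi32 - math.pi))
```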

JavaScript and double precision:

All numbers in JavaScript are IEEE 754 double precision. There is no native 32-bit float type (though Float32Array provides typed array access). This means expressions like 0.1 + 0.2 use 64-bit arithmetic, and the maximum safe integer (Number.MAX_SAFE_INTEGER) is 2^53 - 1 = 9007199254740991.
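Python floats are the same IEEE 754 doubles as JavaScript numbers, so both effects mentioned above can be reproduced there:

```python
# Classic double-precision rounding: neither 0.1 nor 0.2 is exactly
# representable in binary, so their sum is not exactly 0.3.
print(0.1 + 0.2)                 # 0.30000000000000004
print(0.1 + 0.2 == 0.3)          # False

# Integers are exact only up to 2^53 - 1 (Number.MAX_SAFE_INTEGER).
max_safe = 2**53 - 1             # 9007199254740991
# Beyond 2^53, consecutive integers collapse to the same double:
print(float(2**53) == float(2**53 + 1))   # True
```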

Memory trade-off:

Double precision uses 8 bytes per value compared to 4 bytes for single precision. In applications processing millions of values (scientific simulation, financial modeling, data analytics), this doubled memory footprint can be significant.
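The 2x footprint can be measured with the standard array module, whose 'd' and 'f' type codes map to C double and C float (8 and 4 bytes on common platforms):

```python
from array import array

values = range(1_000_000)
doubles = array('d', values)   # 'd' = C double, 8 bytes per element
singles = array('f', values)   # 'f' = C float, 4 bytes per element

print(doubles.itemsize, singles.itemsize)   # 8 4
# Raw storage for a million values: ~8 MB vs ~4 MB.
print(doubles.itemsize * len(doubles))      # 8000000
print(singles.itemsize * len(singles))      # 4000000
```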

Use Case

Double precision is the default for scientific computing, financial calculations, web applications (JavaScript), data science (Python), and any domain where float32's roughly 7 decimal digits of precision are insufficient. It is the standard number type in most high-level languages.
