IEEE 754 Inspector
Inspect and visualize IEEE 754 floating-point representations with interactive bit manipulation.
About This Tool
The IEEE 754 Inspector lets you explore how computers store decimal numbers as binary floating-point values. Enter any number and instantly see its 32-bit (single precision) and 64-bit (double precision) IEEE 754 representation, broken down into the sign bit, exponent field, and mantissa (significand) field.
The interactive bit display is color-coded: the sign bit appears in red, exponent bits in blue, and mantissa bits in green. You can click any individual bit to toggle it and watch the resulting decimal value change in real time. This makes it easy to understand how each bit contributes to the final number and to explore edge cases like denormalized numbers, positive and negative zero, infinity, and NaN.
All processing happens entirely in your browser using JavaScript's DataView and ArrayBuffer APIs. No data is sent to any server, which makes the tool safe for classroom demonstrations, technical interviews, or debugging sessions where you need to verify exactly how a particular floating-point value is stored.
The comparison mode helps you understand classic floating-point surprises, such as why 0.1 + 0.2 does not equal 0.3: it shows the exact stored representation of each operand and the accumulated rounding error. The precision analyzer displays the unit in the last place (ULP) and the representable range at the current magnitude, giving you a concrete sense of float64 accuracy.
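A ULP at a given magnitude can be computed by nudging the stored bit pattern up by one and taking the difference. This is a rough sketch of the idea behind the precision analyzer, not its actual implementation; `ulp` is an illustrative name and handles positive finite inputs only:

```javascript
// ULP of a positive finite float64: distance to the next representable value.
function ulp(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  view.setBigUint64(0, view.getBigUint64(0) + 1n);  // next float64 up
  return view.getFloat64(0) - x;
}

ulp(1);     // 2.220446049250313e-16, i.e. machine epsilon
ulp(1e16);  // 2 — above 2^53, odd integers are no longer representable
```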
If you work with low-level binary data, the Hex Editor lets you inspect raw bytes, while the Number Base Converter handles arbitrary base conversions. For bitwise operation exploration, see the Text to Binary tool.
How to Use
- Type a decimal number into the input field (e.g. 0.1, -3.14, Infinity, NaN).
- View the 32-bit and 64-bit IEEE 754 representations with color-coded bit fields.
- Click any individual bit in the bit display to toggle it and see the resulting number change.
- Use the quick-access buttons to load special values like +0, -0, Infinity, NaN, or Max Safe Int.
- Switch to the Compare tab to enter two values and see why their sum may differ from the expected result.
- Review the precision analyzer section for ULP, representable range, and machine epsilon at the current magnitude.
- Click the Copy button (or press Ctrl+Shift+C) to copy the full IEEE 754 breakdown to your clipboard.
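Under the hood, toggling a bit amounts to XOR-ing the stored 64-bit pattern with a mask and reinterpreting the result. A minimal sketch of that idea (`toggleBit` is an illustrative helper, not the tool's API):

```javascript
// Flip one bit of a float64's stored pattern and reinterpret as a number.
// bitIndex 63 is the sign bit, 62-52 the exponent, 51-0 the mantissa.
function toggleBit(x, bitIndex) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  view.setBigUint64(0, view.getBigUint64(0) ^ (1n << BigInt(bitIndex)));
  return view.getFloat64(0);
}

toggleBit(3.14, 63);  // -3.14 — sign bit flipped
toggleBit(1, 51);     // 1.5 — top mantissa bit adds 0.5 to the significand
```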
Popular IEEE 754 Examples
FAQ
Is my data safe?
Yes. All processing happens in your browser using JavaScript's DataView and ArrayBuffer APIs. No data is transmitted to any server, and nothing is stored or logged.
What is IEEE 754?
IEEE 754 is the international standard for floating-point arithmetic used by virtually all modern CPUs and programming languages. It defines how decimal numbers are encoded as binary, including formats for 32-bit (single precision) and 64-bit (double precision) floats.
Why does 0.1 + 0.2 not equal 0.3?
The values 0.1 and 0.2 cannot be represented exactly in binary floating-point. Each is rounded to the nearest representable value, and when added, the accumulated rounding error produces 0.30000000000000004 instead of exactly 0.3. Use the Compare tab to see this in detail.
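You can reproduce what the Compare tab makes visible with `toFixed`, which prints the exact decimal expansion of the stored doubles:

```javascript
// The stored values are slightly above 0.1 and 0.2; the errors accumulate.
console.log((0.1).toFixed(20));        // 0.10000000000000000555
console.log((0.2).toFixed(20));        // 0.20000000000000001110
console.log(0.1 + 0.2 === 0.3);        // false
console.log((0.1 + 0.2).toFixed(17));  // 0.30000000000000004
```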
What is a denormalized (subnormal) number?
A denormalized number has an exponent field of all zeros and a non-zero mantissa. These numbers fill the gap between zero and the smallest normalized float, allowing gradual underflow instead of an abrupt jump to zero. They sacrifice precision for range near zero.
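The subnormal range is easy to probe from the console. Number.MIN_VALUE (5e-324) is the smallest subnormal: an all-zero exponent field with a mantissa of 1:

```javascript
const smallestNormal = 2 ** -1022;           // smallest normalized float64
const smallestSubnormal = Number.MIN_VALUE;  // 5e-324 = 2 ** -1074

console.log(smallestSubnormal === 2 ** -1074);  // true
console.log(smallestNormal / 2 > 0);            // true — gradual underflow
console.log(smallestSubnormal / 2);             // 0 — nothing below it
```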
How do I toggle bits?
Click any bit in the visual bit display. The sign bit is red, exponent bits are blue, and mantissa bits are green. When you toggle a bit, the decimal value updates instantly so you can see exactly how that bit affects the result.
What is the difference between float32 and float64?
Float32 (single precision) uses 1 sign bit, 8 exponent bits, and 23 mantissa bits, giving roughly 7 decimal digits of precision. Float64 (double precision) uses 1 sign bit, 11 exponent bits, and 52 mantissa bits, giving roughly 15-16 decimal digits of precision. JavaScript numbers are always float64.
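JavaScript exposes float32 rounding directly via Math.fround, which is how the tool's 32-bit row can differ from the 64-bit one:

```javascript
// Math.fround rounds a float64 to the nearest float32.
console.log(Math.fround(0.1));          // 0.10000000149011612
console.log(Math.fround(0.1) === 0.1);  // false — float32 keeps only
                                        // 24 significant bits
console.log(Math.fround(1 / 3));        // 0.3333333432674408
```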
What is machine epsilon?
Machine epsilon is the smallest value that, when added to 1.0, produces a result different from 1.0 in floating-point arithmetic. For float64 it is approximately 2.22e-16. The precision analyzer shows both machine epsilon and the ULP (unit in the last place) at the current value's magnitude.
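The definition can be checked directly with Number.EPSILON, the float64 machine epsilon (2^-52):

```javascript
console.log(Number.EPSILON);                // 2.220446049250313e-16
console.log(1 + Number.EPSILON > 1);        // true — smallest visible step
console.log(1 + Number.EPSILON / 2 === 1);  // true — half a step rounds
                                            // back to 1
```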
Related Tools
Bitwise Calculator
Perform AND, OR, XOR, NOT, and shift operations with visual binary representation and step-by-step breakdowns.
Number Base Converter
Convert numbers between binary, octal, decimal, hexadecimal, and custom bases with bit visualization.
Hex Editor
View and edit files or text as hexadecimal. Hex dump with offset, hex, and ASCII columns.
Big-O Reference
Interactive Big-O complexity reference with growth charts, algorithm database, and comparison tools. Visualize O(1) to O(n!) growth curves.