Two's Complement Representation
Learn how computers represent negative integers using two's complement. Understand the relationship between NOT, addition, and signed numbers.
Detailed Explanation
Two's Complement: How Computers Store Negative Numbers
Two's complement is the standard method for representing signed integers in virtually all modern computers. It elegantly allows the same addition circuitry to work for both positive and negative numbers.
How to Compute Two's Complement
To negate a number in two's complement:
- Invert all bits (bitwise NOT)
- Add 1
```
 42 = 00101010
~42 = 11010101   (step 1: flip all bits)
    +        1   (step 2: add 1)
      ────────
-42 = 11010110
```
Verification
42 + (-42) should equal 0:
```
  00101010
+ 11010110
──────────
 100000000   (9 bits; the carry-out is discarded in 8-bit arithmetic)
= 00000000   (correct!)
```
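Here is a minimal C sketch of both the negation recipe and the verification above. The function name negate8 is illustrative; uint8_t arithmetic wraps modulo 256, which models the discarded carry-out of an 8-bit adder.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Negate an 8-bit value the two's-complement way: flip all bits, add 1. */
static uint8_t negate8(uint8_t x) {
    return (uint8_t)(~x + 1);
}

int main(void) {
    uint8_t a = 42;              /* 00101010 */
    uint8_t neg_a = negate8(a);  /* 11010110 = 214 unsigned, -42 signed */

    printf("-42 as an 8-bit pattern: 0x%02X\n", neg_a);  /* 0xD6 */
    assert((uint8_t)(a + neg_a) == 0);                   /* 42 + (-42) == 0 */
    return 0;
}
```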
Value Ranges
For an N-bit two's complement integer:
- Minimum: -2^(N-1)
- Maximum: 2^(N-1) - 1
| Width | Min | Max |
|---|---|---|
| 8-bit | -128 | 127 |
| 16-bit | -32768 | 32767 |
| 32-bit | -2^31 | 2^31-1 |
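Note that the range is asymmetric: the minimum has no positive counterpart, which is why negating the minimum value overflows. A small C sketch that computes these bounds from the formulas above (the helper names twos_min and twos_max are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Range of an N-bit two's complement integer, from the formulas above.
   Computed in 64-bit arithmetic, so valid for 1 <= n <= 63. */
static int64_t twos_min(int n) { return -(INT64_C(1) << (n - 1)); }
static int64_t twos_max(int n) { return (INT64_C(1) << (n - 1)) - 1; }

int main(void) {
    int widths[] = {8, 16, 32};
    for (int i = 0; i < 3; i++) {
        int n = widths[i];
        printf("%2d-bit: min = %lld, max = %lld\n",
               n, (long long)twos_min(n), (long long)twos_max(n));
    }
    return 0;
}
```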
The NOT Shortcut
Since two's complement negation is -A = ~A + 1, rearranging gives ~A = -A - 1, which is the identity:

~A = -(A + 1)

This means ~0 = -1, ~1 = -2, ~(-1) = 0, etc.
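A quick check of the identity in C, assuming a two's complement int (guaranteed as of C23, and true on essentially all real hardware):

```c
#include <assert.h>

int main(void) {
    /* ~a == -(a + 1) holds for every two's complement integer a. */
    for (int a = -1000; a <= 1000; a++) {
        assert(~a == -(a + 1));
    }
    assert(~0 == -1);
    assert(~(-1) == 0);
    return 0;
}
```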
Why Two's Complement?
- Zero is unique: Only one representation of zero (unlike one's complement which has +0 and -0).
- Addition just works: The CPU doesn't need separate circuits for signed and unsigned addition.
- Overflow detection: If the carry into the sign bit differs from the carry out, overflow occurred.
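The carry flags aren't visible from C, but the same rule can be stated with sign bits: adding two operands of the same sign overflows exactly when the result's sign differs from theirs. A sketch under that formulation (the helper name add_overflows is illustrative; the addition is done in unsigned arithmetic to avoid undefined behavior on signed overflow):

```c
#include <stdbool.h>
#include <stdint.h>

/* Signed 32-bit addition overflow test. (a ^ sum) has its top bit set when
   a's sign differs from the sum's sign, likewise (b ^ sum); overflow occurred
   iff BOTH operands disagree with the result's sign. */
static bool add_overflows(int32_t a, int32_t b) {
    uint32_t sum = (uint32_t)a + (uint32_t)b;   /* wraps, no UB */
    return (((uint32_t)a ^ sum) & ((uint32_t)b ^ sum)) >> 31;
}
```

For example, add_overflows(INT32_MAX, 1) returns true, while add_overflows(INT32_MAX, -1) returns false. In production code, GCC and Clang offer __builtin_add_overflow, which does the same job.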
Sign Extension
When widening a two's complement number (e.g., 8-bit to 16-bit), copy the sign bit into all new high bits:
```
-42 (8-bit):           11010110
-42 (16-bit): 11111111 11010110
```
An arithmetic right shift works on the same principle: it shifts in copies of the sign bit rather than zeros. (In C, >> on a negative signed value is implementation-defined, but virtually every compiler implements it as an arithmetic shift.)
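A sketch of manual 8-to-16-bit sign extension, next to the cast that lets the compiler do it (widen16 is an illustrative name):

```c
#include <stdint.h>

/* Manually widen an 8-bit two's complement pattern to 16 bits:
   if the sign bit (0x80) is set, fill the new high byte with ones. */
static uint16_t widen16(uint8_t narrow) {
    uint16_t wide = narrow;
    if (narrow & 0x80) {
        wide |= 0xFF00;   /* copy the sign bit into all new high bits */
    }
    return wide;
}
```

For instance, widen16(0xD6) yields 0xFFD6, which read as int16_t is -42. The one-line equivalent is (int16_t)(int8_t)x: converting through the signed 8-bit type makes the compiler emit the sign extension for you.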
Use Case
Compiler developers must understand two's complement to correctly implement integer arithmetic, overflow detection, and type casting. When a C program casts a signed 8-bit value (-42) to a 32-bit integer, the compiler emits a sign-extension instruction that copies the sign bit into the upper 24 bits. Getting this wrong produces subtle bugs where negative values become large positive numbers.
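That bug class is easy to reproduce in C: widening through an unsigned type zero-extends instead of sign-extending. A small illustration:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t v = -42;                      /* bit pattern 11010110 */

    int32_t good = (int32_t)v;           /* sign-extended: upper 24 bits are 1 */
    int32_t bad  = (int32_t)(uint8_t)v;  /* zero-extended: upper 24 bits are 0 */

    printf("sign-extended: %d\n", good); /* -42 */
    printf("zero-extended: %d\n", bad);  /* 214: negative became large positive */
    return 0;
}
```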