Convert Binary to Decimal

Learn how to convert binary numbers to decimal step by step, using positional notation and powers of two, with worked examples.


Detailed Explanation

Converting binary to decimal is one of the most fundamental operations in computer science and digital electronics. Binary uses only two digits, 0 and 1, while decimal uses ten digits, 0 through 9.

How the conversion works:

Each digit in a binary number represents a power of 2, starting from the rightmost digit at position 0. To convert, multiply each binary digit by its corresponding power of 2, then sum all the results.

Step-by-step example — converting 11010 to decimal:

  1. Label each digit with its position, counting from 0 at the right: 1(pos 4) 1(pos 3) 0(pos 2) 1(pos 1) 0(pos 0)
  2. Calculate powers of 2: 1×2⁴ + 1×2³ + 0×2² + 1×2¹ + 0×2⁰
  3. Evaluate: 16 + 8 + 0 + 2 + 0 = 26

So 11010₂ = 26₁₀.
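The steps above translate directly into code. A minimal Python sketch of the positional method (the function name is illustrative):

```python
def binary_to_decimal(binary_str: str) -> int:
    """Convert a binary digit string to its decimal value using positional weights."""
    total = 0
    # enumerate from the rightmost digit, whose position is 0
    for position, digit in enumerate(reversed(binary_str)):
        # each digit contributes digit × 2^position
        total += int(digit) * (2 ** position)
    return total

print(binary_to_decimal("11010"))  # → 26
```

Walking through "11010", the loop accumulates 0 + 2 + 0 + 8 + 16 = 26, matching the hand calculation.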

Key concepts to remember:

  • The rightmost bit is called the least significant bit (LSB) with a weight of 2⁰ = 1.
  • The leftmost bit is the most significant bit (MSB) with the highest weight.
  • Each position doubles in value from right to left: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and so on.
  • Memorizing the first 10 powers of 2 makes mental conversion much faster.
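The doubling pattern of positional weights is easy to generate, which also makes the first powers of 2 easy to check:

```python
# Positional weights double from right to left: 2^0, 2^1, 2^2, ...
weights = [2 ** p for p in range(11)]
print(weights)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```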

Common binary values: 0000 = 0, 0001 = 1, 0010 = 2, 0100 = 4, 1000 = 8, 1111 = 15, 11111111 = 255. The maximum value of an 8-bit unsigned integer is 255, which is why this number appears frequently in networking and color codes. Understanding binary-to-decimal conversion is essential for working with memory addresses, bitwise operations, and low-level programming.
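In practice, most languages provide a built-in parser for base-2 strings. In Python, `int(s, 2)` reproduces the table of common values above:

```python
# int(s, 2) parses a string as a base-2 number
for bits in ["0000", "0001", "0010", "0100", "1000", "1111", "11111111"]:
    print(f"{bits} = {int(bits, 2)}")

# The maximum n-bit unsigned value is 2^n - 1
print(2 ** 8 - 1)  # → 255
```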

Use Case

Developers use binary-to-decimal conversion daily when reading register values in embedded systems or interpreting bitfield flags in protocol specifications.
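As a sketch of the bitfield use case, here is how individual flags in a status value can be tested with bitwise AND. The flag names and layout are purely hypothetical, invented for illustration rather than taken from any real protocol:

```python
# Hypothetical flag layout for illustration only
FLAG_READY   = 0b0001  # bit 0, weight 1
FLAG_ERROR   = 0b0010  # bit 1, weight 2
FLAG_TIMEOUT = 0b0100  # bit 2, weight 4

status = 0b0101  # decimal 5: READY and TIMEOUT bits set

# A bitwise AND isolates one bit; a nonzero result means the flag is set
print(bool(status & FLAG_READY))    # → True
print(bool(status & FLAG_ERROR))    # → False
print(bool(status & FLAG_TIMEOUT))  # → True
```

Reading 0b0101 as decimal 5 (4 + 1) is exactly the conversion described above, applied to a register value.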
