Logarithmic Time O(log n) — The Power of Divide and Conquer

Learn how O(log n) algorithms such as binary search achieve their efficiency by halving the problem at each step, with comparison tables and real-world examples.

What Is O(log n) Logarithmic Time?

An algorithm runs in O(log n) time when it reduces the problem size by a constant fraction (usually half) at each step. The most famous example is binary search, which eliminates half the remaining elements with every comparison.
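The halving step is easiest to see in code. Here is a minimal sketch of iterative binary search (function and variable names are illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # check the middle element
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1           # discard the left half
        else:
            hi = mid - 1           # discard the right half
    return -1
```

Each iteration discards half of the remaining range, so at most ⌊log₂ n⌋ + 1 comparisons are needed.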

How Fast Does log n Grow?

n                  log₂ n
10                 ≈ 3.3
100                ≈ 6.6
1,000              ≈ 10
1,000,000          ≈ 20
1,000,000,000      ≈ 30

Even for a billion elements, you need only about 30 steps. This is remarkably efficient.
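You can verify these values yourself with the standard-library `math` module (a quick illustrative check):

```python
import math

# Print log base 2 for the sizes in the table above.
for n in [10, 100, 1_000, 1_000_000, 1_000_000_000]:
    print(f"{n:>13,}  log2 = {math.log2(n):.1f}")
```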

Classic O(log n) Algorithms

  1. Binary search — Search a sorted array by checking the middle element and discarding half.
  2. Balanced BST operations — Lookup, insert, delete in AVL trees or Red-Black trees.
  3. Exponentiation by squaring — Compute x^n in O(log n) multiplications instead of n.
  4. Binary lifting — Find ancestors in a tree in O(log n) per query.
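As an illustration of item 3, exponentiation by squaring halves the exponent at each step. A minimal sketch (Python's built-in `pow` already uses this technique):

```python
def fast_pow(x, n):
    """Compute x**n for non-negative integer n in O(log n) multiplications."""
    result = 1
    while n > 0:
        if n & 1:            # if the lowest bit of n is set,
            result *= x      # fold in the current power of x
        x *= x               # square: x, x^2, x^4, x^8, ...
        n >>= 1              # halve the exponent
    return result
```

The loop runs once per bit of n, so computing x^1000000 takes about 20 iterations rather than a million multiplications.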

The Divide-and-Conquer Pattern

O(log n) algorithms share a common pattern: at each step, they do a constant amount of work and then recurse on a subproblem that is a constant fraction of the size. The recurrence is:

T(n) = T(n/2) + O(1)

By the Master Theorem (with a = 1, b = 2, and f(n) = O(1)), this solves to T(n) = O(log n).
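The recurrence can also be observed empirically by counting how many halvings it takes to reduce n to 1 (a small illustrative sketch):

```python
def halving_steps(n):
    """Count how many times n can be halved (integer division) before reaching 1."""
    steps = 0
    while n > 1:
        n //= 2       # recurse on the T(n/2) subproblem
        steps += 1    # the O(1) work done at this level
    return steps
```

For n = 1,024 this returns 10, matching log₂ 1024; for any n it returns ⌊log₂ n⌋.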

Prerequisites

Most O(log n) search algorithms require sorted or structured input. If your data is unsorted, you’ll need to sort it first (O(n log n)), which may or may not be worthwhile depending on how many queries you need to make.
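A common pattern is to pay the O(n log n) sorting cost once and then answer many O(log n) queries. Python's standard-library `bisect` module provides the binary search; the data here is illustrative:

```python
import bisect

data = [42, 7, 19, 3, 88, 55, 21]
data.sort()                      # one-time O(n log n) cost

def contains(sorted_arr, target):
    """O(log n) membership test on a sorted list."""
    i = bisect.bisect_left(sorted_arr, target)
    return i < len(sorted_arr) and sorted_arr[i] == target

print(contains(data, 19))   # each query after the sort is O(log n)
print(contains(data, 20))
```

With q queries, sorting first costs O((n + q) log n), versus O(q · n) for repeated linear scans, so the sort pays off quickly as q grows.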

Use Case

Logarithmic algorithms are critical for database indexing (B-trees), binary search in sorted datasets, and any scenario where you need to answer many queries on static or semi-static data efficiently.
