Linear Time O(n) — Processing Every Element Once

Understand O(n) linear time complexity. Learn when and why algorithms must examine every element, with examples including linear search, counting sort, and array traversal.

What Is O(n) Linear Time?

An algorithm is O(n) when the work it does grows in direct proportion to the input size. If you double the input, the runtime roughly doubles. This is the complexity of any algorithm that must look at every element at least once.
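The "doubling" behavior can be made concrete by counting operations explicitly. This is an illustrative sketch (the function name and the operation counter are ours, not from a library):

```python
def count_operations(values):
    """Sum a list, counting one unit of work per element (illustrative)."""
    ops = 0
    total = 0
    for v in values:
        total += v
        ops += 1          # exactly one operation per element
    return total, ops

# Doubling the input doubles the operation count:
_, ops_small = count_operations(list(range(1000)))
_, ops_large = count_operations(list(range(2000)))
```

Here `ops_large` is exactly twice `ops_small`, which is what "linear in n" means in practice.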

Why O(n) Is Often Optimal

Many problems have an information-theoretic lower bound of O(n) — you cannot solve them without examining every element. Examples:

  • Finding the maximum in an unsorted array: you must check every element.
  • Summing all elements: every value contributes to the result.
  • Checking if an element exists (unsorted): in the worst case, it’s the last one.
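The first case above can be sketched directly; the comment marks why no element can be skipped (the function name is ours):

```python
def find_max(values):
    """Return the maximum of an unsorted list in one O(n) pass."""
    if not values:
        raise ValueError("empty input")
    best = values[0]
    for v in values[1:]:   # every element must be inspected:
        if v > best:       # skipping any one could miss the true maximum
            best = v
    return best
```

Any algorithm that ignores even a single element can be fooled by an adversarial input that hides the maximum in exactly that position, which is the informal argument for the O(n) lower bound.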

Common O(n) Algorithms

  • Linear search: scan each element sequentially.
  • Counting sort: count occurrences of each value (when the value range k is small).
  • Two-pointer technique: e.g., finding pairs that sum to a target in a sorted array.
  • Kadane’s algorithm: maximum subarray sum in one pass.
  • Bucket sort: distribute elements into buckets (assumes a roughly uniform distribution).
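As one worked example from the list, here is a minimal sketch of Kadane's algorithm, which finds the maximum contiguous subarray sum in a single pass:

```python
def max_subarray_sum(values):
    """Kadane's algorithm: maximum contiguous subarray sum, O(n)."""
    best = current = values[0]
    for v in values[1:]:
        current = max(v, current + v)  # extend the current run, or restart at v
        best = max(best, current)      # track the best run seen so far
    return best
```

Each element is visited once and the decision at each step is constant-time, so the whole algorithm is linear.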

Single-Pass vs Multi-Pass

An algorithm that makes two passes over the data is still O(n): it performs roughly 2n operations, but constant factors are dropped in Big-O notation. What matters is that the number of passes is a fixed constant, not a quantity that grows with n.
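A typical two-pass example is rescaling values by their maximum. This sketch assumes strictly positive inputs (the function name is ours):

```python
def normalize(values):
    """Scale positive values into [0, 1] using two passes; still O(n)."""
    peak = max(values)                  # pass 1: O(n) scan to find the maximum
    return [v / peak for v in values]   # pass 2: O(n) scan to rescale
```

Two O(n) passes give O(2n) work, which Big-O treats as O(n); the pass count would only matter asymptotically if it depended on n.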

Linear Time Is Fast

O(n) is generally considered efficient for most practical purposes. Processing 10 million records in O(n) takes ~10M operations, which modern hardware handles in milliseconds.
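A quick way to sanity-check this claim is to time a single linear pass over 10 million values. Actual timings depend on hardware and on the cost of the per-element work, so treat the comment as a rough expectation rather than a guarantee:

```python
import time

n = 10_000_000
data = list(range(n))

start = time.perf_counter()
total = sum(data)                       # one O(n) pass over 10M elements
elapsed = time.perf_counter() - start
# On typical modern hardware this completes in well under a second.
```

Per-element work that is heavier than a simple addition (parsing, I/O, allocation) raises the constant factor, but the growth rate stays linear.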

Use Case

Linear algorithms are the backbone of data processing pipelines, ETL jobs, stream processing, and any scenario where you must inspect or transform every record in a dataset.
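A pipeline of this kind can be sketched with generators, where each stage streams records through in a single combined O(n) pass. The stage names and the comma-separated record format here are hypothetical:

```python
def parse(lines):
    """Stage 1: turn each raw line into a (name, value) record."""
    for line in lines:
        name, value = line.split(",")
        yield name.strip(), int(value)

def keep_positive(records):
    """Stage 2: filter; each record is examined exactly once."""
    for name, value in records:
        if value > 0:
            yield name, value

raw = ["a, 3", "b, -1", "c, 5"]
result = list(keep_positive(parse(raw)))  # records stream through both stages
```

Because the stages are lazy, chaining them does not multiply the passes: every record flows through the whole pipeline once, so the total work remains linear in the number of records.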
