Amortized Analysis — Average Cost Over a Sequence of Operations

Learn amortized analysis: how to prove that expensive operations are rare enough that the average cost per operation is low. Covers dynamic arrays, hash table resizing, and splay trees.

What Is Amortized Analysis?

Amortized analysis determines the average time per operation over a worst-case sequence of operations. Unlike average-case analysis (which assumes random inputs), amortized analysis guarantees the average even in the worst case.

The Dynamic Array Example

A dynamic array (like std::vector in C++ or ArrayList in Java) doubles its capacity when full. Most push_back operations are O(1), but occasionally one triggers a resize that copies all n elements (O(n)).

Aggregate method: After n pushes, the total resizing work is:

1 + 2 + 4 + 8 + ... + n = 2n - 1

(taking n to be a power of two for simplicity). So the total work for n pushes is n (regular pushes) + 2n - 1 (resizing) = 3n - 1, and the amortized cost per push is (3n - 1)/n < 3, i.e. O(1).
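The aggregate argument can be checked empirically. The sketch below (an illustrative model, not a real vector implementation; the function name `push_costs` is made up for this example) simulates a doubling array and counts one unit of work per push plus one per element copied during a resize:

```python
def push_costs(n):
    """Return (total_cost, capacity) after n pushes.

    Each push costs 1 unit; a push into a full array of capacity k
    first copies all k elements (k extra units), then doubles the
    capacity.
    """
    capacity = 1
    size = 0
    total = 0
    for _ in range(n):
        if size == capacity:
            total += size      # copy every existing element
            capacity *= 2      # double the capacity
        total += 1             # the push itself
        size += 1
    return total, capacity

# The total cost stays below 3n, so the average (amortized) cost
# per push is a constant.
for n in (10, 1000, 10**6):
    total, _ = push_costs(n)
    print(n, total, total / n)
```

For every n the ratio stays under 3, matching the 3n - 1 bound from the aggregate method.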

Three Methods of Amortized Analysis

  • Aggregate: Sum the total cost of n operations and divide by n.
  • Accounting (banker's): Charge cheap operations extra "credits" that are banked to pay for expensive ones later.
  • Potential: Define a potential function Φ on the data structure that tracks its stored "energy"; the amortized cost of an operation is its actual cost plus the change in Φ.
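The accounting method can also be verified by simulation. In the sketch below (a minimal illustration; `accounting_check` is a name invented for this example), each push is charged a flat 3 credits: 1 pays for the push itself and 2 are banked toward future copies. If the bank never goes negative over any sequence, 3 is a valid amortized cost per push:

```python
def accounting_check(n):
    """True if charging 3 credits per push keeps the bank non-negative."""
    bank = 0
    capacity, size = 1, 0
    for _ in range(n):
        bank += 3              # amortized charge for this push
        if size == capacity:
            bank -= size       # pay 1 credit per element copied
            capacity *= 2
        bank -= 1              # pay for the push itself
        if bank < 0:
            return False
        size += 1
    return True

print(all(accounting_check(n) for n in range(1, 2000)))
```

Intuitively, the capacity/2 elements pushed since the last resize each carry 2 spare credits, which is exactly enough to copy all capacity elements at the next resize.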

Real-World Examples

  • Hash table resizing: When the load factor exceeds a threshold, the table doubles and rehashes all entries. Amortized O(1) per insert.
  • Splay trees: Individual operations may be O(n), but any sequence of m operations on an n-node tree takes O(m log n) total — amortized O(log n) per operation.
  • Fibonacci heaps: Decrease-key is O(1) amortized, enabling Dijkstra’s algorithm to run in O(E + V log V).
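The hash table case behaves just like the dynamic array. The sketch below is a minimal chained hash table (illustrative only; the class, the 0.75 threshold, and the `rehash_moves` counter are choices made for this example, not any particular library's design) that doubles its bucket array when the load factor exceeds 0.75 and counts how many entries are moved during rehashes:

```python
class HashTable:
    """Minimal chained hash table that doubles when load factor > 0.75."""

    def __init__(self):
        self.buckets = [[] for _ in range(8)]
        self.size = 0
        self.rehash_moves = 0   # total entries moved during rehashes

    def insert(self, key, value):
        if self.size / len(self.buckets) > 0.75:
            self._resize()
        b = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)   # overwrite existing key
                return
        b.append((key, value))
        self.size += 1

    def get(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)

    def _resize(self):
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]
        for bucket in old:
            for k, v in bucket:
                self.buckets[hash(k) % len(self.buckets)].append((k, v))
                self.rehash_moves += 1

t = HashTable()
for i in range(10_000):
    t.insert(i, i * i)
print(t.rehash_moves / 10_000)   # stays below a small constant
```

Because each rehash roughly doubles in size, the total rehash work forms a geometric series bounded by a constant multiple of the number of inserts, which is exactly the amortized O(1) claim.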

Why It Matters

Amortized analysis gives a more realistic picture of performance than worst-case analysis for data structures with occasional expensive operations. It’s the reason we can say “hash table insert is O(1)” even though resizing is O(n).

Use Case

Amortized analysis is essential for understanding the true performance of dynamic data structures like vectors, hash maps, and self-balancing trees. It is a common topic in algorithm design courses and technical interviews.
