Amortized Analysis
Amortized analysis averages the total cost of a sequence of operations across every operation in the sequence, so occasional expensive operations are charged against the many cheap ones — giving a more accurate per-operation cost than the single-operation worst case.
Amortized = paying rent. You pay one big security deposit (resize) then cheap monthly rent (appends). Averaged out, each month costs almost nothing. One expensive day doesn't mean every day is expensive.
Dynamic array (ArrayList/Python list) append: most appends are O(1), but when the array resizes it copies all elements. Amortized over n appends, each append is still O(1), because with capacity doubling the total copy work is bounded by 2n. Hash table operations are O(1) amortized — the occasional rehash at the load-factor threshold is spread across all the inserts that preceded it. Stack with multipop: a single multipop call might pop n items, but each item is popped at most once per push, so over any sequence of n operations the cost is O(1) per operation.
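The multipop example can be sketched directly — a minimal stack (a hypothetical `MultipopStack` class, not a standard library type) that counts its own work and shows the 2n bound:

```python
# Sketch of the multipop stack example: each element is pushed once and
# popped at most once, so total work over n pushes is at most 2n,
# i.e. O(1) amortized per operation — even though one multipop is O(n).
class MultipopStack:
    def __init__(self):
        self._items = []
        self.total_work = 0  # counts individual push/pop steps

    def push(self, x):
        self._items.append(x)
        self.total_work += 1

    def multipop(self, k):
        # Pops up to k items; a single call may cost O(n).
        popped = []
        while self._items and len(popped) < k:
            popped.append(self._items.pop())
            self.total_work += 1
        return popped

s = MultipopStack()
for i in range(1000):
    s.push(i)
s.multipop(1000)                 # one expensive O(n) call...
assert s.total_work <= 2 * 1000  # ...but total work stays under 2n
```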
The potential method formalizes amortized analysis: define a potential function Φ that represents 'stored work', then amortized cost = actual cost + ΔΦ. For dynamic arrays, Φ = 2·size − capacity works — just before a resize, the potential is high enough to pay for the copy. The accounting method is the same idea stated differently: cheap operations are overcharged with 'credits' that pay for future expensive ones. Understanding amortized analysis is what lets you argue that lazy deletion in hash maps and tree rebalancing are efficient over time.
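The potential-method claim is easy to check numerically. A sketch, counting one unit of actual cost per element written or copied: with Φ = 2·size − capacity, the amortized cost of every append comes out to the same constant.

```python
# Verify the potential method for a doubling array: with
# phi = 2*size - capacity, amortized cost = actual cost + delta(phi)
# is constant for every append, resize or not.
def appends_with_potential(n):
    size, capacity = 0, 1
    phi = 2 * size - capacity
    amortized_costs = []
    for _ in range(n):
        if size == capacity:       # resize: copy all elements, then insert
            actual = size + 1
            capacity *= 2
        else:                      # cheap append: one write
            actual = 1
        size += 1
        new_phi = 2 * size - capacity
        amortized_costs.append(actual + (new_phi - phi))
        phi = new_phi
    return amortized_costs

costs = appends_with_potential(100)
assert all(c == 3 for c in costs)  # amortized cost is exactly 3 per append
```

The constant 3 is exactly what the hand analysis predicts: a cheap append pays 1 for the write plus 2 into the potential; a resizing append's copy cost is cancelled by the potential it drains.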
When I say 'O(1) amortized' — like for array append or hash insert — I mean the average cost per operation over any sequence of operations is O(1), even if individual operations can be costly. The classic example is dynamic arrays: doubling the capacity means the total copy work is bounded by 2n over n appends, so amortized O(1) per append. I'd contrast this with 'worst case O(1)' which would mean every single call is O(1).
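A quick sketch of that 2n bound: simulate capacity doubling and count how many element copies the resizes perform in total over n appends.

```python
# Count total copy work for a doubling dynamic array: copies across
# all resizes sum to less than 2n, so the average per append is O(1).
def total_copies(n):
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size   # resize copies every existing element
            capacity *= 2
        size += 1
    return copies

for n in (10, 1000, 10**6):
    assert total_copies(n) < 2 * n
```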
Amortized O(1) does NOT mean every operation is O(1). During n appends to a Python list you hit a series of resizes, and the largest single one copies O(n) elements in one call. The amortized bound holds over any sequence of operations, but it says nothing about the cost of an individual call — so if your use case needs every single operation to be fast (real-time or latency-sensitive code), amortized analysis is the wrong tool.
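Both halves of that caveat show up in one sketch: record the cost of each individual append under doubling, and the maximum spikes to O(n) while the running average stays constant.

```python
# Per-operation cost under doubling (cost model: one unit per element
# written or copied). Individual appends spike to O(n) at each resize,
# but the average over the whole sequence stays constant.
def per_append_costs(n):
    capacity, size = 1, 0
    costs = []
    for _ in range(n):
        if size == capacity:
            costs.append(size + 1)  # copy everything, then write new item
            capacity *= 2
        else:
            costs.append(1)         # cheap append: one write
        size += 1
    return costs

costs = per_append_costs(1024)
assert max(costs) == 513             # one append copied 512 elements
assert sum(costs) / len(costs) < 3   # but the average stays O(1)
```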