Memory & Context

Context Compression & Compaction

When a Claude Code session approaches the context limit, the harness automatically compresses old turns into summaries to free space. The model continues with a condensed history plus the recent turns intact.

Memory anchor

Compression is like a friend summarizing a meeting because the recording cut off — they got the gist, but specific names and numbers are reconstructed. Trust the gist, verify the details.

Expected depth

Compaction triggers near the window limit (e.g., at roughly 80% utilization). The harness summarizes old turns — file reads, command output, intermediate reasoning — into a tighter form; recent turns and current state stay intact. The model is told that compaction occurred and is given the summary. Quality impact: compaction is lossy, so the model may forget specific values, file paths, or intermediate decisions. Mitigation: write important state to memory files or git commits before compaction, and reference files by path so they can be re-read.
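The mechanics above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual Claude Code implementation — the threshold, the number of preserved turns, and the summarizer are all assumptions:

```python
# Hedged sketch of threshold-triggered compaction. All names and numbers
# here are illustrative assumptions, not the harness's real internals.

COMPACTION_THRESHOLD = 0.8  # assumed ~80% utilization trigger
KEEP_RECENT = 4             # assumed count of recent turns kept verbatim

def summarize(turns):
    """Stand-in for the harness's lossy summarizer: keeps the gist only."""
    return {"role": "system",
            "content": f"[Summary of {len(turns)} earlier turns: gist kept; "
                       "specific values and paths may be lost]"}

def maybe_compact(history, used_tokens, window_tokens):
    """Replace old turns with a single digest once utilization crosses the threshold."""
    if used_tokens / window_tokens < COMPACTION_THRESHOLD:
        return history  # plenty of room; leave the history intact
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    return [summarize(old)] + recent
```

The key property to notice: the digest is a single synthetic turn, so everything the summarizer drops is unrecoverable from context alone — hence the mitigations above.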

Deep — senior internals

Compaction is a defensive measure, not a feature you should rely on — quality is always better when you avoid the threshold. Strategies: dispatch verbose work to subagents (their context isn't compacted into yours); use Read with offset/limit to avoid loading entire files; clear via /clear when starting a new task. The Stop hook can fire after compaction to remind the model what was lost. Manual approach: keep a 'session notes' file you write to as you go, so post-compaction you can re-read the file rather than reconstruct from memory.

🎤 Interview-ready answer

Context compression kicks in when a session nears the window limit — the harness summarizes old turns and replaces them with a digest, keeping recent turns intact. It's automatic but lossy: the model forgets specifics. I treat it as a fallback, not a tool — better strategies are dispatching verbose work to subagents (their context doesn't compact into mine), reading file ranges instead of whole files, and writing session state to memory files I can re-read after compaction.

Common trap

Trusting the model post-compaction to remember details from earlier in the session. It will confidently reconstruct from the summary — sometimes correctly, sometimes inventing. Verify against source files after compaction.

Related concepts