Codebase Exploration Context Strategies

Core

Manage context effectively in large codebase exploration · Difficulty 3/5

Tags: codebase exploration, scratchpad, subagent, degradation

Large codebase exploration requires specific strategies to maintain accuracy as context accumulates during extended investigation sessions.

Context Degradation Problem

After exploring many files, the model's responses degrade:

  • References become vague ("typical patterns" instead of specific class names)
  • Earlier findings are forgotten or contradicted
  • Answers become inconsistent across similar questions
Scratchpad Pattern

Have agents maintain scratchpad files that record key findings:

  • After each significant discovery, write findings to a scratchpad file
  • Before answering subsequent questions, re-read the scratchpad
  • This externalizes memory beyond the context window
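The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a specific agent framework's API; the filename and the helper names `record_finding` and `recall_findings` are hypothetical.

```python
from pathlib import Path

SCRATCHPAD = Path("scratchpad.md")  # illustrative filename

def record_finding(topic: str, detail: str) -> None:
    """Append a discovery so it survives beyond the context window."""
    with SCRATCHPAD.open("a", encoding="utf-8") as f:
        f.write(f"- [{topic}] {detail}\n")

def recall_findings() -> str:
    """Re-read every recorded finding before answering a new question."""
    return SCRATCHPAD.read_text(encoding="utf-8") if SCRATCHPAD.exists() else ""

record_finding("payments", "RefundService lives in services/refund.py")
```

Because the scratchpad is an ordinary file, it persists across context compaction and even across sessions, which is the point of externalizing memory.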
Subagent Delegation

Spawn subagents for specific investigation tasks:

  • "Find all test files" -- subagent explores, returns structured list
  • "Trace refund flow dependencies" -- subagent traces, returns dependency graph
  • Main agent stays focused on high-level coordination with clean context
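One way to picture the division of labor: the subagent does the verbose work (here, a full directory walk) and only a compact structured result flows back to the coordinator. This is a hedged sketch in plain Python; `run_subagent` and `find_test_files` are illustrative stand-ins, since real subagent APIs vary by framework.

```python
import os
from typing import Callable

def run_subagent(task: Callable[[], object]) -> object:
    """Run an exploration task in isolation; only its compact
    return value enters the main agent's context."""
    return task()

def find_test_files(root: str = ".") -> list[str]:
    """The verbose tree walk happens here, outside the main context."""
    hits: list[str] = []
    for dirpath, _dirs, files in os.walk(root):
        hits.extend(os.path.join(dirpath, f)
                    for f in files
                    if f.startswith("test_") and f.endswith(".py"))
    return sorted(hits)

# The main agent sees only the structured list, never the walk itself.
test_files = run_subagent(find_test_files)
```

The same shape works for the dependency-trace example: the subagent returns a small graph structure rather than pages of file contents.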
Phase-Based Exploration

  • Phase 1: Map structure (subagent explores, returns summary)
  • Inject the Phase 1 summary into the Phase 2 context
  • Phase 2: Deep-dive specific areas based on Phase 1 findings
  • Use /compact between phases when context is heavy
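The phase hand-off can be sketched as two functions, where the second receives the first's summary explicitly. The function names and the canned summary string are illustrative only; in practice each phase would be a real subagent call.

```python
def phase1_map_structure() -> str:
    """Phase 1 subagent maps the repo and returns a short summary."""
    # Illustrative stand-in for an actual exploration subagent.
    return "Top-level packages: api/, services/, tests/"

def phase2_deep_dive(prior_summary: str, focus: str) -> str:
    """Build the Phase 2 prompt with the Phase 1 summary injected."""
    return (f"Context from Phase 1: {prior_summary}\n"
            f"Task: examine {focus} in detail.")

summary = phase1_map_structure()
prompt = phase2_deep_dive(summary, "services/")
```

Passing the summary as an explicit argument mirrors the injection step: Phase 2 starts from a clean context that contains only the distilled Phase 1 findings.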
Key Takeaways

  • Use scratchpad files to externalize findings beyond the context window
  • Spawn subagents for verbose exploration tasks to keep the main agent's context clean
  • Summarize findings between exploration phases and inject the summary into the next phase's context
  • Use /compact to reduce context during extended sessions