Lost in the Middle & Position Effects

Core

Manage conversation context to preserve critical information across long interactions · Difficulty 3/5

lost-in-middle · context · attention · position-effects

Large language models attend more reliably to information at the beginning and end of their context, with reduced attention to middle sections -- the "lost in the middle" effect.

The Problem

When aggregated results total ~75K tokens:

  • First ~15K tokens: reliably cited (primacy effect)
  • Last ~10K tokens: reliably cited (recency effect)
  • Middle ~50K tokens: frequently omitted, even when they contain critical findings (a simple way to observe this is sketched below)
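
A quick way to see the effect is a position probe: place a single known finding at different depths in an otherwise uninformative context and check whether the model surfaces it. The sketch below is illustrative rather than part of the original material; `call_model` is an assumed stand-in for whatever LLM client is in use, and the needle, filler text, and positions are arbitrary.

```python
# Hypothetical position probe: put a known finding at varying depths in a long
# context and check whether the model repeats it when asked.
# `call_model` is an assumed stand-in for your LLM client (prompt -> str).

NEEDLE = "Finding X-42: the cache hit rate dropped 30% after the v2 rollout."
FILLER_PARAGRAPH = "Routine status update with no notable findings. " * 40

def build_context(needle_position: float, n_paragraphs: int = 200) -> str:
    """Build a long context with the needle at a relative position in [0, 1]."""
    paragraphs = [FILLER_PARAGRAPH] * n_paragraphs
    paragraphs.insert(int(needle_position * n_paragraphs), NEEDLE)
    return "\n\n".join(paragraphs)

def probe(call_model, positions=(0.0, 0.25, 0.5, 0.75, 1.0)) -> dict:
    """Return, per relative position, whether the model's answer mentions the needle."""
    results = {}
    for pos in positions:
        prompt = build_context(pos) + "\n\nList every notable finding above."
        results[pos] = "X-42" in call_model(prompt)
    return results
```

Recall against position tends to trace the U-shape the numbers above describe: high at the ends, lower in the middle.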
Solutions

  • Key findings summary at the top: Place a condensed summary of all critical information at the beginning (leverages primacy effect)
  • Explicit section headers: Help the model navigate and attend to middle content
  • Structured data: Use key facts, citations, and relevance scores instead of verbose content
  • Reduce total volume: Have upstream agents return structured data rather than verbose content and reasoning chains (the sketch after this list combines all four techniques)
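
The sketch below shows one way these four ideas might combine when the synthesis prompt is assembled. The `Finding` shape, the field names, and the section layout are assumptions for illustration, not a prescribed schema.

```python
# Hypothetical assembly of a synthesis context: condensed key-findings summary
# first (primacy), structured findings under explicit section headers, and the
# most relevant claims repeated near the end (recency).
from dataclasses import dataclass

@dataclass
class Finding:
    source: str        # which subagent / document produced it
    claim: str         # one-sentence key fact
    citation: str      # pointer back to the raw evidence
    relevance: float   # 0-1 score assigned upstream

def build_synthesis_context(findings: list[Finding]) -> str:
    ranked = sorted(findings, key=lambda f: f.relevance, reverse=True)

    # 1. Key findings summary at the top (leverages primacy).
    summary = "\n".join(f"- [{f.source}] {f.claim}" for f in ranked[:10])

    # 2. Explicit section headers + 3. structured data instead of verbose prose.
    sections = [
        f"## Source: {f.source} (relevance {f.relevance:.2f})\n"
        f"Claim: {f.claim}\n"
        f"Citation: {f.citation}"
        for f in ranked
    ]

    # 4. Total volume stays low because only structured fields are included.
    # Repeating the top claims at the end lets recency reinforce them too.
    closing = "\n".join(f"- {f.claim}" for f in ranked[:5])

    return (
        "# KEY FINDINGS (read first)\n" + summary + "\n\n"
        + "\n\n".join(sections)
        + "\n\n# REMINDER: most relevant claims\n" + closing
    )
```

Repeating the top claims at the end is optional, but it lets the recency effect reinforce the same material the primacy-positioned summary already covers.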
Anti-Pattern: Streaming Sequentially

Processing results one source at a time prevents holistic cross-source analysis and doesn't solve the attention distribution issue.
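
For contrast, the anti-pattern looks roughly like the loop below (function names are illustrative): each call sees only one source, so cross-source contradictions and corroboration are invisible, and the concatenated notes still face the same middle-of-context attention problem when they are finally combined.

```python
# Anti-pattern (illustrative): streaming sources one at a time. Each call has
# no view of the other sources, so cross-source synthesis never happens, and
# the concatenated notes hit the same middle-of-context attention issue.
def synthesize_sequentially(call_model, sources: list[str]) -> str:
    notes = [call_model(f"Summarize this source:\n{src}") for src in sources]
    return call_model("Combine these notes into a report:\n" + "\n\n".join(notes))
```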

Key Takeaways

  • Models attend best to beginning and end of context, less to the middle
  • Place key findings summary at the beginning of large contexts
  • Use section headers and structured data to improve middle-context attention

Test Yourself (1 of 2)

When the synthesis agent processes aggregated results from all subagents (~75K tokens total), it reliably cites findings from the first and last sections but frequently omits critical findings from the middle sections. What's the most effective fix?