Lost in the Middle: How Language Models Use Long Contexts
Liu et al. show that language models use information at the start and end of a long context far more effectively than information in the middle, with accuracy tracing a U-shaped curve over the position of the relevant passage. The finding sets a ceiling on any retrieval system that pads its context naïvely. SourcePrep ranks retrieved results by relevance and assembles them so the highest-scoring chunks bracket the prompt at the edges of the context instead of getting buried in the middle; a sketch of that assembly step follows.
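
A minimal sketch of the assembly idea, in Python. The function name `assemble_context` and the alternating-placement scheme are illustrative assumptions, not SourcePrep's actual API: ranked chunks are dealt to the front and back of the context so relevance decreases toward the middle.

```python
def assemble_context(chunks, scores):
    """Order chunks so the most relevant sit at the start and end.

    chunks: list of text chunks; scores: parallel relevance scores.
    Returns the chunks reordered so relevance decreases toward the middle,
    leaving the least useful material where the model reads least reliably.
    """
    # Sort chunks by descending relevance.
    ranked = [c for _, c in sorted(zip(scores, chunks),
                                   key=lambda pair: pair[0], reverse=True)]
    front, back = [], []
    for i, chunk in enumerate(ranked):
        # Alternate placement: rank 1 at the front, rank 2 at the back,
        # rank 3 behind rank 1, and so on; the weakest chunks meet in the middle.
        (front if i % 2 == 0 else back).append(chunk)
    return front + back[::-1]


chunks = ["A", "B", "C", "D", "E"]
scores = [0.9, 0.7, 0.95, 0.4, 0.6]
print(assemble_context(chunks, scores))  # ['C', 'B', 'D', 'E', 'A']
```

In the example, the top-ranked chunk opens the context and the runner-up closes it, so the lowest-scoring chunk lands in the middle, the position the paper shows models handle worst.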
