Frequently Asked Questions
Context, tokens, and how SourcePrep actually works
Answers about token budgets, context quality, and how SourcePrep fits alongside the tools you already use.
Does SourcePrep give the AI a persistent memory of my codebase?
Yes — and this is where SourcePrep diverges from every other context tool on the market. SourcePrep maintains a Persistent Agent Memory: a local store of observations — architectural decisions, discovered bugs, design patterns, working assumptions — each linked directly to specific files and symbols in your codebase.
What makes this different from bolting a memory file onto your repo: SourcePrep's observations are staleness-aware. Modify auth.py and every observation tied to that file is automatically flagged [STALE]. In the next session, the AI receives both the updated code and a signal that its prior assumptions may no longer hold. It doesn't blindly repeat outdated notes — it knows to re-evaluate.
This is not a prompt cache or a conversation log. It's a structured, file-linked, searchable knowledge layer that the AI maintains about your specific codebase. It works on every tier, including Free — it's local SQLite, zero cloud cost, zero telemetry. And because observations are injected alongside code context (not dumped in bulk), they respect the same tight token budget as everything else.
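The staleness mechanism described above can be sketched in a few lines. This is an illustrative model only, not SourcePrep's actual schema or code: the table layout, class name, and content-hashing approach are all assumptions. The core idea is that each observation records the content hash of the file it refers to, so a later comparison against the file on disk reveals which notes may no longer hold.

```python
import hashlib
import sqlite3
from pathlib import Path

# Hypothetical sketch of a staleness-aware observation store.
# Names and schema are illustrative, not SourcePrep's real internals.
class ObservationStore:
    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            """CREATE TABLE IF NOT EXISTS observations (
                   id INTEGER PRIMARY KEY,
                   file TEXT NOT NULL,
                   file_hash TEXT NOT NULL,  -- content hash at record time
                   note TEXT NOT NULL,
                   stale INTEGER NOT NULL DEFAULT 0
               )"""
        )

    @staticmethod
    def _hash(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def record(self, path, note):
        # Link the note to the file's content hash as of right now.
        self.db.execute(
            "INSERT INTO observations (file, file_hash, note) VALUES (?, ?, ?)",
            (str(path), self._hash(path), note),
        )

    def refresh(self, path):
        # Flag every observation whose recorded hash no longer
        # matches the file on disk.
        self.db.execute(
            "UPDATE observations SET stale = 1 WHERE file = ? AND file_hash != ?",
            (str(path), self._hash(path)),
        )

    def notes_for(self, path):
        # Stale notes are still surfaced, but labelled so the AI
        # knows to re-evaluate them rather than trust them blindly.
        rows = self.db.execute(
            "SELECT note, stale FROM observations WHERE file = ?",
            (str(path),),
        ).fetchall()
        return [f"[STALE] {note}" if stale else note for note, stale in rows]
```

In this sketch, editing a file and then calling `refresh` flips every attached observation to stale; `notes_for` then returns it prefixed with `[STALE]`, mirroring the signal the answer above describes.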
Still have questions?
Open an issue, ask in the community, or read the research behind these answers.
