Core Concepts
Understanding Context, Budgets, and AI Governance
Why ContextDigger enforces limits, what "refusal" means, and how disciplined context makes AI tools work better
What is "Context" in AI Tools?
Context is all the information you give an AI tool (Claude, ChatGPT, Cursor, Copilot) to understand your request.
When you ask an AI to "fix this bug" or "add a feature", it needs to know:
- What code exists - The files, functions, and classes you're working with
- How they relate - Dependencies, imports, and interactions
- Your intent - What you're trying to accomplish
Key Point:
The AI doesn't "see" your entire codebase automatically. You choose what context to give it. Too much = confused AI. Too little = generic answers that don't work.
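To make this concrete, here is a minimal sketch of hand-building context for one request. The file paths and prompt format are illustrative only, not how ContextDigger or any specific AI tool assembles context.

```python
# A minimal sketch of "context" as an explicit choice. The file paths and the
# prompt format are illustrative examples, not ContextDigger's actual behavior.
from pathlib import Path

files_i_chose = [
    "src/accounts/AccountService.cls",   # what code exists
    "tests/AccountFieldTest.cls",        # how it is exercised
]
intent = "Add a validation test for Account.Industry"  # what I'm trying to do

# The AI only sees what we put in the prompt - nothing else in the repo.
prompt = intent + "\n\n" + "\n\n".join(Path(f).read_text() for f in files_i_chose)
```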
Where this shows up:
- CLI: every cdg dig <area> command builds an explicit context bundle instead of letting tools scan the whole repo.
- Tutorial: Lessons 1 and 3 walk through discovery and context bundle creation in detail.
- AI School: Track 1 explains how context and tokens interact in real coding sessions.
The Problem: Context Overload
Here's what happens when you give an AI tool too much context:
❌ Too Much Context
✅ Right-Sized Context
The Core Problem
AI tools have limited "attention", like humans. Reading 176 files makes them lose track of what's important, just like you would.
Research shows: AI accuracy drops significantly when context exceeds ~10-20 relevant files. More context ≠ better answers. Focused context = precise answers.
Where this shows up:
- CLI and MCP: budget errors such as Context Aperture Exceeded are triggered when an area is too wide.
- Documentation: the architecture and governance sections describe how aperture limits are enforced across all surfaces.
- AI School: Track 2 uses aperture examples to show how to avoid overload in practice.
What is a "Budget"?
A budget is a protective limit on how much context you can load at once. It forces you to be intentional about what you're working on.
ContextDigger's Default Budgets
Why These Numbers?
Cognitive Load
Humans can effectively reason about 10-20 files at once. AI tools have similar limits. Beyond this, both humans and AI start losing track of relationships and dependencies.
Attention Quality
Research on transformer models (like GPT, Claude) shows attention degradation after ~10-15 focused items. More items = shallower understanding of each.
Token Economics
AI tools charge per token (~4 chars). 15 files × 200 lines ≈ 12K tokens ($0.03). 176 files × 350 lines ≈ 246K tokens ($0.74). Per request. Budgets save money.
Speed
Smaller context = faster processing. 15 files process in ~2 seconds. 176 files take ~20 seconds. Every request.
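A rough sanity check on the Token Economics figures above, under two stated assumptions: about 4 tokens per line of code and about $3 per million input tokens. Real pricing varies by model and provider.

```python
# Back-of-the-envelope check of the Token Economics figures above, assuming
# roughly 4 tokens per line of code and ~$3 per million input tokens (both are
# assumptions; actual pricing varies by model and provider).
def request_cost(num_files, lines_per_file, tokens_per_line=4, usd_per_m_tokens=3.0):
    tokens = num_files * lines_per_file * tokens_per_line
    return tokens, round(tokens / 1_000_000 * usd_per_m_tokens, 2)

print(request_cost(15, 200))    # (12000, 0.04)  -> roughly the $0.03 quoted above
print(request_cost(176, 350))   # (246400, 0.74) -> the $0.74 quoted above
```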
💡 The Budget Philosophy
Budgets aren't arbitrary restrictions - they're forcing functions for better thinking. When you can't load 176 files, you must ask: "What am I actually trying to do?" This clarity helps both you and the AI.
Analogy: It's like a word limit on an essay. The constraint forces you to be clear and focused, which makes the final result better.
Where this shows up:
- CLI: every cdg dig and cdg split call reports file and line usage against your configured budgets.
- Config: budgets live in .cdg/config.json so teams can agree on shared limits.
- Documentation: the governance section explains budgets alongside refusal, contracts, and provenance.
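For teams who want to reason about those shared limits outside the CLI, here is a sketch of a small script that checks a set of files against budgets read from .cdg/config.json. The key names (budgets, max_files, max_lines) are assumptions made for illustration, not ContextDigger's documented schema.

```python
# Sketch: checking an area against budgets stored in .cdg/config.json.
# The keys "budgets", "max_files", and "max_lines" are assumed names for this
# illustration, not necessarily ContextDigger's documented config schema.
import json
from pathlib import Path

config = json.loads(Path(".cdg/config.json").read_text())
budgets = config.get("budgets", {"max_files": 15, "max_lines": 3000})

def check_area(paths):
    total_lines = sum(len(Path(p).read_text().splitlines()) for p in paths)
    if len(paths) > budgets["max_files"] or total_lines > budgets["max_lines"]:
        raise RuntimeError(
            f"Context aperture exceeded: {len(paths)} files, {total_lines} lines"
        )
    return paths  # within budget: safe to hand to an AI tool
```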
When Context Aperture Is Exceeded
A Context Aperture Exceeded error occurs when a requested area breaks the Attention Budget. Instead of silently loading too much, ContextDigger enforces the constraint and guides you to smaller, governed scopes.
What Happens When Aperture Is Exceeded?
Why This is Good (Not Bad)
❌ Without Refusal
- Loads 176 files silently
- You don't realize context is bloated
- AI gives vague, unhelpful answers
- You waste time debugging bad suggestions
- You spend $$ on useless tokens
✅ With Refusal
- Stops you before wasting time
- Shows you intelligent suggestions
- Helps you find the right 8-15 files
- AI gives precise, actionable answers
- You save time and money
Enforcement = Guidance
An Aperture Exceeded event isn't punishment. It's the moment ContextDigger transforms from a passive tool into an active guide.
Instead of: "Sure, I will load everything" ❌
You get: "Here are focused sub-areas you can create. Pick one and start working in 30 seconds." ✅
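One way to picture that guidance in code: instead of loading an oversized area, propose smaller candidate sub-areas. The grouping heuristic below is an assumption made for illustration; it roughly mirrors the kind of guidance cdg split is there to provide.

```python
# Sketch of refusal-as-guidance: when an area is too wide, propose smaller
# sub-areas (here, naively grouped by containing folder) instead of loading
# everything. The heuristic and file names are illustrative only.
from collections import defaultdict

def suggest_sub_areas(files, max_files=15):
    if len(files) <= max_files:
        return {"current-area": files}                # within budget: no split needed
    groups = defaultdict(list)
    for path in files:
        groups[path.rsplit("/", 1)[0]].append(path)   # group by containing folder
    return dict(groups)                                # pick one group, start working

print(suggest_sub_areas(
    ["tests/account/AccountFieldTest.cls", "tests/billing/InvoiceTest.cls"],
    max_files=1,
))
```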
Where this shows up:
- CLI and skills: budget errors guide you toward cdg split or sub-area creation instead of silently loading everything.
- Tutorial: the sub-area lesson demonstrates a full refusal workflow from error to focused area.
- AI School: Track 2 frames refusal as focus discipline, not failure.
Autistic Intelligence as a Design Principle
What We Mean by "Autistic Intelligence"
ContextDigger was designed from the lived experience of an autistic founder. Here, "Autistic Intelligence" means a mind that is:
- Brilliant with the right slice of information
- Fragile under noisy, unbounded, or conflicting inputs
- At its best when it can set boundaries and say "no" to overload
This is also a surprisingly accurate model of how large language models behave under context overload.
How It Shapes ContextDigger
- Context Aperture - sensory and cognitive gating for AI: how wide the lens is.
- Attention Budget - how much input we can safely load before quality collapses.
- Refusal Mode - the right to say "no" when an area exceeds those limits.
- Continuation Contracts - "do not make me start over"; carry governed context forward.
- Provenance - explaining why each file is present in the context bundle.
We use "Autistic Intelligence" to describe this design lens and experience. It is not a clinical claim and does not attempt to speak for all autistic people.
"ContextDigger assumes intelligence is finite, context is expensive, and overload is a design failure, not a test of strength."
Everything else - budgets, refusal, contracts, provenance - exists to protect that intelligence from bad inputs.
Continuation Contracts
A Continuation Contract is a governed work session: it remembers your focus area, intent, and context bundle so you can resume work without starting over.
Instead of "chat history" that slowly drifts, ContextDigger treats sessions as explicit objects with lifecycles:
- Active: you are currently working in this area with a governed bundle.
- Paused: work is saved; context can be re-loaded later.
- Completed/Expired: contract is closed; history remains auditable.
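Here is a minimal sketch of that idea as a data model, showing a session's area, intent, bundle, and lifecycle. The class and field names are illustrative, not ContextDigger's internal representation.

```python
# Illustrative model of a continuation contract as an explicit object with a
# lifecycle - a way to picture the concept, not ContextDigger's internals.
from dataclasses import dataclass, field
from enum import Enum

class ContractState(Enum):
    ACTIVE = "active"        # currently working in this area
    PAUSED = "paused"        # work saved; context can be re-loaded later
    COMPLETED = "completed"  # closed; history remains auditable

@dataclass
class ContinuationContract:
    area: str                                        # focus area name
    intent: str                                      # what you set out to do
    bundle: list[str] = field(default_factory=list)  # governed context bundle
    state: ContractState = ContractState.ACTIVE

    def pause(self) -> None:
        self.state = ContractState.PAUSED

    def resume(self) -> None:
        self.state = ContractState.ACTIVE            # same bundle, no starting over
```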
Where this shows up:
- CLI commands: cdg continue, cdg pause, cdg contracts
- VS Code: Contracts sidebar shows active and paused sessions.
- Docs: see the continuation section in Documentation.
Provenance: "Why Is This File Here?"
Provenance explains why each file in a context bundle was included: which area it belongs to, how it was selected, and what role it plays in your current task.
Good governance isn't just about limits; it's about traceability:
- "This test file was included because it's part of the billing-api area."
- "This config file was pulled in via a continuation contract from yesterday's session."
- "These helpers are here because they are imported by your focus files."
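As a sketch, provenance can be pictured as one small record per file in the bundle. The structure and file names below are illustrative, not ContextDigger's actual report format.

```python
# Illustrative provenance records: every file in a bundle carries a reason for
# being there. Structure, keys, and file names are assumptions for this sketch.
provenance = [
    {"file": "BillingServiceTest.cls", "reason": "part of the billing-api area"},
    {"file": "payment.config.json",    "reason": "carried in by yesterday's continuation contract"},
    {"file": "BillingHelpers.cls",     "reason": "imported by a focus file"},
]

def why(path: str) -> list[str]:
    """Answer 'Why is this file here?' for a file in the bundle."""
    return [entry["reason"] for entry in provenance if entry["file"] == path]
```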
Where this shows up:
- CLI: cdg provenance, cdg report, cdg export
- VS Code: "Show Provenance" context menu on focused files.
- Docs: provenance section in Documentation.
Autistic Intelligence & the Context Compiler
We design ContextDigger from the perspective of Autistic Intelligence: minds (human and model) that are brilliant under the right constraints and fragile under overload.
Instead of asking "How do we show the AI everything?", we ask:
"What is this AI allowed to know about my system, for this task, right now?"
ContextDigger acts as a compiler for context that runs before your AI tools:
- Discover focus areas in your repo.
- Apply budgets and governance policies.
- Build a bounded, explainable context bundle.
- Attach continuation and provenance metadata.
- Let AI tools consume that bundle instead of free-scanning the repo.
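The five steps above can be sketched as one small function. Everything here (the signature, the discovery heuristic, the bundle shape) is an assumption made for illustration, not ContextDigger's real implementation.

```python
# Skeleton of the five "context compiler" steps above as a single function.
# Names, heuristics, and the bundle format are illustrative stand-ins only.
from pathlib import Path

def compile_context(repo: str, area: str, task: str, max_files: int = 15) -> dict:
    # 1. Discover: the source files inside the chosen focus area.
    files = sorted(str(p) for p in Path(repo, area).rglob("*") if p.is_file())
    # 2. Govern: refuse instead of silently over-loading.
    if len(files) > max_files:
        raise RuntimeError(f"Context aperture exceeded: {len(files)} files in '{area}'")
    # 3-4. Build a bounded bundle and attach provenance-style metadata;
    #      step 5 is handing this bundle (not the whole repo) to your AI tool.
    return {
        "task": task,
        "files": files,
        "provenance": {f: f"inside focus area '{area}'" for f in files},
    }
```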
Where to go next:
- Read the full story on Our Story.
- Follow the guided path in AI School.
- See how the concepts map to behavior in the Documentation.
Why This Approach Works
1. Matches How AI Actually Works
AI models like Claude, GPT, and Codex use attention mechanisms. They don't read linearly - they look for patterns and relationships. Too much context dilutes attention across irrelevant information.
Result: 15 focused files get 100% attention. 176 scattered files get ~8% attention each.
2. Forces Intentional Thinking
Without budgets, you throw "everything related to users" at the AI. With budgets, you must ask: "Am I working on user authentication? User profiles? User permissions?"
Result: Clearer question → clearer answer. The constraint improves your thinking.
3. Prevents Cost Spirals
AI tools charge per token. A single "fix this bug" request with 176 files costs $0.74. Ask 20 questions = $14.80. With 15 files: 20 questions = $0.60.
Result: 96% cost reduction. Same (better) answers. Budgets pay for themselves.
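For the record, the session arithmetic above works out as follows, taking the quoted per-request costs as given.

```python
# The session arithmetic above, taking the quoted per-request costs as given.
questions = 20
bloated = questions * 0.74    # $14.80 with a 176-file context
focused = questions * 0.03    # $0.60 with a 15-file context
print(f"cost reduction: {1 - focused / bloated:.0%}")   # -> cost reduction: 96%
```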
4. Encourages Good Architecture
If you constantly hit budget limits, your codebase might be poorly organized. ContextDigger's refusals reveal architectural problems: "Why are 176 test files in one folder?"
Result: Budget pressure → better code organization → easier maintenance.
The Philosophy in One Sentence
"Constraints create clarity. Clarity enables precision. Precision makes AI useful."
ContextDigger doesn't limit you - it guides you to be more effective with AI tools by enforcing the same cognitive constraints that make human experts effective.
Real-World Example: Sarah's Salesforce Tests
Before ContextDigger
- Loads all 176 Apex test files
- Asks: "How do I add a test for Account.Industry?"
- AI Response: "You could add a test in one of your Account test files. Here's a generic example..."
- Time wasted: 10 minutes reading the generic answer, then finding the right file manually
- Cost per question: $0.74
After ContextDigger
- Context Aperture Exceeded (176 files)
- Creates "apex-account-tests" sub-area (8 files)
- Asks: "How do I add a test for Account.Industry?"
- AI Response: "In AccountFieldTest.cls line 47, add this test method after testNameValidation()..."
- Time saved: precise answer in 30 seconds
- Cost per question: $0.03 (96% cheaper)
The Difference: Context aperture constraints forced Sarah to focus on "Account tests" instead of "all tests". The AI could then give her an exact file, line number, and code snippet - not generic advice.
You're Always In Control
All Budgets Are Configurable
Don't like 15 files? Change it to 25. Working with tiny microservices? Set it to 5. ContextDigger ships with opinionated defaults based on research and testing, but you're the boss.
Use Defaults
Start with 15 files, 3K lines. Works for 90% of projects.
Customize
Adjust based on your codebase, team size, or AI tool limits.
Override
In rare cases, override budget for specific areas. Use sparingly.
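As an illustration of what that customization could look like, here is a hypothetical budgets block written to .cdg/config.json. The key names are assumptions made for this sketch, so check the Documentation for the actual schema.

```python
# Hypothetical example of customized budgets written to .cdg/config.json.
# The keys ("budgets", "max_files", "max_lines", "overrides") are illustrative
# guesses - see the ContextDigger documentation for the real schema.
import json
from pathlib import Path

config = {
    "budgets": {
        "max_files": 25,       # wider than the default 15, e.g. for a large monorepo
        "max_lines": 5000,
        "overrides": {
            "legacy-billing": {"max_files": 40},   # per-area override; use sparingly
        },
    }
}
Path(".cdg/config.json").write_text(json.dumps(config, indent=2))
```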
Ready to Try Disciplined Context?
See how budgets and refusal transform AI-assisted coding from guesswork to precision.