📚 INDUSTRY REFERENCE

AI Context Management Glossary

Standard terminology for AI-assisted development

As AI coding tools mature, the industry needs precise language for context optimization. This glossary defines emerging standard terms for managing AI attention, context scope, and focus discipline.

About This Glossary

This glossary documents terminology for AI context management: the practice of deliberately controlling what context AI tools receive to optimize accuracy, cost, and performance.

These terms address a gap in current AI discourse: we have words for what AI does (generation, inference, embeddings) but lack standard language for how humans should provide context.

Note: These terms are platform-agnostic and apply to tools like ChatGPT, Claude, Gemini, Copilot, Cursor, Aider, Cody, Codeium, and all transformer-based AI coding assistants. They describe mechanisms, not products.

Core Terminology

📸

Context Aperture

noun • /ˈkän-tekst ˈa-pər-ˌchu̇r/

Definition:

The controlled scope of code, files, or information provided to an AI tool, analogous to a camera's aperture controlling depth of field. A narrow aperture (fewer files) produces sharp focus on specific context; a wide aperture (many files) dilutes attention across broader scope.

Technical Basis:

Self-attention in transformer models scales quadratically (O(n²)) with sequence length: doubling the context roughly quadruples attention computation. Separately from compute cost, empirical studies of long-context models report accuracy degradation well before the context window is full; the ~10-15K token threshold cited here is a heuristic for many models, not a hard limit.
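The quadratic relationship can be sketched with simple arithmetic. This is illustrative only; the token counts are this glossary's round figures, and `attention_cost_ratio` is a hypothetical helper, not a real profiling API:

```python
# Illustrative only: relative cost of self-attention, which scales as
# O(n^2) in sequence length n. Token counts are hypothetical figures.

def attention_cost_ratio(tokens_a: int, tokens_b: int) -> float:
    """Relative attention compute between two context sizes (n^2 scaling)."""
    return (tokens_b / tokens_a) ** 2

# Doubling the context quadruples the attention computation:
print(attention_cost_ratio(12_000, 24_000))               # 4.0
# An f/100+ context (~80K tokens) vs. an f/15 context (~12K tokens):
print(round(attention_cost_ratio(12_000, 80_000), 1))     # 44.4
```

The second ratio is why the difference between a scoped and an unscoped context is not incremental: the compute gap grows with the square of the size gap.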

Usage:

"Set your context aperture to f/15 for this refactoring task."
"Wide aperture (176 files) produced generic answers."
"Narrow your aperture to improve AI precision."

Related Metrics:

  • f/15 aperture: 15 files, ~3,000 lines, ~12K tokens (optimal)
  • f/30 aperture: 30 files, ~6,000 lines, ~24K tokens (acceptable)
  • f/100+ aperture: 100+ files, 20K+ lines, 80K+ tokens (degraded)

Why it matters: Unlike camera aperture (hardware limit), context aperture is a discipline - a deliberate choice to optimize AI performance through constraint.
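The aperture bands above can be expressed as a small classifier. A minimal sketch using the glossary's own thresholds; `aperture_band` is a hypothetical helper name, and the glossary defines no band between f/30 and f/100, so that range is labeled conservatively here:

```python
# Sketch: map a file count onto the aperture bands defined above.
# Thresholds mirror the glossary's f/15, f/30, and f/100+ figures;
# they are heuristics, not model-specific limits.

def aperture_band(file_count: int) -> str:
    """Classify a context size by file count into an aperture rating."""
    if file_count <= 15:
        return "f/15 (optimal)"
    if file_count <= 30:
        return "f/30 (acceptable)"
    if file_count >= 100:
        return "f/100+ (degraded)"
    # The glossary defines no band between f/30 and f/100; treat it
    # conservatively as over budget.
    return "above f/30 (likely degraded)"

print(aperture_band(8))     # f/15 (optimal)
print(aperture_band(176))   # f/100+ (degraded)
```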

🎯

Attention Budget

noun • /ə-ˈten(t)-shən ˈbə-jət/

Definition:

The maximum amount of context (measured in tokens or files) an AI model can process while maintaining high-quality attention across all inputs. Exceeding the attention budget results in attention dilution.

Technical Basis:

Empirical studies of long-context transformers report attention degradation beyond roughly 10-15K tokens, even in models that advertise 200K context windows. The budget represents effective capacity, not the theoretical maximum.

Usage:

"Stay within your attention budget for optimal results."
"This request exceeds the recommended attention budget."
"Budget: 15 files (12K tokens), using 8 files (6K tokens)."

Default Budgets (Recommended):

  • Files: 15 maximum per context
  • Lines: 3,000 maximum total
  • Tokens: ~12,000 optimal range
  • Areas: 2 concurrent focus areas
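The default budgets above lend themselves to an automated pre-flight check. A minimal sketch, assuming the glossary's recommended limits; `check_budget` and the `BUDGET` table are hypothetical names for illustration:

```python
# Sketch: check a proposed context against the recommended budgets above.
# Limits are the glossary's defaults; adjust per tool and model.

BUDGET = {"files": 15, "lines": 3_000, "tokens": 12_000}

def check_budget(files: int, lines: int, tokens: int) -> list[str]:
    """Return a list of budget violations; an empty list means within budget."""
    usage = {"files": files, "lines": lines, "tokens": tokens}
    return [
        f"{key}: {usage[key]} exceeds budget of {limit}"
        for key, limit in BUDGET.items()
        if usage[key] > limit
    ]

print(check_budget(files=8, lines=1_500, tokens=6_000))      # []
print(check_budget(files=176, lines=20_000, tokens=80_000))  # three violations
```

A check like this could run as a pre-send hook in an editor integration, mirroring how linters gate commits.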

Industry adoption: "attention budget" is beginning to appear in AI research writing and engineering discussions.

🧘

Focus Discipline

noun • /ˈfō-kəs ˈdi-sə-plən/

Definition:

The practice of deliberately constraining AI context to maximize precision, accuracy, and cost-efficiency. Encompasses techniques like context scoping, aperture control, and attention budget management.

Philosophical Basis:

Like "security discipline" or "code discipline," focus discipline is a professional practice - a learnable skill that separates effective AI users from ineffective ones.

Usage:

"Senior devs practice focus discipline when using AI tools."
"Code review: This PR lacks focus discipline (176 files sent to AI)."
"Teaching focus discipline to junior developers."

Core Principles:

  • Intentionality: Choose context deliberately, not reflexively
  • Minimalism: Include only what's necessary
  • Boundaries: Respect attention budget limits
  • Verification: Measure context size before sending
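The "Verification" principle above can be automated. A minimal sketch: `estimate_context` is a hypothetical helper, and the chars-per-token divisor is a common rough heuristic; a real tool should count with its model's actual tokenizer:

```python
# Sketch of the Verification principle: measure context size before
# sending. Uses the rough ~4 characters-per-token heuristic; real
# tooling should use the target model's tokenizer for accurate counts.

def estimate_context(file_texts: dict[str, str]) -> dict:
    """Estimate the size of a proposed context from {filename: contents}."""
    total_chars = sum(len(t) for t in file_texts.values())
    return {
        "files": len(file_texts),
        "lines": sum(t.count("\n") + 1 for t in file_texts.values()),
        "tokens": total_chars // 4,  # rough chars-per-token heuristic
    }

ctx = estimate_context({"auth.py": "def login():\n    pass\n"})
print(ctx)
```

Running the estimate before every AI request turns focus discipline from an intention into a measurable habit.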

Future prediction: By 2026, "focus discipline" will appear in job descriptions for AI-assisted development roles.

⚠️

Attention Dilution

noun • /ə-ˈten(t)-shən dī-ˈlü-shən/

Definition:

The measurable decrease in AI accuracy and precision when context exceeds the model's attention budget. Symptoms include generic answers, missed details, and hallucinations.

Technical Mechanism:

Softmax attention weights are normalized to sum to 1 for each query, so the average weight per token is roughly 1/n (a simplification that ignores per-head and per-query variation). With 240K tokens (176 files), each token receives ~0.0004% of attention on average; with 12K tokens (15 files), each receives ~0.008% (a 20x improvement).
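The normalization arithmetic can be made explicit. This sketch uses the 1/n average as a simplification, as in the figures above; actual attention distributions are far from uniform:

```python
# The average attention weight per token is 1/n when softmax weights
# sum to 1 for each query. This is a simplification: real attention
# distributions vary per head and per query.

def avg_attention_pct(n_tokens: int) -> float:
    """Average attention share per token, as a percentage."""
    return 100.0 / n_tokens

wide = avg_attention_pct(240_000)   # ~0.0004% per token (176 files)
narrow = avg_attention_pct(12_000)  # ~0.008% per token (15 files)
print(f"{wide:.4f}% vs {narrow:.4f}% ({narrow / wide:.0f}x improvement)")
# → 0.0004% vs 0.0083% (20x improvement)
```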

Usage:

"Attention dilution caused generic, unhelpful responses."
"Reducing context from 176 to 15 files eliminated attention dilution."
"Measure attention dilution: accuracy drops beyond f/30 aperture."

Observable Symptoms:

  • Generic responses: "You could add a function..." instead of specific guidance
  • Missed context: AI overlooks relevant files or functions
  • Hallucinations: Invents non-existent APIs or patterns
  • Surface-level analysis: Skims code instead of deep understanding

Research gap: we currently lack a standard term for context-induced AI degradation; "attention dilution" fills that gap.

🎚️

Context Scoping

noun/verb • /ˈkän-tekst ˈskō-piŋ/

Definition:

The deliberate process of selecting, filtering, and organizing code context before providing it to AI tools. Effective context scoping respects attention budgets and maintains appropriate context aperture.

Usage:

"Proper context scoping improved AI accuracy by 10x."
"Scope context to authentication components only."
"Context scoping is a core skill for AI-assisted developers."

Scoping Strategies:

  • By directory: src/auth/** only
  • By naming pattern: *Account*.test.js
  • By role: Model files only, exclude controllers
  • By imports: Files that import UserModel
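The first two scoping strategies above can be sketched with standard-library globbing. The paths and patterns mirror the glossary's examples and are hypothetical; the role- and import-based strategies would need language-aware tooling and are omitted here:

```python
# Sketch of directory- and pattern-based context scoping using
# pathlib's recursive globbing. Paths and patterns are hypothetical.

from pathlib import Path

def scope_by_directory(root: str, subdir: str) -> list[Path]:
    """By directory: e.g. everything under src/auth/."""
    return sorted(Path(root, subdir).rglob("*"))

def scope_by_pattern(root: str, pattern: str) -> list[Path]:
    """By naming pattern: e.g. '*Account*.test.js'."""
    return sorted(Path(root).rglob(pattern))

# Usage (assuming a project checkout in the current directory):
# auth_files = scope_by_directory("src", "auth")
# account_tests = scope_by_pattern(".", "*Account*.test.js")
```

Either function's result can then be checked against the attention budget before anything is sent to the AI.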

Industry trend: "Context scoping" naturally extends familiar "variable scoping" concepts from programming.

Usage in Practice

Code Review Comment:

"This PR shows poor focus discipline - you sent 176 test files to the AI. Try narrowing your context aperture to the 15 Account test files. You'll get better suggestions and reduce attention dilution."

Technical Discussion:

"We're seeing attention dilution at scale. Team needs to practice better context scoping - stay within the f/15 aperture, respect the 12K token attention budget."

Documentation:

"When refactoring authentication, set context aperture to src/auth/** (14 files). This maintains focus discipline and avoids exceeding the attention budget."

Help Shape Industry Language

These terms are emerging as AI-assisted development matures. Use them, share them, and help standardize the vocabulary.