AI School for Developers

From "I know ChatGPT" to "I govern what my AI sees."

Follow these tracks to learn how AI context works, why Context Governance matters, and how to use ContextDigger as the discipline layer under your existing AI tools.

Track 1 - AI Basics for Developers

Start here if you've used ChatGPT, Claude, Gemini, Copilot, or Cursor but don't yet have a strong mental model of tokens, context windows, and attention.

Track 2 - Context Governance 101

Once you understand context at the model level, learn how to deliberately govern what the model is allowed to see in your repo.

Track 3 - Hands-on Labs

Put Context Governance into practice with concrete scenarios. Use the CLI to build governed bundles, then compare AI behavior with and without them in tools like ChatGPT, Claude, Gemini, Copilot, Cursor, Aider, Cody, or Codeium.

Track 4 - Using ContextDigger with Your AI Tools

Learn how to go from a repo with no governance to a setup where every assistant you use reads governed context bundles, whether you prefer chat (ChatGPT, Claude, Gemini), IDE assistants (Copilot, Cursor, Windsurf), or terminal tools (Aider).

Pick a Track and Start Governing Context

If you only do one thing today, complete the AI Basics track and the Checkout API lab. You'll never look at "just give it the whole repo" the same way again.

Deep Dive - AI Coding Basics

This section gives you a concrete mental model of what AI coding assistants do with your code: how they tokenize, where attention goes, and why context windows and governance matter.

What AI "Sees" When You Send Code

Key Insight: AI doesn't "read" code like you do. It converts everything to numbers (tokens), processes them with math (attention), and predicts what comes next (generation).

Step 1: Tokenization

When you give an AI assistant a Python file, it gets broken into tokens (roughly 4 characters each):

Your Python code:
def calculate_total(items):
    return sum(item.price for item in items)
How AI sees it (tokens):
["def", " calculate", "_total", "(", "items", "):", "\n", " return", " sum", "(", "item", ".", "price", " for", " item", " in", " items", ")"]
→ 18 tokens
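The "roughly 4 characters per token" rule of thumb above is enough for quick budgeting. A minimal sketch (real tokenizers such as BPE encoders vary, so treat this as an estimate only):

```python
def estimate_tokens(text: str) -> int:
    """Rough rule of thumb: ~4 characters per token for English and code.
    Real tokenizers (BPE) learn subword merges, so actual counts differ."""
    return max(1, len(text) // 4)

snippet = (
    "def calculate_total(items):\n"
    "    return sum(item.price for item in items)"
)
print(estimate_tokens(snippet))  # → 18, matching the ~18 tokens above
```

For exact counts you would use your provider's actual tokenizer; the estimate is only for back-of-the-envelope budgeting.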
Why Tokens Matter
  • Cost: AI tools bill per token
  • Limits: Context windows are token-based (8K, 100K, 200K tokens)
  • Speed: More tokens = slower processing
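The cost and limit bullets above translate directly into a budgeting check. A minimal sketch, with an assumed per-token price and window size (both hypothetical; check your provider's actual rates and model limits):

```python
# Hypothetical numbers for illustration only, not real pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.003  # assumed USD per 1K input tokens
CONTEXT_WINDOW = 200_000           # assumed large-window model, in tokens

def fits_and_costs(token_count: int) -> tuple[bool, float]:
    """Return whether a prompt fits the window and its rough input cost."""
    fits = token_count <= CONTEXT_WINDOW
    cost = token_count / 1000 * PRICE_PER_1K_INPUT_TOKENS
    return fits, cost

fits, cost = fits_and_costs(150_000)
print(fits, round(cost, 2))  # True 0.45
```

Dumping a whole repo into context means running this arithmetic on every file, every request; governed bundles keep the token count (and the bill) bounded.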
Token Examples
  • "function" = 1 token
  • "calculateTotal" = 2 tokens (calculate, Total), depending on the tokenizer
  • "    " (4 spaces) = 1 token
  • 1 line of code is roughly 15 to 30 tokens

Step 2: Attention Mechanism

Once tokenized, the AI uses attention to understand relationships between tokens:

How Attention Works

Imagine the AI is reading: user.email = validate_email(input_email)

The AI looks at each token and asks: "Which other tokens are related to this one?"

  • email pays attention to validate_email (same concept)
  • user pays attention to where user is created
  • input_email pays attention to earlier validation logic
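The relationships above can be sketched as scaled dot-product attention over toy token vectors. This is a deliberately tiny illustration: the 2-D "embeddings" are made up by hand, whereas real models use learned vectors with hundreds or thousands of dimensions.

```python
import math

def softmax(xs):
    """Turn raw similarity scores into attention weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hand-made toy vectors: email-related tokens point one way, user another.
embeddings = {
    "email":          [0.9, 0.1],
    "validate_email": [0.8, 0.2],
    "user":           [0.1, 0.9],
}

def attention_weights(query: str) -> dict[str, float]:
    """Scaled dot-product attention: similar vectors get higher weight."""
    q = embeddings[query]
    scores = [
        sum(a * b for a, b in zip(q, embeddings[key])) / math.sqrt(len(q))
        for key in embeddings
    ]
    return dict(zip(embeddings, softmax(scores)))

w = attention_weights("email")
# The related token scores higher than the unrelated one:
print(w["validate_email"] > w["user"])  # True
```

The same mechanism, scaled up, is why irrelevant files in context still consume attention: every token competes for the same normalized weight budget.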

Once you see AI through this lens of tokens, attention, and bounded windows, it becomes obvious why governed context matters so much. The rest of ContextDigger (aperture, budgets, refusal, contracts, provenance) is built on top of this mental model.

Want the formal definitions? Read the related sections in Core Concepts: Context Aperture, Attention Budget, and Focus & Refusal.