Local Models and Offline Use

ContextDigger is model-agnostic. It prepares text bundles that work just as well with local LLMs as with hosted assistants.

How ContextDigger fits with local LLMs

ContextDigger does not embed a model and does not depend on any specific provider. Its job is to build governed context bundles in plain text. Any model runner that can read a text file or accept pasted text can benefit from those bundles.

This includes tools like LM Studio, Ollama front ends, or custom local UIs that you or your team build.

Basic workflow with a local model

  1. Run ContextDigger in your project:
       cd your-project
       cdg init
       cdg dig backend-api
  2. Open the generated bundle, for example: .cdg/context/backend-api.txt
  3. Give that bundle to your local model (a scripted example follows this list):
    • Paste the contents into the prompt window, or
    • Use the tool's file attach feature if it supports text file uploads.
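
For scripted use, the same bundle can also be sent to a local model over HTTP. Here is a minimal sketch assuming an Ollama server on its default port; the model name llama3.2, the bundle path, and the trailing task instruction are illustrative, not part of ContextDigger:

    # Send the governed bundle to a local Ollama server.
    # Assumes: ollama is running and the llama3.2 model has been pulled.
    BUNDLE=.cdg/context/backend-api.txt

    # Build a JSON request whose prompt is the bundle plus a task instruction,
    # then print the model's reply.
    jq -n --rawfile ctx "$BUNDLE" \
      '{model: "llama3.2", prompt: ($ctx + "\n\nSummarize the API surface above."), stream: false}' \
      | curl -s http://localhost:11434/api/generate -d @- \
      | jq -r '.response'

Any runner that exposes a similar local endpoint works the same way; only the request shape changes.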

The key is that the model sees only the governed slice of your codebase that ContextDigger prepared, not the entire repository.
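
One quick way to see this in practice is to compare the bundle against the repository it was cut from. The commands below are illustrative and assume a git checkout:

    # The governed slice is typically a small fraction of the repository.
    wc -c .cdg/context/backend-api.txt      # bytes the model actually sees
    git ls-files -z | xargs -0 cat | wc -c  # bytes in the full working tree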

Why local plus governed context works well

  • Privacy: code never leaves your machine. ContextDigger and the model both run locally.
  • Cost control: you can experiment without per-token charges while still respecting attention budgets.
  • Repeatability: the same `.cdg/context/*.txt` bundle can be reused across runs and tools, as sketched below.
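
Because the bundle is an ordinary text file, repeatability is easy to verify. A small sketch, with an illustrative bundle path and model name:

    # Same hash means the model receives byte-identical context on every run.
    sha256sum .cdg/context/backend-api.txt

    # Reuse the bundle directly with a local runner (here, the Ollama CLI).
    ollama run llama3.2 "$(cat .cdg/context/backend-api.txt)"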

Whether you use a hosted assistant, a local model, or both, the governance story stays the same. ContextDigger defines what the model is allowed to see for each task.