C3 is a local code-intelligence layer for AI coding tools. The useful core is narrow: retrieve less, read less, and offload heavy analysis locally when that actually saves context.
New installs should use the guided init flow with direct MCP mode:
```bash
pip install -r requirements.txt
python cli/c3.py init /path/to/project
```

`c3 init` now walks through IDE selection, optional local git init, and optional MCP installation.
If you want the same behavior without prompts:
```bash
python cli/c3.py init /path/to/project --force --git --ide codex --mcp-mode direct
```

`direct` points the IDE straight at `cli/mcp_server.py`.
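For reference, direct mode amounts to an MCP server entry that launches `cli/mcp_server.py` over stdio. The sketch below is illustrative only: the key names follow common MCP client config conventions, not necessarily the exact schema C3 writes for each IDE.

```python
import json

# Illustrative only: key names follow common MCP client configs,
# not necessarily the exact file C3 writes for each IDE.
def direct_mcp_entry(project_root: str) -> dict:
    return {
        "mcpServers": {
            "c3": {
                "command": "python",
                # Direct mode: the IDE launches the server itself, no proxy.
                "args": [f"{project_root}/cli/mcp_server.py"],
            }
        }
    }

print(json.dumps(direct_mcp_entry("/path/to/project"), indent=2))
```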
`proxy` is still available, but it is now an advanced mode for teams that explicitly want dynamic tool filtering experiments:

```bash
python cli/c3.py install-mcp /path/to/project --mcp-mode proxy
```

`install-mcp` also accepts an IDE shorthand positionally when you are already in the project directory:
```bash
python cli/c3.py install-mcp claude
python cli/c3.py install-mcp codex
python cli/c3.py install-mcp . gemini
```

Use these tools by default:
- `c3_recall` when the topic may have prior history
- `c3_search` to locate code
- `c3_file_map` before larger code reads
- `c3_compress` for understanding-only passes
- `c3_extract` before `.log`, `.txt`, or `.jsonl` files
- `c3_delegate` for heavy non-editing analysis
- `c3_session_log` and `c3_remember` for durable decisions and conventions
- Direct MCP mode is the recommended install path.
- `c3 init` now provides a step-wise setup menu for IDE, local Git, and MCP.
- Proxy mode is optional and documented as advanced.
- Savings footers, nudges, and response padding are disabled by default.
- Generated instruction files now describe a pragmatic workflow instead of a maximal ritual.
- `c3 init` and `install-mcp` now sync `CLAUDE.md`, `AGENTS.md`, and `GEMINI.md` into the project root.
- `install-mcp` now creates project-local `.codex/config.toml` and `.gemini/settings.json` session configs for new projects.
- The context-budget agent now warns before threshold crossings and automatically captures a snapshot at L2, so recovery is faster after `/clear`.
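The warn-then-snapshot behavior can be sketched roughly as follows. The threshold values, warning margin, and function name here are all hypothetical, not C3's actual implementation:

```python
# Hypothetical budget levels as fractions of the context window,
# with a warning emitted shortly before each crossing.
LEVELS = [0.50, 0.75, 0.90]   # L1, L2, L3 (values are illustrative)
WARN_MARGIN = 0.05            # warn this far before a crossing

def check_budget(used: int, budget: int, crossed: set) -> list[str]:
    events = []
    ratio = used / budget
    for i, level in enumerate(LEVELS, start=1):
        if i in crossed:
            continue
        if ratio >= level:
            crossed.add(i)
            events.append(f"crossed L{i}")
            if i == 2:
                # Snapshot at L2 so state survives a later /clear.
                events.append("snapshot captured")
        elif ratio >= level - WARN_MARGIN:
            events.append(f"approaching L{i}")
    return events
```

Each call reports only new events, so a monitoring loop can poll it cheaply as usage grows.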
C3 now features a sophisticated three-tier local intelligence system powered by Ollama:
- Tier 1 (Nano): Ultra-fast intent classification and routing using `qwen2:0.5b`. Sub-100ms classification ensures the right tool is used for every task.
- Tier 2 (Micro): Efficient Q&A and summarization using models like `deepseek-r1:1.5b`. Ideal for "last-turn" context retrieval and session summaries.
- Tier 3 (Base): Complex code analysis and technical reasoning using `llama3.2:3b` or larger.
- Real-time Streaming: Token-by-token response delivery via SSE for an instant, responsive UI experience.
- Semantic Caching: Persistent disk-based cache for LLM results reduces latency for repeated tasks to zero.
- Dynamic Context Control: Automatic `num_ctx` optimization right-sizes the model context window for every task type.
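Put together, tier selection plus context right-sizing might look like the sketch below. The model names come from the tier list above; the routing rules, token estimates, and window sizes are assumptions for illustration:

```python
# Model names match the tiers above; routing rules and window sizes
# are illustrative assumptions, not C3's actual tuning.
TIER_MODELS = {
    "nano":  "qwen2:0.5b",        # intent classification and routing
    "micro": "deepseek-r1:1.5b",  # Q&A, summaries, last-turn retrieval
    "base":  "llama3.2:3b",       # code analysis, technical reasoning
}
CTX_WINDOWS = (2048, 4096, 8192, 16384)

def route(task: str) -> str:
    if task in ("classify", "route"):
        return "nano"
    if task in ("qa", "summarize", "last_turn"):
        return "micro"
    return "base"  # heavier work falls through to the base tier

def pick_num_ctx(est_tokens: int, headroom: float = 1.25) -> int:
    # Smallest window that fits the estimate plus headroom.
    need = int(est_tokens * headroom)
    for w in CTX_WINDOWS:
        if w >= need:
            return w
    return CTX_WINDOWS[-1]

tier = route("summarize")
print(TIER_MODELS[tier], pick_num_ctx(3000))  # prints "deepseek-r1:1.5b 4096"
```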
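The persistent result cache can be approximated as a content-addressed file store. The layout below is a sketch under that assumption, not C3's on-disk format, and it does exact-match lookup only (no semantic-similarity matching):

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".c3_cache")  # hypothetical location

def _key(model: str, prompt: str) -> str:
    # Content-addressed key: same model + prompt -> same file.
    return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

def cached_completion(model: str, prompt: str, run) -> str:
    """Return a cached result if present; otherwise call `run` and store it."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / (_key(model, prompt) + ".json")
    if path.exists():                      # repeat task: no model call at all
        return json.loads(path.read_text())["result"]
    result = run(model, prompt)            # first occurrence: run the model
    path.write_text(json.dumps({"result": result}))
    return result
```

On a repeated task the model callable is never invoked, which is what "reduces latency for repeated tasks to zero" amounts to in practice.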
- `c3_search`: narrow code retrieval
- `c3_read`: surgical reading of symbols (classes, functions) or line ranges
- `c3_file_map`: structural map for targeted reads
- `c3_compress`: token-reduced file understanding
- `c3_extract`: log/data pre-filtering
- `c3_delegate`: local Ollama offload for heavy analysis
```bash
python cli/c3.py benchmark /path/to/project
```

When local Ollama is available, `c3_delegate` is measured and included in the main benchmark scorecard rather than being treated as an optional side metric.
C3 can manage Claude Code permission tiers via `.claude/settings.local.json`. This feature is specific to Claude Code; other IDEs (Codex, Gemini, VS Code Copilot) do not have equivalent permission systems.
```bash
python cli/c3.py permissions show        # Show current tier
python cli/c3.py permissions standard    # Apply standard tier
python cli/c3.py permissions read-only   # Apply read-only tier
python cli/c3.py permissions permissive  # Apply permissive tier
```

Permissions are also integrated into the setup flow:
```bash
python cli/c3.py init . --force --ide claude --permissions standard
python cli/c3.py install-mcp claude --permissions standard
```

During interactive `c3 init`, a permissions step is offered as Step 4 when the IDE is Claude Code.
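As a rough sketch, applying a tier amounts to writing an allow/deny policy into `.claude/settings.local.json`. The tier contents and rule strings below are illustrative; consult Claude Code's settings documentation for the real schema:

```python
import json
from pathlib import Path

# Illustrative tier definitions; the actual rules C3 writes differ.
TIERS = {
    "read-only":  {"allow": ["Read", "Grep", "Glob"], "deny": ["Write", "Bash"]},
    "standard":   {"allow": ["Read", "Write", "Edit", "Bash(git:*)"], "deny": []},
    "permissive": {"allow": ["*"], "deny": []},
}

def apply_tier(project: Path, tier: str) -> Path:
    """Write the chosen tier's policy into the project's local settings."""
    settings = project / ".claude" / "settings.local.json"
    settings.parent.mkdir(parents=True, exist_ok=True)
    settings.write_text(json.dumps({"permissions": TIERS[tier]}, indent=2))
    return settings
```

Because the file is project-local, each repository can carry its own tier without affecting global Claude Code settings.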
| Tier | Description |
|---|---|
| `read-only` | Safe exploration: no file writes, no git writes, no installs |
| `standard` | Normal dev workflow: edit, build, test, local git (recommended) |
| `permissive` | Full trust: everything except dangerous/destructive operations |
All tiers always allow C3 MCP tools (`c3_compress`, `c3_search`, etc.) and include a deny list that blocks destructive operations (`rm -rf`, `sudo`, `git push --force`, etc.).
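The deny list is, in effect, a pre-execution filter. A minimal sketch using plain substring matching on the patterns mentioned above (C3's real matching is presumably more careful than this):

```python
# Patterns mirror the examples in the text; substring matching is a
# deliberate simplification of whatever rule engine C3 actually uses.
DENY_PATTERNS = ("rm -rf", "sudo ", "git push --force")

def is_blocked(command: str) -> bool:
    return any(pat in command for pat in DENY_PATTERNS)

print(is_blocked("git push --force origin main"))  # prints True
print(is_blocked("git status"))                    # prints False
```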
Permissions can also be set from the Web UI (Settings > Permissions) or the Project Hub. The section only appears when the project IDE is Claude Code.
These remain available, but they are not part of the recommended default path:
- Proxy-driven dynamic tool filtering
- `c3_route`
- `c3_summarize`
- `c3_raw`
- `c3_why_context`
- `c3_token_stats`
- `c3_context_status`
- `c3_notifications`
- `CLAUDE.md` lifecycle tools
```bash
python cli/c3.py ui /path/to/project
```

The UI now treats direct MCP mode as the recommended default and labels proxy mode as advanced.
C3 is a labor of love by a heavy AI power user dedicated to building the tools I wish already existed. If C3 saves you time (and context tokens), consider supporting its development:
- 💖 Sponsor on GitHub: github.com/sponsors/drknowhow
- ☕ One-time Support: Every coffee helps fund the thousands of API test runs required to keep C3 stable.
- ⭐ Star the Repo: If you find this useful, a star helps others discover C3.
- 🛠️ Contribute: As an AI-first builder, I'm always looking for better workflows. Open an issue or a PR if you have ideas!
- $5/mo Supporter: Coffee & API Token Fund.
- $20/mo AI Power User: Your name in `SPONSORS.md` + early access to experiments.
- $75/mo Builder Tier: Priority on feature requests for your AI workflows.
- $200/mo Partner Tier: Logo in README + monthly AI strategy call.
- Claude Code hooks still enforce large-read and log-read guardrails when installed.
- `--git` runs a local-only `git init`; it does not add remotes or use any hosted service.
- Existing installs are not automatically migrated; rerun `install-mcp` or `init --force` to switch defaults.
- Legacy `SHOW_SAVINGS_SUMMARY` config is still honored for compatibility.