For complex tasks, add explicit "done when" criteria:
```markdown
- [ ] T2.0: Authentication system
  **Done when**:
  - [ ] User can register with email
  - [ ] User can log in and get a token
  - [ ] Protected routes reject unauthenticated requests
```
This prevents a premature "task complete" when the implementation details are done but the feature doesn't actually work.
Completing all subtasks does not mean the parent task is complete.
The parent task describes what the user gets.
Subtasks describe how to build it.
Always re-read the parent task description before marking it complete. Verify the stated deliverable exists and works.
## Why Do These Approaches Work?
The patterns in this guide aren't invented here: They are practitioner translations of well-established, peer-reviewed research, most of which predates the current AI (hype) wave.
The underlying ideas come from decades of work in machine learning, cognitive science, and numerical optimization. For a concrete case study showing how these principles play out when an agent decides whether to follow instructions (attention competition, optimization toward least-resistance paths, and observable compliance as a design goal) see The Dog Ate My Homework.
Phased work ("Explore → Plan → Implement") applies chain-of-thought reasoning: Decomposing a problem into sequential steps before acting. Forcing intermediate reasoning steps measurably improves output quality in language models, just as it does in human problem-solving. Wei et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022).
Root-cause prompts ("Why doesn't X work?") use step-back abstraction: Retreating to a higher-level question before diving into specifics. This mirrors how experienced engineers debug: they ask "what should happen?" before asking "what went wrong?" Zheng et al., Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models (2023).
Exploring alternatives ("Propose 2-3 approaches") leverages self-consistency: Generating multiple independent reasoning paths and selecting the most coherent result. The idea traces back to ensemble methods in ML: A committee of diverse solutions outperforms any single one. Wang et al., Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022).
Impact analysis ("What would break if we...") is a form of tree-structured exploration: Branching into multiple consequence paths before committing. This is the same principle behind game-tree search (minimax, MCTS) that has powered decision-making systems since the 1950s. Yao et al., Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023).
Motivation prompting ("Build X because Y") works through goal conditioning: Providing the objective function alongside the task. In optimization terms, you are giving the gradient direction, not just the loss. The model can make locally coherent decisions that serve the global objective because it knows what "better" means.
Scope constraints ("Only change files in X") apply constrained optimization: Bounding the search space to prevent divergence. This is the same principle behind regularization in ML: Without boundaries, powerful optimizers find solutions that technically satisfy the objective but are practically useless.
CLI commands as prompts ("Run ctx status") interleave reasoning with acting: The model thinks, acts on external tools, observes results, then thinks again. Grounding reasoning in real tool output reduces hallucination because the model can't ignore evidence it just retrieved. Yao et al., ReAct: Synergizing Reasoning and Acting in Language Models (2022).
Task decomposition ("Prompts by Task Type") applies least-to-most prompting: Breaking a complex problem into subproblems and solving them sequentially, each building on the last. This is the research version of "plan, then implement one slice." Zhou et al., Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2022).
Explicit planning ("Explore → Plan → Implement") is directly supported by plan-and-solve prompting, which addresses missing-step failures in zero-shot reasoning by extracting a plan before executing. The phased structure prevents the model from jumping to code before understanding the problem. Wang et al., Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models (2023).
Session reflection ("What did we learn?", /ctx-reflect) is a form of verbal reinforcement learning: Improving future performance by persisting linguistic feedback as memory rather than updating weights. This is exactly what LEARNINGS.md and DECISIONS.md provide: a durable feedback signal across sessions. Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning (2023).
These aren't prompting "hacks" of the kind you will find in the "1000 AI Prompts for the Curious" listicles: They are applications of foundational principles:
Decomposition,
Abstraction,
Ensemble Reasoning,
Search,
and Constrained Optimization.
They work because language models are, at their core, optimization systems navigating probabilistic landscapes.
The Attention Budget: Why your AI forgets what you just told it, and how token budgets shape context strategy
The Dog Ate My Homework: A case study in making agents follow instructions: attention timing, delegation decay, and observable compliance as a design goal
Found a prompt that works well? Open an issue or PR with:
- The prompt text
- What behavior it triggers
- When to use it
- Why it works (optional but helpful)
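For illustration, a contribution might be shaped like this (the prompt and field names below are a hypothetical sketch, not an official template):

```markdown
### Prompt: "Propose 2-3 approaches before implementing"

**Behavior**: Triggers self-consistency: the model generates alternatives
instead of committing to its first idea.

**Use when**: Architectural choices with non-obvious trade-offs.

**Why it works**: Multiple reasoning paths surface failure modes that a
single path misses.
```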
Dive Deeper:
Recipes: targeted how-to guides for specific tasks
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
# My AI Keeps Making the Same Mistakes

## The Problem
You found a bug last Tuesday. You debugged it, understood the root cause, and moved on. Today, a new session hits the exact same bug. The AI rediscovers it from scratch, burning twenty minutes on something you already solved.
Worse: you spent an hour last week evaluating two database migration strategies, picked one, documented why in a comment somewhere, and now the AI is cheerfully suggesting the approach you rejected. Again.
This is not a model problem. It is a memory problem. Without persistent context, every session starts with amnesia.
## How ctx Stops the Loop
ctx gives your AI three files that directly prevent repeated mistakes, each targeting a different failure mode.
### DECISIONS.md: Stop Relitigating Settled Choices
When you make an architectural decision, record it with rationale and rejected alternatives. The AI reads this at session start and treats it as settled.
```markdown
## [2026-02-12] Use JWT for Authentication

**Status**: Accepted

**Context**: Need stateless auth for the API layer.

**Decision**: JWT with short-lived access tokens and refresh rotation.

**Rationale**: Stateless, scales horizontally, team has prior experience.

**Alternatives Considered**:
- Session-based auth: Rejected. Requires sticky sessions or shared store.
- API keys only: Rejected. No user identity, no expiry rotation.
```
Next session, when the AI considers auth, it reads this entry and builds on the decision instead of re-debating it. If someone asks "why not sessions?", the rationale is already there.
### LEARNINGS.md: Capture Gotchas Once
Learnings are the bugs, quirks, and non-obvious behaviors that cost you time the first time around. Write them down so they cost you zero time the second time.
```markdown
## Build

### CGO Required for SQLite on Alpine

**Discovered**: 2026-01-20

**Context**: Docker build failed silently with "no such table" at runtime.

**Lesson**: The go-sqlite3 driver requires CGO_ENABLED=1 and gcc
installed in the build stage. Alpine needs apk add build-base.

**Application**: Always use the golang:alpine image with build-base
for SQLite builds. Never set CGO_ENABLED=0.
```
Without this entry, the next session that touches the Dockerfile will hit the same wall. With it, the AI knows before it starts.
### CONSTITUTION.md: Draw Hard Lines
Some mistakes are not about forgetting - they are about boundaries the AI should never cross. CONSTITUTION.md sets inviolable rules.
```markdown
* [ ] Never commit secrets, tokens, API keys, or credentials
* [ ] Never disable security linters without a documented exception
* [ ] All database migrations must be reversible
```
The AI reads these as absolute constraints. It does not weigh them against convenience. It refuses tasks that would violate them.
## The Accumulation Effect
Each of these files grows over time. Session one captures two decisions. Session five adds a tricky learning about timezone handling. Session twelve records a convention about error message formatting.
By session twenty, your AI has a knowledge base that no single person carries in their head. New team members - human or AI - inherit it instantly.
The key insight: you are not just coding. You are building a knowledge layer that makes every future session faster.
ctx files version with your code in git. They survive branch switches, team changes, and model upgrades. The context outlives any single session.
## Getting Started
Capture your first decision or learning right now:
```shell
ctx add decision "Use PostgreSQL" \
  --context "Need a relational database for the project" \
  --rationale "Team expertise, JSONB support, mature ecosystem"

ctx add learning "Vitest mock hoisting" \
  --context "Tests failing intermittently" \
  --lesson "vi.mock() must be at file top level" \
  --application "Use vi.doMock() for dynamic mocks"
```
## Further Reading
Knowledge Capture: the full workflow for persisting decisions, learnings, and conventions
Context Files Reference: structure and format for every file in .context/
About ctx: the bigger picture - why persistent context changes how you work with AI
# Operations
Guides for installing, upgrading, integrating, and running ctx.
Run an unattended AI agent that works through tasks overnight, with ctx providing persistent memory between iterations.
# Autonomous Loops

## Autonomous AI Development
Iterate until done.
An autonomous loop is an iterative AI development workflow where an agent works on tasks until completion, without constant human intervention.
Two pieces make this possible:

- ctx provides the memory: persistent context that survives across iterations
- The loop provides the automation: continuous execution until done
Together, they enable fully autonomous AI development where the agent remembers everything across iterations.
Origin
This pattern is inspired by Geoffrey Huntley's Ralph Wiggum technique.
We use generic terminology here so the concepts remain clear regardless of trends.
## How It Works
```mermaid
graph TD
    A[Start Loop] --> B[Load .context/loop.md]
    B --> C[AI reads .context/]
    C --> D[AI picks task from TASKS.md]
    D --> E[AI completes task]
    E --> F[AI updates context files]
    F --> G[AI commits changes]
    G --> H{Check signals}
    H -->|SYSTEM_CONVERGED| I[Done - all tasks complete]
    H -->|SYSTEM_BLOCKED| J[Done - needs human input]
    H -->|Continue| B
```
1. Loop reads .context/loop.md and invokes AI
2. AI loads context from .context/
3. AI picks one task and completes it
4. AI updates context files (mark task done, add learnings)
5. AI commits changes
6. Loop checks for completion signals
7. Repeat until converged or blocked
## Quick Start: Shell While Loop (Recommended)
The best way to run an autonomous loop is a plain shell script that invokes your AI tool in a fresh process on each iteration. This is "pure ralph":
The only state that carries between iterations is what lives in .context/ and the git history. No context window bleed, no accumulated tokens, no hidden state.
Create a loop.sh:
```shell
#!/bin/bash
# loop.sh: an autonomous iteration loop

PROMPT_FILE="${1:-.context/loop.md}"
MAX_ITERATIONS="${2:-10}"
OUTPUT_FILE="/tmp/loop_output.txt"

for i in $(seq 1 $MAX_ITERATIONS); do
  echo "=== Iteration $i ==="

  # Invoke AI with prompt
  cat "$PROMPT_FILE" | claude --print > "$OUTPUT_FILE" 2>&1

  # Display output
  cat "$OUTPUT_FILE"

  # Check for completion signals
  if grep -q "SYSTEM_CONVERGED" "$OUTPUT_FILE"; then
    echo "Loop complete: All tasks done"
    break
  fi

  if grep -q "SYSTEM_BLOCKED" "$OUTPUT_FILE"; then
    echo "Loop blocked: Needs human input"
    break
  fi

  sleep 2
done
```
Make it executable and run:
```shell
chmod +x loop.sh
./loop.sh
```
You can also generate this script with ctx loop (see CLI Reference).
### Why Do We Use a Shell Loop?
Each iteration starts a fresh AI process with zero context window history. The agent knows only what it reads from .context/ files: Exactly the information you chose to persist.
This is the core loop principle: memory is explicit, not accidental.
## Alternative: Claude Code's Built-in Loop
Claude Code's built-in /loop command is convenient for quick iterations, but be aware of important caveats:
This Loop Is Not Pure
Claude Code's /loop runs all iterations within the same session. This means:
State leaks between iterations: The context window accumulates output from every previous iteration. The agent "remembers" things it saw earlier (even if they were never persisted to .context/).
Token budget degrades: Each iteration adds to the context window, leaving less room for actual work in later iterations.
Not ergonomic for long runs: Users report that the built-in loop is less predictable for 10+ iteration runs compared to a shell loop.
For short explorations (2-5 iterations) or interactive use, /loop works fine. For overnight unattended runs or anything where iteration independence matters, use the shell while loop instead.
The prompt file instructs the AI on how to work autonomously. Here's a template:
```markdown
# Autonomous Development Prompt

You are working on this project autonomously. Follow these steps:

## 1. Load Context

Read these files in order:

1. `.context/CONSTITUTION.md`: NEVER violate these rules
2. `.context/TASKS.md`: Find work to do
3. `.context/CONVENTIONS.md`: Follow these patterns
4. `.context/DECISIONS.md`: Understand past choices

## 2. Pick One Task

From `.context/TASKS.md`, select ONE task that is:

- Not blocked
- Highest priority available
- Within your capabilities

## 3. Complete the Task

- Write code following conventions
- Run tests if applicable
- Keep changes focused and minimal

## 4. Update Context

After completing work:

- Mark task complete in `TASKS.md`
- Add any learnings to `LEARNINGS.md`
- Add any decisions to `DECISIONS.md`

## 5. Commit Changes

Create a focused commit with clear message.

## 6. Signal Status

End your response with exactly ONE of:

- `SYSTEM_CONVERGED`: All tasks in TASKS.md are complete
- `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)
- (no signal): More work remains, continue to next iteration

## Rules

- ONE task per iteration
- NEVER skip tests
- NEVER violate CONSTITUTION.md
- Commit after each task
```
| Signal | Meaning | When to Use |
|---|---|---|
| SYSTEM_CONVERGED | All tasks complete | No pending tasks in TASKS.md |
| SYSTEM_BLOCKED | Cannot proceed | Needs clarification, access, or decision |
| BOOTSTRAP_COMPLETE | Initial setup done | Project scaffolding finished |

### Example Usage
**Converged state**:
```text
I've completed all tasks in TASKS.md:
- [x] Set up project structure
- [x] Implement core API
- [x] Add authentication
- [x] Write tests

No pending tasks remain.

SYSTEM_CONVERGED
```
**Blocked state**:
```text
I cannot proceed with the "Deploy to production" task because:
- Missing AWS credentials
- Need confirmation on region selection

Please provide credentials and confirm deployment region.

SYSTEM_BLOCKED
```
## Why ctx and Loops Work Well Together

| Without ctx | With ctx |
|---|---|
| Each iteration starts fresh | Each iteration has full history |
| Decisions get re-made | Decisions persist in DECISIONS.md |
| Learnings are lost | Learnings accumulate in LEARNINGS.md |
| Tasks can be forgotten | Tasks tracked in TASKS.md |

### Automatic Context Updates
During the loop, the AI should update context files:
```text
End EVERY response with one of:
- SYSTEM_CONVERGED (if all tasks done)
- SYSTEM_BLOCKED (if stuck)
```
### Context Not Persisting
Cause: AI not updating context files
Fix: Add explicit instructions to .context/loop.md:
```text
After completing a task, you MUST:
1. Run: ctx task complete "<task>"
2. Add learnings: ctx add learning "..."
```
Cause: Task not marked complete before next iteration
Fix: Ensure commit happens after context update:
```markdown
Order of operations:
1. Complete coding work
2. Update context files (`ctx task complete`, `ctx add`)
3. Commit **ALL** changes including `.context/`
4. Then signal status
```
```shell
# From the ctx repository
claude /plugin install ./internal/assets/claude

# Or from the marketplace
claude /plugin marketplace add ActiveMemory/ctx
claude /plugin install ctx@activememory-ctx
```
Ensure the Plugin Is Enabled
Installing a plugin registers it, but local installs may not auto-enable it globally. Verify ~/.claude/settings.json contains:
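As a sketch, the relevant entry looks something like the following (the exact key names are an assumption that may vary between Claude Code versions; verify against your own settings file):

```json
{
  "enabledPlugins": {
    "ctx@activememory-ctx": true
  }
}
```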
Without this, the plugin's hooks and skills won't appear in other projects. Running ctx init auto-enables the plugin; use --no-plugin-enable to skip this step.
This gives you:
| Component | Purpose |
|---|---|
| .context/ | All context files |
| CLAUDE.md | Bootstrap instructions |
| Plugin hooks | Lifecycle automation |
| Plugin skills | Agent Skills |

### How It Works
```mermaid
graph TD
    A[Session Start] --> B[Claude reads CLAUDE.md]
    B --> C[PreToolUse hook runs]
    C --> D[ctx agent loads context]
    D --> E[Work happens]
    E --> F[Session End]
```
Session start: Claude reads CLAUDE.md, which tells it to check .context/
First tool use: PreToolUse hook runs ctx agent and emits the context packet (subsequent invocations within the cooldown window are silent)
Next session: Claude reads context files and continues with context
The ctx plugin provides lifecycle hooks implemented as Go subcommands (ctx system *):
| Hook | Event | Purpose |
|---|---|---|
| ctx system context-load-gate | PreToolUse (.*) | Auto-inject context on first tool use |
| ctx system block-non-path-ctx | PreToolUse (Bash) | Block ./ctx or go run: force $PATH install |
| ctx system qa-reminder | PreToolUse (Bash) | Remind agent to lint/test before committing |
| ctx system specs-nudge | PreToolUse (EnterPlanMode) | Nudge agent to use project specs when planning |
| ctx system check-context-size | UserPromptSubmit | Nudge context assessment as sessions grow |
| ctx system check-ceremonies | UserPromptSubmit | Nudge /ctx-remember and /ctx-wrap-up adoption |
| ctx system check-persistence | UserPromptSubmit | Remind to persist learnings/decisions |
| ctx system check-journal | UserPromptSubmit | Remind to export/enrich journal entries |
| ctx system check-reminders | UserPromptSubmit | Relay pending reminders at session start |
| ctx system check-version | UserPromptSubmit | Warn when binary/plugin versions diverge |
| ctx system check-resources | UserPromptSubmit | Warn when memory/swap/disk/load hit DANGER level |
| ctx system check-knowledge | UserPromptSubmit | Nudge when knowledge files grow large |
| ctx system check-map-staleness | UserPromptSubmit | Nudge when ARCHITECTURE.md is stale |
| ctx system heartbeat | UserPromptSubmit | Session-alive signal with prompt count metadata |
| ctx system post-commit | PostToolUse (Bash) | Nudge context capture and QA after git commits |
A catch-all PreToolUse hook also runs ctx agent on every tool use (with cooldown) to autoload context.
The --session $PPID flag isolates the cooldown per session: $PPID resolves to the Claude Code process PID, so concurrent sessions don't interfere. The default cooldown is 10 minutes; use --cooldown 0 to disable it.
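For illustration, a minimal sketch of how such a hook entry might be wired up. The surrounding JSON structure is an assumption based on Claude Code's hooks configuration format; the `--session` flag itself comes from this page. Consult the plugin's actual hook definitions before copying this:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          { "type": "command", "command": "ctx agent --session $PPID" }
        ]
      }
    ]
  }
}
```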
When developing ctx locally (adding skills, hooks, or changing plugin behavior), Claude Code caches the plugin by version. You must bump the version in both plugin.json and marketplace.json and update the marketplace for changes to take effect:
Start a new Claude Code session: skill changes aren't reflected in existing sessions.
Both Version Files Must Match
If you only bump plugin.json but not marketplace.json (or vice versa), Claude Code may not detect the update. Always bump both together.
### Troubleshooting

| Issue | Solution |
|---|---|
| Context not loading | Check ctx is in PATH: which ctx |
| Hook errors | Verify plugin is installed: claude /plugin list |
| New skill not visible | Bump version in both plugin.json files, update marketplace |

### Manual Context Load
If hooks aren't working, manually load context:
```shell
# Get context packet
ctx agent --budget 4000

# Or paste into conversation
cat .context/TASKS.md
```
The ctx plugin ships Agent Skills following the agentskills.io specification.
These are invoked in Claude Code with /skill-name.
#### Session Lifecycle Skills

| Skill | Description |
|---|---|
| /ctx-remember | Recall project context at session start (ceremony) |
| /ctx-wrap-up | End-of-session context persistence (ceremony) |
| /ctx-status | Show context summary (tasks, decisions, learnings) |
| /ctx-agent | Get AI-optimized context packet |
| /ctx-next | Suggest 1-3 concrete next actions from context |
| /ctx-commit | Commit with integrated context capture |
| /ctx-reflect | Review session and suggest what to persist |
| /ctx-remind | Manage session-scoped reminders |
| /ctx-pause | Pause context hooks for this session |
| /ctx-resume | Resume context hooks after a pause |

#### Context Persistence Skills

| Skill | Description |
|---|---|
| /ctx-add-task | Add a task to TASKS.md |
| /ctx-add-learning | Add a learning to LEARNINGS.md |
| /ctx-add-decision | Add a decision with context/rationale/consequence |
| /ctx-add-convention | Add a coding convention to CONVENTIONS.md |
| /ctx-archive | Archive completed tasks |

#### Scratchpad Skills

| Skill | Description |
|---|---|
| /ctx-pad | Manage encrypted scratchpad entries |

#### Session History Skills

| Skill | Description |
|---|---|
| /ctx-history | Browse AI session history |
| /ctx-journal-enrich | Enrich a journal entry with frontmatter/tags |
| /ctx-journal-enrich-all | Full journal pipeline: export if needed, then batch-enrich |

#### Blogging Skills
Blogging is a Better Way of Creating Release Notes
The blogging workflow can also double as generating release notes:
AI reads your git commit history and creates a "narrative", which is essentially what a release note is.
| Skill | Description |
|---|---|
| /ctx-blog | Generate blog post from recent activity |
| /ctx-blog-changelog | Generate blog post from commit range with theme |

#### Auditing & Health Skills

| Skill | Description |
|---|---|
| /ctx-doctor | Troubleshoot ctx behavior with structural health checks |
| /ctx-drift | Detect and fix context drift (structural + semantic) |
| /ctx-consolidate | Merge redundant learnings or decisions into denser entries |
| /ctx-alignment-audit | Audit doc claims against playbook instructions |
| /ctx-prompt-audit | Analyze session logs for vague prompts |
| /check-links | Audit docs for dead internal and external links |

#### Planning & Execution Skills

| Skill | Description |
|---|---|
| /ctx-loop | Generate a Ralph Loop iteration script |
| /ctx-implement | Execute a plan step-by-step with checks |
| /ctx-import-plans | Import Claude Code plan files into project specs |
| /ctx-worktree | Manage git worktrees for parallel agents |
| /ctx-architecture | Build and maintain architecture maps |

#### Usage Examples
```json
// split to multiple lines for readability
{
  "ai.systemPrompt": "Read .context/TASKS.md and
    .context/CONVENTIONS.md before responding.
    Follow rules in .context/CONSTITUTION.md."
}
```
The --write flag creates .github/copilot-instructions.md, which Copilot reads automatically at the start of every session. This file contains your project's constitution rules, current tasks, conventions, and architecture: giving Copilot persistent context without manual copy-paste.
Re-run ctx setup copilot --write after updating your .context/ files to regenerate the instructions.
The ctx VS Code extension adds a @ctx chat participant to GitHub Copilot Chat, giving you direct access to all context commands from within the editor.
Typing @ctx without a command shows help with all available commands. The extension also supports natural language: asking @ctx about "status" or "drift" routes to the correct command automatically.
#### Configuration

| Setting | Default | Description |
|---|---|---|
| ctx.executablePath | ctx | Path to the ctx binary. Set this if ctx is not in your PATH. |

#### Follow-Up Suggestions
After each command, the extension suggests relevant next steps. For example, after /init it suggests /status and /hook; after /drift it suggests /sync.
ctx init creates a .context/sessions/ directory for storing session data from non-Claude tools. The Markdown session parser scans this directory during ctx journal, enabling session history for Copilot and other tools.
These patterns work without the extension, using Copilot's built-in file awareness:
Pattern 1: Keep context files open
Open .context/CONVENTIONS.md in a split pane. Copilot will reference it.
Pattern 2: Reference in comments
```typescript
// See .context/CONVENTIONS.md for naming patterns
// Following decision in .context/DECISIONS.md: Use PostgreSQL

function getUserById(id: string) {
  // Copilot now has context
}
```
Pattern 3: Paste context into Copilot Chat
```shell
ctx agent --budget 2000
```
Paste output into Copilot Chat for context-aware responses.
```json
// Split to multiple lines for readability
{
  "ai.customInstructions": "Always read .context/CONSTITUTION.md first.
    Check .context/TASKS.md for current work.
    Follow patterns in .context/CONVENTIONS.md."
}
```
```text
You are working on a project with persistent context in .context/

Before responding:
1. Read .context/CONSTITUTION.md - NEVER violate these rules
2. Check .context/TASKS.md for current work
3. Follow .context/CONVENTIONS.md patterns
4. Reference .context/DECISIONS.md for architectural choices

When you learn something new, note it for .context/LEARNINGS.md
When you make a decision, document it for .context/DECISIONS.md
```
```xml
<context-update type="task">Implement rate limiting</context-update>
<context-update type="convention">Use kebab-case for files</context-update>
<context-update type="complete">rate limiting</context-update>
```
### Structured Format (learnings, decisions)
Learnings and decisions support structured attributes for better documentation:
Learning with full structure:
```xml
<context-update type="learning"
  context="Debugging Claude Code hooks"
  lesson="Hooks receive JSON via stdin, not environment variables"
  application="Parse JSON stdin with the host language (Go, Python, etc.): no jq needed"
>Hook Input Format</context-update>
```
Decision with full structure:
```xml
<context-update type="decision"
  context="Need a caching layer for API responses"
  rationale="Redis is fast, well-supported, and team has experience"
  consequence="Must provision Redis infrastructure; team training on Redis patterns"
>Use Redis for caching</context-update>
```
Learnings require: context, lesson, application attributes. Decisions require: context, rationale, consequence attributes. Updates missing required attributes are rejected with an error.
Skills That Fight the Platform: Common pitfalls in skill design that work against the host tool
The Anatomy of a Skill That Works: What makes a skill reliable: the E/A/R framework and quality gates
# Integration

## Adopting ctx in Existing Projects
Claude Code User?
You probably want the plugin instead of this page.
Install ctx from the marketplace (/plugin → search \"ctx\" → Install) and you're done: hooks, skills, and updates are handled for you.
See Getting Started for the full walkthrough.
This guide covers adopting ctx in existing projects regardless of which tools your team uses.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#quick-paths","level":2,"title":"Quick Paths","text":"
| You have... | Command | What happens |
| --- | --- | --- |
| Nothing (greenfield) | ctx init | Creates .context/, CLAUDE.md, permissions |
| Existing CLAUDE.md | ctx init --merge | Backs up your file, inserts ctx block after the H1 |
| Existing CLAUDE.md + ctx markers | ctx init --force | Replaces the ctx block, leaves your content intact |
| .cursorrules / .aider.conf.yml | ctx init | ctx ignores those files: they coexist cleanly |
| Team repo, first adopter | ctx init --merge && git add .context/ CLAUDE.md | Initialize and commit for the team |
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#existing-claudemd","level":2,"title":"Existing CLAUDE.md","text":"
This is the most common scenario:
You have a CLAUDE.md with project-specific instructions and don't want to lose them.
You Own CLAUDE.md
After initialization, CLAUDE.md is yours: edit it freely.
Add project instructions, remove sections you don't need, reorganize as you see fit.
The only part ctx manages is the block between the <!-- ctx:context --> and <!-- ctx:end --> markers; everything outside those markers is yours to change at any time.
If you remove the markers, nothing breaks: ctx simply treats the file as having no ctx content and will offer to merge again on the next ctx init.
When ctx init detects an existing CLAUDE.md, it checks for ctx markers (<!-- ctx:context --> ... <!-- ctx:end -->):
| State | Default behavior | With --merge | With --force |
| --- | --- | --- | --- |
| No CLAUDE.md | Creates from template | Creates from template | Creates from template |
| Exists, no ctx markers | Prompts to merge | Auto-merges (no prompt) | Auto-merges (no prompt) |
| Exists, has ctx markers | Skips (already set up) | Skips | Replaces the ctx block only |
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#the-merge-flag","level":3,"title":"The --merge Flag","text":"
--merge auto-merges without prompting. The merge process:
Backs up your existing CLAUDE.md to CLAUDE.md.<timestamp>.bak;
Finds the H1 heading (e.g., # My Project) in your file;
Inserts the ctx block immediately after it;
Preserves everything else untouched.
Your content before and after the ctx block remains exactly as it was.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#before-after-example","level":3,"title":"Before / After Example","text":"
Before: your existing CLAUDE.md:
# My Project\n\n## Build Commands\n\n-`npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
After ctx init --merge:
# My Project\n\n<!-- ctx:context -->\n<!-- DO NOT REMOVE: This marker indicates ctx-managed content -->\n\n## IMPORTANT: You Have Persistent Memory\n\nThis project uses Context (`ctx`) for context persistence across sessions.\n...\n\n<!-- ctx:end -->\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
Your build commands and code style sections are untouched. The ctx block sits between markers and can be updated independently.
If your CLAUDE.md already has ctx markers (from a previous ctx init), the default behavior is to skip it. Use --force to replace the ctx block with the latest template (useful after upgrading ctx):
ctx init --force\n
This only replaces content between <!-- ctx:context --> and <!-- ctx:end -->. Your own content outside the markers is preserved. A timestamped backup is created before any changes.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#undoing-a-merge","level":3,"title":"Undoing a Merge","text":"
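Because --merge always writes a timestamped backup first, undoing a merge is a one-line restore. A minimal sketch in a scratch directory (the backup filename below is illustrative; use the actual CLAUDE.md.<timestamp>.bak in your project, and rerun ctx init --merge later if you change your mind):

```shell
cd "$(mktemp -d)"   # scratch dir for the demo
# Simulate the state after `ctx init --merge`: backup + merged file
printf '# My Project\noriginal content\n' > CLAUDE.md.20250101120000.bak
printf '# My Project\nmerged ctx block\n' > CLAUDE.md
# Undo: copy the backup over the merged file
cp CLAUDE.md.20250101120000.bak CLAUDE.md
cat CLAUDE.md
```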
ctx doesn't touch tool-specific config files. It creates its own files (.context/, CLAUDE.md) and coexists with whatever you already have.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-does-ctx-create","level":3,"title":"What Does ctx Create?","text":"
| ctx creates | ctx does NOT touch |
| --- | --- |
| .context/ directory | .cursorrules |
| CLAUDE.md (or merges into) | .aider.conf.yml |
| .claude/settings.local.json (seeded by ctx init; the plugin manages hooks and skills) | .github/copilot-instructions.md |
|  | .windsurfrules |
|  | Any other tool-specific config |
Claude Code hooks and skills are provided by the ctx plugin, installed from the Claude Code marketplace (/plugin → search \"ctx\" → Install).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#running-ctx-alongside-other-tools","level":3,"title":"Running ctx Alongside Other Tools","text":"
The .context/ directory is the source of truth. Tool-specific configs point to it:
Cursor: Reference .context/ files in your system prompt (see Cursor setup)
Aider: Add .context/ files to the read: list in .aider.conf.yml (see Aider setup)
Copilot: Keep .context/ files open or reference them in comments (see Copilot setup)
You can generate a tool-specific configuration with ctx setup <tool> (e.g. ctx setup cursor).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#migrating-content-into-context","level":3,"title":"Migrating Content Into .context/","text":"
If you have project knowledge scattered across .cursorrules or custom prompt files, consider migrating it:
Rules / invariants → .context/CONSTITUTION.md
Code patterns → .context/CONVENTIONS.md
Architecture notes → .context/ARCHITECTURE.md
Known issues / tips → .context/LEARNINGS.md
You don't need to delete the originals: ctx and tool-specific files can coexist. But centralizing in .context/ means every tool gets the same context.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#team-adoption","level":2,"title":"Team Adoption","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#context-is-designed-to-be-committed","level":3,"title":".context/ Is Designed to Be Committed","text":"
The context files (tasks, decisions, learnings, conventions, architecture) are meant to live in version control. However, some subdirectories are personal or sensitive and should not be committed.
ctx init automatically adds these .gitignore entries:
# Journals contain full session transcripts: personal, potentially large\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Runtime state and logs (ephemeral, machine-specific):\n.context/state/\n.context/logs/\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
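You can confirm an entry actually matches with git check-ignore; a quick sketch in a throwaway repo (paths mirror the entries above):

```shell
cd "$(mktemp -d)"
git init -q .
# Abridged version of the entries ctx init adds
printf '.context/journal/\n.context/state/\n' > .gitignore
mkdir -p .context/journal
touch .context/journal/2025-01-01.md
# check-ignore exits 0 and echoes the path when the file is covered
git check-ignore .context/journal/2025-01-01.md && echo "ignored as expected"
```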
With those in place, committing is straightforward:
# One person initializes\nctx init --merge\n\n# Commit context files (journals and keys are already gitignored)\ngit add .context/ CLAUDE.md\ngit commit -m \"Add ctx context management\"\ngit push\n
Teammates pull and immediately have context. No per-developer setup needed.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-about-claude","level":3,"title":"What About .claude/?","text":"
The .claude/ directory contains permissions that ctx init seeds. Hooks and skills are provided by the ctx plugin (not per-project files).
Context files are plain Markdown. Resolve conflicts the same way you would for any other documentation file:
# After a conflicting pull\ngit diff .context/TASKS.md # See both sides\n# Edit to keep both sets of tasks, then:\ngit add .context/TASKS.md\ngit commit\n
Common conflict scenarios:
TASKS.md: Two people added tasks: Keep both.
DECISIONS.md: Same decision recorded differently: Unify the entry.
CLAUDE.md instructions work immediately for Claude Code users;
Other tool users can adopt at their own pace using ctx setup <tool>;
Context files benefit everyone who reads them, even without tool integration.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verifying-it-worked","level":2,"title":"Verifying It Worked","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#check-status","level":3,"title":"Check Status","text":"
ctx status\n
You should see your context files listed with token counts and no warnings.
Start a new AI session and ask: \"Do you remember?\"
The AI should cite specific context:
Current tasks from .context/TASKS.md;
Recent decisions or learnings;
Session history (if you've had prior sessions);
If it responds with a generic \"I don't have memory\", check that ctx is on your PATH (which ctx) and that hooks are configured (see Troubleshooting).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verify-the-merge","level":3,"title":"Verify the Merge","text":"
If you used --merge, check that your original content is intact:
# Your original content should still be there\ncat CLAUDE.md\n\n# The ctx block should be between markers\ngrep -c \"ctx:context\" CLAUDE.md # Should print 1\ngrep -c \"ctx:end\" CLAUDE.md # Should print 1\n
","path":["Operations","Integration"],"tags":[]},{"location":"operations/release/","level":1,"title":"Cutting a Release","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#prerequisites","level":2,"title":"Prerequisites","text":"
Before you can cut a release you need:
Push access to origin (GitHub)
GPG signing configured (make gpg-test)
Go installed (version in go.mod)
Zensical installed (make site-setup)
A clean working tree (git status shows nothing to commit)
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#step-by-step","level":2,"title":"Step-by-Step","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#1-update-the-version-file","level":3,"title":"1. Update the VERSION File","text":"
echo \"0.9.0\" > VERSION\ngit add VERSION\ngit commit -m \"chore: bump version to 0.9.0\"\n
The VERSION file uses bare semver (0.9.0), no v prefix. The release script adds the v prefix for git tags.
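The derivation is a one-liner; a sketch of what the release script does (variable names are illustrative):

```shell
cd "$(mktemp -d)"
echo "0.9.0" > VERSION
tag="v$(cat VERSION)"   # bare semver in the file, v prefix on the git tag
echo "$tag"             # → v0.9.0
```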
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#2-generate-release-notes","level":3,"title":"2. Generate Release Notes","text":"
In Claude Code:
/_ctx-release-notes\n
This analyzes commits since the last tag and writes dist/RELEASE_NOTES.md. The release script refuses to proceed without this file.
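\"Commits since the last tag\" is the standard git range query; a sketch of the underlying lookup in a throwaway repo (the skill itself may gather more than this):

```shell
cd "$(mktemp -d)"
git init -q . && git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "feat: initial release"
git tag v0.8.0
git commit -q --allow-empty -m "fix: handle empty VERSION file"
last_tag="$(git describe --tags --abbrev=0)"   # most recent tag
git log --oneline "${last_tag}..HEAD"          # raw material for the notes
```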
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#3-verify-docs-and-commit-any-remaining-changes","level":3,"title":"3. Verify Docs and Commit Any Remaining Changes","text":"
/ctx-check-links # audit docs for dead links\nmake audit # full check: fmt, vet, lint, style, test\ngit status # must be clean\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#4-run-the-release","level":3,"title":"4. Run the Release","text":"
make release\n
Or, if you are in a Claude Code session:
/_ctx-release\n
The release script does everything in order:
| Step | What happens |
| --- | --- |
| 1 | Reads VERSION, verifies release notes exist |
| 2 | Verifies working tree is clean |
| 3 | Updates version in 4 config files (plugin.json, marketplace.json, VS Code package.json + lock) |
| 4 | Updates download URLs in 3 doc files (index.md, getting-started.md, integrations.md) |
| 5 | Adds new row to versions.md |
| 6 | Rebuilds the documentation site (make site) |
| 7 | Commits all version and docs updates |
| 8 | Runs make test and make smoke |
| 9 | Builds binaries for all 6 platforms via hack/build-all.sh |
| 10 | Creates a signed git tag (v0.9.0) |
| 11 | Pushes the tag to origin |
| 12 | Updates and pushes the latest tag |
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#5-github-ci-takes-over","level":3,"title":"5. GitHub CI Takes Over","text":"
Pushing a v* tag triggers .github/workflows/release.yml:
Checks out the tagged commit
Runs the full test suite
Builds binaries for all platforms
Creates a GitHub Release with auto-generated notes
Uploads binaries and SHA256 checksums
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#6-verify","level":3,"title":"6. Verify","text":"
GitHub Releases shows the new version
All 6 binaries are attached (linux/darwin x amd64/arm64, windows x amd64)
SHA256 files are attached
Release notes look correct
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#what-gets-updated-automatically","level":2,"title":"What Gets Updated Automatically","text":"
The release script updates 8 files so you do not have to:
| File | What changes |
| --- | --- |
| internal/assets/claude/.claude-plugin/plugin.json | Plugin version |
| .claude-plugin/marketplace.json | Marketplace version (2 fields) |
| editors/vscode/package.json | VS Code extension version |
| editors/vscode/package-lock.json | VS Code lock version (2 fields) |
| docs/index.md | Download URLs |
| docs/home/getting-started.md | Download URLs |
| docs/operations/integrations.md | VSIX filename version |
| docs/reference/versions.md | New version row + latest pointer |
The Go binary version is injected at build time via -ldflags from the VERSION file. No source file needs editing.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#build-targets-reference","level":2,"title":"Build Targets Reference","text":"
| Target | What it does |
| --- | --- |
| make release | Full release (script + tag + push) |
| make build | Build binary for current platform |
| make build-all | Build all 6 platform binaries |
| make test | Unit tests |
| make smoke | Integration smoke tests |
| make audit | Full check (fmt + vet + lint + drift + docs + test) |
| make site | Rebuild documentation site |
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#troubleshooting","level":2,"title":"Troubleshooting","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#release-notes-not-found","level":3,"title":"\"Release notes not found\"","text":"
ERROR: dist/RELEASE_NOTES.md not found.\n
Run /_ctx-release-notes in Claude Code first, or write dist/RELEASE_NOTES.md manually.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#working-tree-is-not-clean","level":3,"title":"\"Working tree is not clean\"","text":"
ERROR: Working tree is not clean.\n
Commit or stash all changes before running make release.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#tag-already-exists","level":3,"title":"\"Tag already exists\"","text":"
ERROR: Tag v0.9.0 already exists.\n
You cannot release the same version twice. Either bump VERSION to a new version, or delete the old tag if the release was incomplete:
git tag -d v0.9.0\ngit push origin :refs/tags/v0.9.0\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#ci-build-fails-after-tag-push","level":3,"title":"CI build fails after tag push","text":"
The tag is already published. Fix the issue, bump to a patch version (e.g. 0.9.1), and release again. Do not force-push tags that others may have already fetched.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/upgrading/","level":1,"title":"Upgrade","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade","level":2,"title":"Upgrade","text":"
New versions of ctx may ship updated permissions, CLAUDE.md directives, or plugin hooks and skills.
Claude Code User?
The marketplace can update skills, hooks, and prompts independently: /plugin → select ctx → Update now (or enable auto-update).
The ctx binary is separate: rebuild from source or download a new release when one is available, then run ctx init --force --merge. Knowledge files are preserved automatically.
# Plugin users (Claude Code)\n# /plugin → select ctx → Update now\n# Then update the binary and reinitialize:\nctx init --force --merge\n\n# From-source / manual users\n# install new ctx binary, then:\nctx init --force --merge\n# /plugin → select ctx → Update now (if using Claude Code)\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-changes-between-versions","level":2,"title":"What Changes Between Versions","text":"
ctx init generates two categories of files:
| Category | Examples | Changes between versions? |
| --- | --- | --- |
| Infrastructure | .claude/settings.local.json (permissions), ctx-managed sections in CLAUDE.md, ctx plugin (hooks + skills) | Yes |
| Knowledge | .context/TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md, CONSTITUTION.md, AGENT_PLAYBOOK.md | No: this is your data |
Infrastructure is regenerated by ctx init and plugin updates. Knowledge files are yours and should never be overwritten.
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade-steps","level":2,"title":"Upgrade Steps","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#1-install-the-new-version","level":3,"title":"1. Install the New Version","text":"
Build from source or download the binary:
cd /path/to/ctx-source\ngit pull\nmake build\nsudo make install\nctx --version # verify\n
--force regenerates infrastructure files (permissions, ctx-managed sections in CLAUDE.md).
--merge preserves your content outside ctx markers.
Knowledge files (.context/TASKS.md, DECISIONS.md, etc.) are preserved automatically: ctx init only overwrites infrastructure, never your data.
Encryption key: The encryption key lives at ~/.ctx/.ctx.key (outside the project). Reinit does not affect it. If you have a legacy key at .context/.ctx.key or ~/.local/ctx/keys/, copy it manually (see Syncing Scratchpad Notes).
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#3-update-the-ctx-plugin","level":3,"title":"3. Update the ctx Plugin","text":"
If you use Claude Code, update the plugin to get new hooks and skills:
Open /plugin in Claude Code.
Select ctx.
Click Update now.
Or enable auto-update so the plugin stays current without manual steps.
If you made manual backups, remove them once satisfied:
rm -rf .context.bak .claude.bak CLAUDE.md.bak\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-if-i-skip-the-upgrade","level":2,"title":"What If I Skip the Upgrade?","text":"
The old binary still works with your existing .context/ files. But you may miss:
New plugin hooks that enforce better practices or catch mistakes;
Updated skill prompts that produce better results;
New .gitignore entries for directories added in newer versions;
Bug fixes in the CLI itself.
The plugin and the binary can be updated independently. You can update the plugin (for new hooks/skills) even if you stay on an older binary, and vice versa.
Context files are plain Markdown: They never break between versions.
Workflow recipes combining ctx commands and skills to solve specific problems.
","path":["Recipes"],"tags":[]},{"location":"recipes/#getting-started","level":2,"title":"Getting Started","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#guide-your-agent","level":3,"title":"Guide Your Agent","text":"
How commands, skills, and conversational patterns work together. Train your agent to be proactive through ask, guide, reinforce.
","path":["Recipes"],"tags":[]},{"location":"recipes/#setup-across-ai-tools","level":3,"title":"Setup Across AI Tools","text":"
Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf. Includes shell completion, watch mode for non-native tools, and verification.
","path":["Recipes"],"tags":[]},{"location":"recipes/#keeping-context-in-a-separate-repo","level":3,"title":"Keeping Context in a Separate Repo","text":"
Store context files outside the project tree: in a private repo, shared directory, or anywhere else. Useful for open source projects with private context or multi-repo setups.
The two bookend rituals for every session: /ctx-remember at the start to load and confirm context, /ctx-wrap-up at the end to review the session and persist learnings, decisions, and tasks.
","path":["Recipes"],"tags":[]},{"location":"recipes/#browsing-and-enriching-past-sessions","level":3,"title":"Browsing and Enriching Past Sessions","text":"
Export your AI session history to a browsable journal site. Enrich entries with metadata and search across months of work.
Leave a message for your next session. Reminders surface automatically at session start and repeat until dismissed. Date-gate reminders to surface only after a specific date.
Silence all nudge hooks for a quick task that doesn't need ceremony overhead. Session-scoped: Other sessions are unaffected. Security hooks still fire.
","path":["Recipes"],"tags":[]},{"location":"recipes/#knowledge-tasks","level":2,"title":"Knowledge & Tasks","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#persisting-decisions-learnings-and-conventions","level":3,"title":"Persisting Decisions, Learnings, and Conventions","text":"
Record architectural decisions with rationale, capture gotchas and lessons learned, and codify conventions so they survive across sessions and team members.
","path":["Recipes"],"tags":[]},{"location":"recipes/#using-the-scratchpad","level":3,"title":"Using the Scratchpad","text":"
Use the encrypted scratchpad for quick notes, working memory, and sensitive values during AI sessions. Natural language in, encrypted storage out.
Uses: ctx pad, /ctx-pad, ctx pad show, ctx pad edit
","path":["Recipes"],"tags":[]},{"location":"recipes/#syncing-scratchpad-notes-across-machines","level":3,"title":"Syncing Scratchpad Notes Across Machines","text":"
Distribute your scratchpad encryption key, push and pull encrypted notes via git, and resolve merge conflicts when two machines edit simultaneously.
Uses: ctx init, ctx pad, ctx pad resolve, scp
","path":["Recipes"],"tags":[]},{"location":"recipes/#bridging-claude-code-auto-memory","level":3,"title":"Bridging Claude Code Auto Memory","text":"
Mirror Claude Code's auto memory (MEMORY.md) into .context/ for version control, portability, and drift detection. Import entries into structured context files with heuristic classification.
Choose the right output pattern for your Claude Code hooks: VERBATIM relay for user-facing reminders, hard gates for invariants, agent directives for nudges, and five more patterns across the spectrum.
Customize what hooks say without changing what they do. Override the QA gate for Python (pytest instead of make lint), silence noisy ceremony nudges, or tailor post-commit instructions for your stack.
Uses: ctx system message list, ctx system message show, ctx system message edit, ctx system message reset
Mermaid sequence diagrams for every system hook: entry conditions, state reads, output, throttling, and exit points. Includes throttling summary table and state file reference.
Uses: All ctx system hooks
","path":["Recipes"],"tags":[]},{"location":"recipes/#auditing-system-hooks","level":3,"title":"Auditing System Hooks","text":"
The 12 system hooks that run invisibly during every session: what each one does, why it exists, and how to verify they're actually firing. Covers webhook-based audit trails, log inspection, and detecting silent hook failures.
Get push notifications when loops complete, hooks fire, or agents hit milestones. Webhook URL is encrypted: never stored in plaintext. Works with IFTTT, Slack, Discord, ntfy.sh, or any HTTP endpoint.
","path":["Recipes"],"tags":[]},{"location":"recipes/#maintenance","level":2,"title":"Maintenance","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#detecting-and-fixing-drift","level":3,"title":"Detecting and Fixing Drift","text":"
Keep context files accurate by detecting structural drift (stale paths, missing files, stale file ages) and task staleness.
Diagnose hook failures, noisy nudges, stale context, and configuration issues. Start with ctx doctor for a structural health check, then use /ctx-doctor for agent-driven analysis of event patterns.
Keep .claude/settings.local.json clean: recommended safe defaults, what to never pre-approve, and a maintenance workflow for cleaning up session debris.
","path":["Recipes"],"tags":[]},{"location":"recipes/#importing-claude-code-plans","level":3,"title":"Importing Claude Code Plans","text":"
Import Claude Code plan files (~/.claude/plans/*.md) into specs/ as permanent project specs. Filter by date, select interactively, and optionally create tasks referencing each imported spec.
Uses: /ctx-import-plans, /ctx-add-task
","path":["Recipes"],"tags":[]},{"location":"recipes/#design-before-coding","level":3,"title":"Design Before Coding","text":"
Front-load design with a four-skill chain: brainstorm the approach, spec the design, task the work, implement step-by-step. Each step produces an artifact that feeds the next.
Encode repeating workflows into reusable skills the agent loads automatically. Covers the full cycle: identify a pattern, create the skill, test with realistic prompts, and iterate until it triggers correctly.
Uses: /ctx-skill-creator, ctx init
","path":["Recipes"],"tags":[]},{"location":"recipes/#running-an-unattended-ai-agent","level":3,"title":"Running an Unattended AI Agent","text":"
Set up a loop where an AI agent works through tasks overnight without you at the keyboard, using ctx for persistent memory between iterations.
This recipe shows how ctx supports long-running agent loops without losing context or intent.
","path":["Recipes"],"tags":[]},{"location":"recipes/#when-to-use-a-team-of-agents","level":3,"title":"When to Use a Team of Agents","text":"
Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
This recipe covers the file overlap test, when teams make things worse, and what ctx provides at each level.
Uses: /ctx-worktree, /ctx-next, ctx status
","path":["Recipes"],"tags":[]},{"location":"recipes/#parallel-agent-development-with-git-worktrees","level":3,"title":"Parallel Agent Development with Git Worktrees","text":"
Split a large backlog across 3-4 agents using git worktrees, each on its own branch and working directory. Group tasks by file overlap, work in parallel, merge back.
Map your project's internal and external dependency structure. Auto-detects Go, Node.js, Python, and Rust. Output as Mermaid, table, or JSON.
Uses: ctx dep, ctx drift
","path":["Recipes"],"tags":[]},{"location":"recipes/autonomous-loops/","level":1,"title":"Running an Unattended AI Agent","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-problem","level":2,"title":"The Problem","text":"
You have a project with a clear list of tasks, and you want an AI agent to work through them autonomously: overnight, unattended, without you sitting at the keyboard.
Each iteration needs to remember what the previous one did, mark tasks as completed, and know when to stop.
Without persistent memory, every iteration starts fresh and the loop collapses. With ctx, each iteration can pick up where the last one left off, but only if the agent persists its context as part of the work.
Unattended operation works because the agent treats context persistence as a first-class deliverable, not an afterthought.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. init context\n# Edit TASKS.md with phased work items\nctx loop --tool claude --max-iterations 10 # 2. generate loop.sh\n./loop.sh 2>&1 | tee /tmp/loop.log & # 3. run the loop\nctx watch --log /tmp/loop.log # 4. process context updates\n# Next morning:\nctx status && ctx load # 5. review the results\n
Read on for permissions, isolation, and completion signals.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| ctx init | Command | Initialize project context and prompt templates |
| ctx loop | Command | Generate the loop shell script |
| ctx watch | Command | Monitor AI output and persist context updates |
| ctx load | Command | Display assembled context (for debugging) |
| /ctx-loop | Skill | Generate loop script from inside Claude Code |
| /ctx-implement | Skill | Execute a plan step-by-step with verification |
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-1-initialize-for-unattended-operation","level":3,"title":"Step 1: Initialize for Unattended Operation","text":"
Start by creating a .context/ directory configured so the agent can work without human input.
ctx init\n
This creates .context/ with the template files (including a loop prompt at .context/loop.md), and seeds Claude Code permissions in .claude/settings.local.json. Install the ctx plugin for hooks and skills.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-2-populate-tasksmd-with-phased-work","level":3,"title":"Step 2: Populate TASKS.md with Phased Work","text":"
Open .context/TASKS.md and organize your work into phases. The agent works through these systematically, top to bottom, using priority tags to break ties.
# Tasks\n\n## Phase 1: Foundation\n\n- [ ] Set up project structure and build system `#priority:high`\n- [ ] Configure testing framework `#priority:high`\n- [ ] Create CI pipeline `#priority:medium`\n\n## Phase 2: Core Features\n\n- [ ] Implement user registration `#priority:high`\n- [ ] Add email verification `#priority:high`\n- [ ] Create password reset flow `#priority:medium`\n\n## Phase 3: Hardening\n\n- [ ] Add rate limiting to API endpoints `#priority:medium`\n- [ ] Improve error messages `#priority:low`\n- [ ] Write integration tests `#priority:medium`\n
Phased organization matters because it gives the agent natural boundaries. Phase 1 tasks should be completable without Phase 2 code existing yet.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-3-configure-the-loop-prompt","level":3,"title":"Step 3: Configure the Loop Prompt","text":"
The loop prompt at .context/loop.md instructs the agent to operate autonomously:
Read .context/CONSTITUTION.md first (hard rules, never violated)
Load context from .context/ files
Pick one task per iteration
Complete the task and update context files
Commit changes (including .context/)
Signal status with a completion signal
You can customize .context/loop.md for your project. The critical parts are the one-task-per-iteration discipline, proactive context persistence, and completion signals at the end:
## Signal Status\n\nEnd your response with exactly ONE of:\n\n* `SYSTEM_CONVERGED`: All tasks in `TASKS.md` are complete (*this is the\n signal the loop script detects by default*)\n* `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n* (*no signal*): More work remains, continue to the next iteration\n\nNote: the loop script only checks for `SYSTEM_CONVERGED` by default.\n`SYSTEM_BLOCKED` is a convention for the human reviewing the log.\n
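A sketch of the convergence check a generated loop.sh performs (the real script's implementation may differ slightly):

```shell
# Pretend this is the agent's final output for one iteration
out="$(printf 'Implemented rate limiting and updated TASKS.md.\nSYSTEM_CONVERGED')"

# The loop only looks for the converged marker; anything else continues
if printf '%s\n' "$out" | grep -q 'SYSTEM_CONVERGED'; then
  echo "converged: stopping loop"
else
  echo "work remains: next iteration"
fi
```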
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-4-configure-permissions","level":3,"title":"Step 4: Configure Permissions","text":"
An unattended agent needs permission to use tools without prompting. By default, Claude Code asks for confirmation on file writes, bash commands, and other operations, which stops the loop and waits for a human who is not there.
There are two approaches.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-a-explicit-allowlist-recommended","level":4,"title":"Option A: Explicit Allowlist (Recommended)","text":"
Grant only the permissions the agent needs. In .claude/settings.local.json:
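The exact entries depend on your project; a hypothetical allowlist for the Go-and-Make toolchain used in this recipe might look like:

```json
{
  "permissions": {
    "allow": [
      "Bash(make:*)",
      "Bash(go:*)",
      "Bash(ctx:*)",
      "Bash(git status)",
      "Bash(git log:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Skill(ctx-*)"
    ]
  }
}
```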
Adjust the Bash patterns for your project's toolchain. The agent can run make, go, git, and ctx commands but cannot run arbitrary shell commands.
This is recommended even in sandboxed environments because it limits blast radius.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-b-skip-all-permission-checks","level":4,"title":"Option B: Skip All Permission Checks","text":"
Claude Code supports a --dangerously-skip-permissions flag that disables all permission prompts:
claude --dangerously-skip-permissions -p \"$(cat .context/loop.md)\"\n
This Flag Means What It Says
With --dangerously-skip-permissions, the agent can execute any shell command, write to any file, and make network requests without confirmation.
Only use this on a sandboxed machine: ideally a virtual machine with no access to host credentials, no SSH keys, and no access to production systems.
If you would not give an untrusted intern sudo on this machine, do not use this flag.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#enforce-isolation-at-the-os-level","level":4,"title":"Enforce Isolation at the OS Level","text":"
The only controls an agent cannot override are the ones enforced by the operating system, the container runtime, or the hypervisor.
Do Not Skip This Section
This is not optional hardening:
An unattended agent with unrestricted OS access is an unattended shell with unrestricted OS access.
The allowlist above is a strong first layer, but do not rely on a single runtime boundary.
For unattended runs, enforce isolation at the infrastructure level:
Layer What to enforce User account Run the agent as a dedicated unprivileged user with no sudo access and no membership in privileged groups (docker, wheel, adm). Filesystem Restrict the project directory via POSIX permissions or ACLs. The agent should have no access to other users' files or system directories. Container Run inside a Docker/Podman sandbox. Mount only the project directory. Drop capabilities (--cap-drop=ALL). Disable network if not needed (--network=none). Never mount the Docker socket and do not run privileged containers. Prefer rootless containers. Virtual machine Prefer a dedicated VM with no shared folders, no host passthrough, and no keys to other machines. Network If the agent does not need the internet, disable outbound access entirely. If it does, restrict to specific domains via firewall rules. Resource limits Apply CPU, memory, and disk limits (cgroups/container limits). A runaway loop should not fill disk or consume all RAM. Self-modification Make instruction files read-only. CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md should not be writable by the agent user. If using project-local hooks, protect those too.
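Several of these layers can be combined in a single sandbox invocation. A hypothetical sketch, where the image name, resource limits, and mount layout are assumptions:

```shell
# Hypothetical sandboxed run combining layers from the table above:
# rootless user mapping, no capabilities, no network, resource limits,
# and only the project directory mounted.
run_sandboxed() {
  docker run --rm \
    --user "$(id -u):$(id -g)" \
    --cap-drop=ALL \
    --network=none \
    --memory=4g \
    --cpus=2 \
    -v "$PWD:/work" \
    -w /work \
    agent-sandbox-image ./loop.sh
}
```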
Use multiple layers together: OS-level isolation (the boundary the agent cannot cross), a permission allowlist (what Claude Code will do within that boundary), and CONSTITUTION.md (a soft nudge for the common case).
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-5-generate-the-loop-script","level":3,"title":"Step 5: Generate the Loop Script","text":"
Use ctx loop to generate a loop.sh tailored to your AI tool:
# Generate for Claude Code with a 10-iteration cap\nctx loop --tool claude --max-iterations 10\n\n# Generate for Aider\nctx loop --tool aider --max-iterations 10\n\n# Custom prompt file and output filename\nctx loop --tool claude --prompt my-prompt.md --output my-loop.sh\n
The generated script reads .context/loop.md, runs the tool, checks for completion signals, and loops until done or the cap is reached.
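That behavior can be approximated with a small function. This is an illustrative sketch, not the actual generated script; the argument handling and the example flags are assumptions:

```shell
# Illustrative sketch of the generated loop, not the real loop.sh.
# $1 = command that runs one agent iteration (prompt passed as last arg)
# $2 = completion signal string, $3 = iteration cap
run_loop() {
  ai_cmd="$1"; completion="$2"; max_iter="$3"
  prompt="$(cat .context/loop.md)"
  i=1
  while [ "$i" -le "$max_iter" ]; do
    echo "=== Iteration $i ==="
    out="$($ai_cmd "$prompt" 2>&1)"   # fresh agent process each iteration
    printf '%s\n' "$out"
    if printf '%s' "$out" | grep -q "$completion"; then
      echo "converged"                # signal found: stop the loop
      return 0
    fi
    i=$((i + 1))
  done
  echo "max iterations reached"
  return 1
}

# e.g.: run_loop "claude -p" SYSTEM_CONVERGED 10
```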
You can also use the /ctx-loop skill from inside Claude Code.
A Shell Loop Is the Best Practice
The shell loop approach spawns a fresh AI process each iteration, so the only state that carries between iterations is what lives in .context/ and git.
Claude Code's built-in /loop runs iterations within the same session, which can allow context window state to leak between iterations. This can be convenient for short runs, but it is less reliable for unattended loops.
See Shell Loop vs Built-in Loop for details.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-6-run-with-watch-mode","level":3,"title":"Step 6: Run with Watch Mode","text":"
Open two terminals. In the first, run the loop. In the second, run ctx watch to process context updates from the AI output.
# Terminal 1: Run the loop\n./loop.sh 2>&1 | tee /tmp/loop.log\n\n# Terminal 2: Watch for context updates\nctx watch --log /tmp/loop.log\n
The watch command parses XML context-update commands from the AI output and applies them:
<context-update type=\"complete\">user registration</context-update>\n<context-update type=\"learning\"\n context=\"Setting up user registration\"\n lesson=\"Email verification needs SMTP configured\"\n application=\"Add SMTP setup to deployment checklist\"\n>SMTP Requirement</context-update>\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-7-completion-signals-end-the-loop","level":3,"title":"Step 7: Completion Signals End the Loop","text":"
The generated script checks for one completion signal per run. By default this is SYSTEM_CONVERGED. You can change it with the --completion flag:
ctx loop --tool claude --completion BOOTSTRAP_COMPLETE --max-iterations 5\n
The following signals are conventions used in .context/loop.md:
Signal Convention How the script handles it SYSTEM_CONVERGED All tasks in TASKS.md are done Detected by default (--completion default value) SYSTEM_BLOCKED Agent cannot proceed Only detected if you set --completion to this BOOTSTRAP_COMPLETE Initial scaffolding done Only detected if you set --completion to this
The script uses grep -q on the agent's output, so any string works as a signal. If you need to detect multiple signals in one run, edit the generated loop.sh to add additional grep checks.
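For instance, a hypothetical multi-signal check, an edit you would make by hand rather than something ctx generates; signal names follow the conventions above:

```shell
# Hypothetical multi-signal check for a hand-edited loop.sh.
# $1 = file containing the agent's output for this iteration
classify_signal() {
  if grep -q 'SYSTEM_CONVERGED' "$1"; then
    echo converged     # all tasks done: stop the loop
  elif grep -q 'SYSTEM_BLOCKED' "$1"; then
    echo blocked       # needs human input: stop and alert
  else
    echo continue      # no signal: run another iteration
  fi
}
```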
When you return in the morning, check the log and the context files:
tail -100 /tmp/loop.log\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-8-use-ctx-implement-for-plan-execution","level":3,"title":"Step 8: Use /ctx-implement for Plan Execution","text":"
Within each iteration, the agent can use /ctx-implement to execute multi-step plans with verification between steps. This is useful for complex tasks that touch multiple files.
The skill breaks a plan into atomic, verifiable steps:
Step 1/6: Create user model .................. OK\nStep 2/6: Add database migration ............. OK\nStep 3/6: Implement registration handler ..... OK\nStep 4/6: Write unit tests ................... OK\nStep 5/6: Run test suite ..................... FAIL\n -> Fixed: missing test dependency\n -> Re-verify ............................... OK\nStep 6/6: Update TASKS.md .................... OK\n
Each step is verified (build, test, syntax check) before moving to the next.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical overnight run:
ctx init\n# Edit TASKS.md and .context/loop.md\n\nctx loop --tool claude --max-iterations 20\n\n./loop.sh 2>&1 | tee /tmp/loop.log &\nctx watch --log /tmp/loop.log\n\n# Next morning:\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#why-autonomous-loops-work-proactive-context-persistence","level":2,"title":"Why Autonomous Loops Work: Proactive Context Persistence","text":"
The autonomous loop pattern works because the agent persists context as part of the job.
Without proactive persistence, the loop degrades into disconnected iterations that repeat work, contradict decisions, and lose track of progress. The agent cannot rely on a human to prompt it. It must treat context updates as part of every task, not as a separate step.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-agent-playbook-self-check-drives-autonomy","level":3,"title":"The Agent Playbook Self-Check Drives Autonomy","text":"
The Agent Playbook defines a self-check prompt the agent applies continuously:
\"If this session ended right now, would the next session know what happened?\"\n
In interactive sessions, a human might remind the agent to save context. When the agent runs unattended, nobody is there to remind it.
The self-check becomes the trigger: after meaningful work, the agent evaluates whether the context files reflect reality and updates them immediately if they do not.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#what-the-agent-does-proactively-between-iterations","level":3,"title":"What the Agent Does Proactively Between Iterations","text":"
At milestones within an iteration, the agent persists without waiting for instructions:
Milestone What the agent does proactively Completes a task Marks it [x] in TASKS.md with #done timestamp Discovers a gotcha Adds it to LEARNINGS.md Makes a design choice Records it in DECISIONS.md with rationale Identifies follow-up work Creates new tasks in TASKS.md with #added timestamp Hits an unexpected failure Documents the root cause before moving on","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#example-what-proactive-persistence-looks-like","level":3,"title":"Example: What Proactive Persistence Looks Like","text":"
Here is what the agent does during a single iteration beyond writing code:
Iteration 4:\n 1. Read TASKS.md -> pick \"Add email verification\" (#priority:high)\n 2. Add #started:2026-01-25-030012 to the task\n 3. Implement the feature (code, tests, docs if needed)\n 4. Tests pass -> mark task [x], add #done:2026-01-25-031544\n 5. Add learning: \"SMTP config must be set before verification handler registers. Order matters in init().\"\n 6. Add decision: \"Use token-based verification links (not codes) because links work better in automated tests.\"\n 7. Create follow-up task: \"Add rate limiting to verification endpoint\" #added:...\n 8. Commit all changes including `.context/`\n 9. No signal emitted -> loop continues to iteration 5\n
Steps 2, 4, 5, 6, and 7 are proactive context persistence:
The agent was not asked to do any of them.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#context-persistence-at-milestones","level":3,"title":"Context Persistence at Milestones","text":"
For long autonomous runs, the agent persists context at natural boundaries, often at phase transitions or after completing a cluster of related tasks. It updates TASKS.md, DECISIONS.md, and LEARNINGS.md as it goes.
If the loop crashes at 4 AM, the context files tell you exactly where to resume. You can also use ctx journal source to review the session transcripts.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-persistence-contract","level":3,"title":"The Persistence Contract","text":"
The autonomous loop has an implicit contract:
Every iteration reads context: TASKS.md, DECISIONS.md, LEARNINGS.md
Every iteration writes context: task updates, new learnings, decisions
Every commit includes .context/ so the next iteration sees changes
Context stays current: if the loop stopped right now, nothing important is lost
Break any part of this contract and the loop degrades.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tips","level":2,"title":"Tips","text":"
Markdown Is Not Enforcement
Your real guardrails are permissions and isolation, not Markdown. CONSTITUTION.md can nudge the agent, but it is probabilistic.
The permission allowlist and OS isolation are deterministic:
For unattended runs, trust the sandbox and the allowlist, not the prose.
Start with a small iteration cap. Use --max-iterations 5 on your first run.
Keep tasks atomic. Each task should be completable in a single iteration.
Check signal discipline. If the loop runs forever, the agent is not emitting SYSTEM_CONVERGED or SYSTEM_BLOCKED. Make the signal requirement explicit in .context/loop.md.
Commit after context updates. Finish code, update .context/, commit including .context/, then signal.
Set up webhook notifications to get notified when the loop completes, hits max iterations, or when hooks fire nudges. The generated loop script includes ctx notify calls automatically.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#next-up","level":2,"title":"Next Up","text":"
When to Use a Team of Agents →: Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: structuring TASKS.md
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/building-skills/","level":1,"title":"Building Project Skills","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-problem","level":2,"title":"The Problem","text":"
You have workflows your agent needs to repeat across sessions: a deploy checklist, a review protocol, a release process. Each time, you re-explain the steps. The agent gets it mostly right but forgets edge cases you corrected last time.
Skills solve this by encoding domain knowledge into a reusable document the agent loads automatically when triggered. A skill is not code - it is a structured prompt that captures what took you sessions to learn.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tldr","level":2,"title":"TL;DR","text":"
/ctx-skill-creator\n
The skill-creator walks you through: identify a repeating workflow, draft a skill, test with realistic prompts, iterate until it triggers correctly and produces good output.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-skill-creator Skill Interactive skill creation and improvement workflow ctx init Command Deploys template skills to .claude/skills/ on first setup","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-1-identify-a-repeating-pattern","level":3,"title":"Step 1: Identify a Repeating Pattern","text":"
Good skill candidates:
Checklists you repeat: deploy steps, release prep, code review
Decisions the agent gets wrong: if you keep correcting the same behavior, encode the correction
Multi-step workflows: anything with a sequence of commands and conditional branches
Domain knowledge: project-specific terminology, architecture constraints, or conventions the agent cannot infer from code alone
Not good candidates: one-off instructions, things the platform already handles (file editing, git operations), or tasks too narrow to reuse.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-2-create-the-skill","level":3,"title":"Step 2: Create the Skill","text":"
Invoke the skill-creator:
You: \"I want a skill for our deploy process\"\n\nAgent: [Asks about the workflow: what steps, what tools,\n what edge cases, what the output should look like]\n
Or capture a workflow you just did:
You: \"Turn what we just did into a skill\"\n\nAgent: [Extracts the steps from conversation history,\n confirms understanding, drafts the skill]\n
The skill-creator produces a SKILL.md file in .claude/skills/your-skill/.
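A hypothetical minimal skeleton of such a file; the skill name and section contents are illustrative:

```markdown
---
name: deploy-process
description: >-
  Use when deploying to staging or production, or when the user
  says 'ship it' or 'run the release checklist'.
---

## When to Use
## When NOT to Use
## Process
## Examples
```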
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-3-test-with-realistic-prompts","level":3,"title":"Step 3: Test with Realistic Prompts","text":"
The skill-creator proposes 2-3 test prompts - the kind of thing a real user would say. It runs each one and shows the result alongside a baseline (same prompt without the skill) so you can compare.
Agent: \"Here are test prompts I'd try:\n 1. 'Deploy to staging'\n 2. 'Ship the hotfix'\n 3. 'Run the release checklist'\n Want to adjust these?\"\n
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-4-iterate-on-the-description","level":3,"title":"Step 4: Iterate on the Description","text":"
The description field in frontmatter determines when a skill triggers. Claude tends to undertrigger - descriptions need to be specific and slightly \"pushy\":
# Weak - too vague, will undertrigger\ndescription: \"Use for deployments\"\n\n# Strong - covers situations and synonyms\ndescription: >-\n Use when deploying to staging or production, running the release\n checklist, or when the user says 'ship it', 'deploy this', or\n 'push to prod'. Also use after merging to main when a deploy\n is expected.\n
The skill-creator helps you tune this iteratively.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-5-deploy-as-template-optional","level":3,"title":"Step 5: Deploy as Template (Optional)","text":"
If the skill should be available to all projects (not just this one), place it in internal/assets/claude/skills/ so ctx init deploys it to new projects automatically.
Most project-specific skills stay in .claude/skills/ and travel with the repo.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#skill-anatomy","level":2,"title":"Skill Anatomy","text":"
my-skill/\n SKILL.md # Required: frontmatter + instructions (<500 lines)\n scripts/ # Optional: deterministic code the skill can execute\n references/ # Optional: detail loaded on demand (not always)\n assets/ # Optional: output templates, not loaded into context\n
Key sections in SKILL.md:
Section Purpose Required? Frontmatter Name, description (trigger) Yes When to Use Positive triggers Yes When NOT to Use Prevents false activations Yes Process Steps and commands Yes Examples Good/bad output pairs Recommended Quality Checklist Verify before reporting completion For complex skills","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tips","level":2,"title":"Tips","text":"
Description is everything. A great skill with a vague description never fires. Spend time on trigger coverage - synonyms, concrete situations, edge cases.
Stay under 500 lines. If your skill is growing past this, move detail into references/ files and point to them from SKILL.md.
Do not duplicate the platform. If the agent already knows how to do something (edit files, run git commands), do not restate it. Tag paragraphs as Expert/Activation/Redundant and delete Redundant ones.
Explain why, not just what. \"Sort by date because users want recent results first\" beats \"ALWAYS sort by date.\" The agent generalizes from reasoning better than from rigid rules.
Test negative triggers. Make sure the skill does not fire on unrelated prompts. A skill that activates too broadly becomes noise.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Split work across multiple agents using git worktrees.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#see-also","level":2,"title":"See Also","text":"
Skills Reference: full listing of all bundled and project-local skills
Guide Your Agent: how commands, skills, and conversational patterns work together
Design Before Coding: the four-skill chain for front-loading design work
Claude Code's .claude/settings.local.json controls what the agent can do without asking. Over time, this file accumulates one-off permissions from individual sessions: Exact commands with hardcoded paths, duplicate entries, and stale skill references.
A noisy \"allowlist\" makes it harder to spot dangerous permissions and increases the surface area for unintended behavior.
Since settings.local.json is .gitignored, it drifts independently of your codebase. There is no PR review, no CI check: just whatever you clicked \"Allow\" on.
This recipe shows what a well-maintained permission file looks like and how to keep it clean.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Populates default ctx permissions /ctx-drift Detects missing or stale permission entries /ctx-sanitize-permissions Audits for dangerous patterns (security-focused)","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#recommended-defaults","level":2,"title":"Recommended Defaults","text":"
After running ctx init, your settings.local.json will have the ctx defaults pre-populated. Here is an opinionated safe starting point for a Go project using ctx:
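A hypothetical sketch of such a starting point; entries follow the patterns discussed below, and the specific list should be adjusted to your toolchain:

```json
{
  "permissions": {
    "allow": [
      "Bash(make:*)",
      "Bash(go:*)",
      "Bash(ctx:*)",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Skill(ctx-*)"
    ],
    "deny": [
      "Bash(sudo:*)",
      "Bash(git push:*)",
      "Bash(rm -rf:*)",
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Read(**/.env*)"
    ]
  }
}
```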
The goal is intentional permissions: Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
Use wildcards for trusted binaries: If you trust the binary (your own project's CLI, make, go), a single wildcard like Bash(ctx:*) beats twenty subcommand entries. It reduces noise and means new subcommands work without re-prompting.
Keep git commands granular: Unlike ctx or make, git has both safe commands (git log, git status) and destructive ones (git reset --hard, git clean -f). Listing safe commands individually prevents accidentally pre-approving dangerous ones.
Pre-approve all ctx- skills: Skills shipped with ctx (Skill(ctx-*)) are safe to pre-approve. They are part of your project and you control their content. This prevents the agent from prompting on every skill invocation.
ctx init automatically populates permissions.deny with rules that block dangerous operations. Deny rules are evaluated before allow rules: A denied pattern is always blocked, even if it also matches an allow entry.

The defaults block:
Pattern Why Bash(sudo:*) Cannot enter password; will hang Bash(git push:*) Must be explicit user action Bash(rm -rf:*) etc. Recursive delete of system/home directories Bash(curl:*) / Bash(wget:*) Arbitrary network requests Bash(chmod 777:*) World-writable permissions Read/Edit(**/.env*) Secrets and credentials Read(**/*.pem, *.key) Private keys
Read/Edit Deny Rules
Read() and Edit() deny rules have known upstream enforcement issues (claude-code#6631, #24846).
They are included as defense-in-depth and intent documentation.
Blocked by default deny rules: no action needed, ctx init handles these:
Pattern Risk Bash(git push:*) Must be explicit user action Bash(sudo:*) Privilege escalation Bash(rm -rf:*) Recursive delete with no confirmation Bash(curl:*) / Bash(wget:*) Arbitrary network requests
Requires manual discipline: Never add these to allow:
Pattern Risk Bash(git reset:*) Can discard uncommitted work Bash(git clean:*) Deletes untracked files Skill(ctx-sanitize-permissions) Edits this file: self-modification vector Skill(release) Runs the release pipeline: high impact","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#hooks-regex-safety-net","level":2,"title":"Hooks: Regex Safety Net","text":"
Deny rules handle prefix-based blocking natively. Hooks complement them by catching patterns that require regex matching: Things deny rules can't express.
The ctx plugin ships these blocking hooks:
Hook What it blocks ctx system block-non-path-ctx Running ctx from wrong path
Project-local hooks (not part of the plugin) catch regex edge cases:
Hook What it blocks block-dangerous-commands.sh Mid-command sudo/git push (after &&), copies to bin dirs, absolute-path ctx
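The matching logic of such a hook might look like the following sketch. Real Claude Code hooks receive JSON on stdin and signal a block through their exit code and output; only the regex idea is shown here, and the patterns are illustrative:

```shell
# Illustrative regex check in the spirit of block-dangerous-commands.sh.
# Deny rules match command prefixes; this catches sudo or git push
# anywhere in a compound command (e.g. after &&).
check_command() {
  if printf '%s' "$1" | grep -Eq '(^|&&|;|\|)[[:space:]]*(sudo|git push)'; then
    echo block
  else
    echo allow
  fi
}
```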
Pre-Approved + Hook-Blocked = Silent Block
If you pre-approve a command that a hook blocks, the user never sees the confirmation dialog. The agent gets a block response and must handle it, which is confusing.
It's better not to pre-approve commands that hooks are designed to intercept.
If manual cleanup is too tedious, use a golden image to automate it:
Snapshot a curated permission set, then restore at session start to automatically drop session-accumulated permissions. See the Permission Snapshots recipe for the full workflow.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#adapting-for-other-languages","level":2,"title":"Adapting for Other Languages","text":"
The recommended defaults above are Go-specific. For other stacks, swap the build/test tooling:
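For example, a Node.js project might swap the Go entries for its own toolchain (illustrative, not exhaustive):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm:*)",
      "Bash(npx:*)",
      "Bash(node:*)",
      "Bash(ctx:*)",
      "Skill(ctx-*)"
    ]
  }
}
```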
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/context-health/","level":1,"title":"Detecting and Fixing Drift","text":"","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-problem","level":2,"title":"The Problem","text":"
ctx files drift: you rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist, TASKS.md is 80 percent completed checkboxes, and CONVENTIONS.md describes patterns you stopped using two months ago.
Stale context is worse than no context:
An AI tool that trusts outdated references will hallucinate confidently.
This recipe shows how to detect drift, fix it, and keep your .context/ directory lean and accurate.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tldr","level":2,"title":"TL;DR","text":"
ctx drift # detect problems\nctx drift --fix # auto-fix the easy ones\nctx sync --dry-run && ctx sync # reconcile after refactors\nctx compact --archive # archive old completed tasks\nctx status # verify\n
Or just ask your agent: \"Is our context clean?\"
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx drift Command Detect stale paths, missing files, violations ctx drift --fix Command Auto-fix simple issues ctx sync Command Reconcile context with codebase structure ctx compact Command Archive completed tasks, clean up empty sections ctx status Command Quick health overview /ctx-drift Skill Structural plus semantic drift detection /ctx-architecture Skill Refresh ARCHITECTURE.md from actual codebase /ctx-status Skill In-session context summary /ctx-prompt-audit Skill Audit prompt quality and token efficiency","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-workflow","level":2,"title":"The Workflow","text":"
The best way to maintain context health is conversational: Ask your agent, guide it, and let it detect problems, explain them, and fix them with your approval. CLI commands exist for CI pipelines, scripting, and fine-grained control.
For day-to-day maintenance, talk to your agent.
Your Questions Reinforce the Pattern
Asking \"is our context clean?\" does two things:
It triggers a drift check right now
It reinforces the habit
This is reinforcement, not enforcement.
Do not wait for the agent to be proactive on its own:
Guide your agent, especially in early sessions.
Over time, you will ask less and the agent will start offering more.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-1-ask-your-agent","level":3,"title":"Step 1: Ask Your Agent","text":"
The simplest way to check context health:
Is our context clean?\nAnything stale?\nHow healthy are our context files?\n
Or invoke the skill directly:
/ctx-drift\n
The agent performs two layers of analysis:
Layer 1, structural checks (via ctx drift): Dead paths, missing files, completed task counts, constitution violations. Fast and programmatic.
Layer 2, semantic analysis (agent-driven): Does CONVENTIONS.md describe patterns the code no longer follows? Does DECISIONS.md contain entries whose rationale no longer applies? Are there learnings about bugs that are now fixed? This is where the agent adds value the CLI cannot: It reads both context files and source code and compares them.
The agent reports both layers together, explains each finding in plain language, and offers to fix what it can.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-2-maintenance-at-session-start","level":3,"title":"Step 2: Maintenance at Session Start","text":"
You do not need to ask explicitly.
Using Claude Code
ctx ships with Claude Code hooks that remind the agent at the right time to take initiative.
They nudge it to check context health at session start, offer to persist learnings before you quit, and flag drift when it matters. The agent stays proactive without you having to prompt it:
Agent: Good morning. I've loaded the context files. A few things\n before we start:\n\n - ARCHITECTURE.md references `pkg/auth/` which is now empty\n - DECISIONS.md hasn't been updated in 40 days\n - There are 18 completed tasks ready for archival\n\n Want me to run a quick maintenance pass, or should we jump\n straight into today's work?\n
☝️ This is what persistent, initiative-driven sessions feel like when context is treated as a system instead of a prompt.
If the agent does not offer this on its own, a gentle nudge is enough:
Anything stale before we start?\nHow's the context looking?\n
This turns maintenance from a scheduled chore into a conversation that happens when it matters.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-3-real-time-detection-during-work","level":3,"title":"Step 3: Real-Time Detection During Work","text":"
Agents can notice drift while working: When a mismatch is directly in the path of their current task. If an agent reads ARCHITECTURE.md to find where to add a handler and internal/handlers/ doesn't exist, it will notice because the stale reference blocks its work:
Agent: ARCHITECTURE.md references `internal/handlers/` but that directory\n doesn't exist. I'll look at the actual source tree to find where\n handlers live now.\n
This happens reliably when the drift intersects the task. What is less reliable is the agent generalizing from one mismatch to "there might be more stale references; let me run drift detection." That leap requires the agent to know /ctx-drift exists and to decide the current task should pause for maintenance.
If you want that behavior, reinforce it:
Good catch. Yes, run /ctx-drift and clean up any other stale references.\n
Over time, agents that have seen this pattern will start offering proactively. But do not expect it from a cold start.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-4-archival-and-cleanup","level":3,"title":"Step 4: Archival and Cleanup","text":"
ctx drift detects when TASKS.md has more than 10 completed items and flags it as a staleness warning. Running ctx drift --fix archives completed tasks automatically.
You can also run /ctx-archive to compact on demand.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#knowledge-health-flow","level":3,"title":"Knowledge Health Flow","text":"
Over time, LEARNINGS.md and DECISIONS.md accumulate entries that overlap or partially repeat each other. The check-persistence hook detects when entry counts exceed a configurable threshold and surfaces a nudge:
\"LEARNINGS.md has 25+ entries. Consider running /ctx-consolidate to merge overlapping items.\"
The consolidation workflow:
Review: /ctx-consolidate groups entries by keyword similarity and presents candidate merges for your approval.
Merge: Approved groups are combined into single entries that preserve the key information from each original.
Archive: Originals move to .context/archive/, not deleted -- the full history is preserved in git and the archive directory.
Verify: Run ctx drift after consolidation to confirm no cross-references were broken by the merge.
This replaces ad-hoc cleanup with a repeatable, nudge-driven cycle: detect accumulation, review candidates, merge with approval, archive originals.
See also: Knowledge Capture for the recording workflow that feeds into this maintenance cycle.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-doctor-the-superset-check","level":2,"title":"ctx doctor: The Superset Check","text":"
ctx doctor combines drift detection with hook auditing, configuration checks, event logging status, and token size reporting in a single command. If you want one command that covers structural health, hooks, and state:
ctx doctor # everything in one pass\nctx doctor --json # machine-readable for scripting\n
Use /ctx-doctor Too
For agent-driven diagnosis that adds semantic analysis on top of the structural checks, use /ctx-doctor.
See the Troubleshooting recipe for the full workflow.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#cli-reference","level":2,"title":"CLI Reference","text":"
The conversational approach above uses these CLI commands under the hood. When you need direct control, call them yourself.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift","level":3,"title":"ctx drift","text":"
Scan context files for structural problems:
ctx drift\n
Sample output:
Drift Report\n============\n\nWarnings (3):\n ARCHITECTURE.md:14 path \"internal/api/router.go\" does not exist\n ARCHITECTURE.md:28 path \"pkg/auth/\" directory is empty\n CONVENTIONS.md:9 path \"internal/handlers/\" not found\n\nViolations (1):\n TASKS.md 31 completed tasks (recommend archival)\n\nStaleness:\n DECISIONS.md last modified 45 days ago\n LEARNINGS.md last modified 32 days ago\n\nExit code: 1 (warnings found)\n
| Level | Meaning | Action |
|---|---|---|
| Warning | Stale path references, missing files | Fix or remove |
| Violation | Constitution rule heuristic failures, heavy clutter | Fix soon |
| Staleness | Files not updated recently | Review content |
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift-fix","level":3,"title":"ctx drift --fix","text":"
Auto-fix mechanical issues:
ctx drift --fix\n
This handles the mechanical fixes: removing dead path references, updating unambiguous renames, and clearing empty sections. Issues requiring judgment are flagged but left for you.
Run ctx drift again afterward to confirm what remains.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-sync","level":3,"title":"ctx sync","text":"
After a refactor, reconcile context with the actual codebase structure:
ctx sync scans for structural changes, compares with ARCHITECTURE.md, checks for new dependencies worth documenting, and identifies context referring to code that no longer exists.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-compact","level":3,"title":"ctx compact","text":"
Consolidate completed tasks and clean up empty sections:
ctx compact # move completed tasks to Completed section,\n # remove empty sections\nctx compact --archive # also archive old tasks to .context/archive/\n
Tasks: moves completed items (with all subtasks done) into the Completed section of TASKS.md
All files: removes empty sections left behind
With --archive: writes tasks older than 7 days to .context/archive/tasks-YYYY-MM-DD.md
Without --archive, nothing is deleted: Tasks are reorganized in place.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-status","level":3,"title":"ctx status","text":"
Quick health overview:
ctx status --verbose\n
Shows file counts, token estimates, modification times, and drift warnings in a single glance.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-prompt-audit","level":3,"title":"/ctx-prompt-audit","text":"
Checks whether your context files are readable, compact, and token-efficient for the model.
/ctx-prompt-audit\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Conversational approach (recommended):
Is our context clean? -> agent runs structural plus semantic checks\nFix what you can -> agent auto-fixes and proposes edits\nArchive the done tasks -> agent runs ctx compact --archive\nHow's token usage? -> agent checks ctx status\n
CLI approach (for CI, scripts, or direct control):
ctx drift # 1. Detect problems\nctx drift --fix # 2. Auto-fix the easy ones\nctx sync --dry-run && ctx sync # 3. Reconcile after refactors\nctx compact --archive # 4. Archive old completed tasks\nctx status # 5. Verify\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tips","level":2,"title":"Tips","text":"
Agents cross-reference context files with source code during normal work. When drift intersects their current task, they will notice: a renamed package, a deleted directory, a path that doesn't resolve. But they rarely generalize from one mismatch to a full audit on their own. Reinforce the pattern: when an agent mentions a stale reference, ask it to run /ctx-drift. Over time, it starts offering.
When an agent says \"this reference looks stale,\" it is usually right.
Semantic drift is more damaging than structural drift: ctx drift catches dead paths. But CONVENTIONS.md describing a pattern your code stopped following three weeks ago is worse. When you ask \"is our context clean?\", the agent can do both checks.
Use ctx status as a quick check: It shows file counts, token estimates, and drift warnings in a single glance. Good for a fast \"is everything ok?\" before diving into work.
Drift detection in CI: add ctx drift --json to your CI pipeline and fail on exit code 3 (violations). This catches constitution-level problems before they reach upstream.
Do not over-compact: Completed tasks have historical value. The --archive flag preserves them in .context/archive/ so you can search past work without cluttering active context.
Sync is cautious by default: Use --dry-run after large refactors, then apply.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#next-up","level":2,"title":"Next Up","text":"
Claude Code Permission Hygiene →: Recommended permission defaults and maintenance workflow for Claude Code.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Tracking Work Across Sessions: task lifecycle and archival
Persisting Decisions, Learnings, and Conventions: keeping knowledge files current
The Complete Session: where maintenance fits in the daily workflow
CLI Reference: full flag documentation for all commands
Context Files: structure and purpose of each .context/ file
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/customizing-hook-messages/","level":1,"title":"Customizing Hook Messages","text":"","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-problem","level":2,"title":"The Problem","text":"
ctx hooks speak ctx's language, not your project's. The QA gate says \"lint the ENTIRE project\" and \"make build,\" but your Python project uses pytest and ruff. The post-commit nudge suggests running lints, but your project uses npm test. You could remove the hook entirely, but then you lose the logic (counting, state tracking, adaptive frequency) just to change the words.
How do you customize what hooks say without removing what they do?
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tldr","level":2,"title":"TL;DR","text":"
ctx system message list # see all hooks and their messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default to .context/ for editing\nctx system message reset qa-reminder gate # revert to embedded default\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#commands-used","level":2,"title":"Commands Used","text":"Tool Type Purpose ctx system message list CLI command Show all hook messages with category and override status ctx system message show CLI command Print the effective message template ctx system message edit CLI command Copy embedded default to .context/ for editing ctx system message reset CLI command Delete user override, revert to default","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#how-it-works","level":2,"title":"How It Works","text":"
Hook messages use a 3-tier fallback:
User override: .context/hooks/messages/{hook}/{variant}.txt
Embedded default: compiled into the ctx binary
Hardcoded fallback: belt-and-suspenders safety net
The hook logic (when to fire, counting, state tracking, cooldowns) is unchanged. Only the content (what text gets emitted) comes from the template. You customize what the hook says without touching how it decides to speak.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#finding-the-original-templates","level":3,"title":"Finding the Original Templates","text":"
The default templates live in the ctx source tree at internal/assets/hooks/messages/. You can also browse that directory on GitHub.
Or use ctx system message show to print any template without digging through source code:
ctx system message show qa-reminder gate # QA gate instructions\nctx system message show check-persistence nudge # persistence nudge\nctx system message show post-commit nudge # post-commit reminder\n
The show output includes the template source and available variables -- everything you need to write a replacement.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#template-variables","level":3,"title":"Template Variables","text":"
Some messages use Go text/template variables for dynamic content:
No context files updated in {{.PromptsSinceNudge}}+ prompts.\nHave you discovered learnings, made decisions,\nestablished conventions, or completed tasks\nworth persisting?\n
The show and edit commands list available variables for each message. When writing a replacement, keep the same {{.VariableName}} placeholders to preserve dynamic content. Referencing a variable the hook does not define renders <no value>: no error, but the output may look odd.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#intentional-silence","level":3,"title":"Intentional Silence","text":"
An empty template file (0 bytes or whitespace-only) means \"don't emit a message\". The hook still runs its logic but produces no output. This lets you silence specific messages without removing the hook from hooks.json.
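Creating the silence is a one-liner. A sketch using the check-ceremonies hook's "both" variant as an example, following the override path layout above:

```shell
# Silence one specific message by creating an empty (0-byte) override file.
# The hook keeps running its logic; it just emits nothing.
mkdir -p .context/hooks/messages/check-ceremonies
: > .context/hooks/messages/check-ceremonies/both.txt
```

Deleting the file later (or running the reset command) restores the embedded default.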
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-python-project-qa-gate","level":2,"title":"Example: Python Project QA Gate","text":"
The default QA gate says \"lint the ENTIRE project\" and references make lint. For a Python project, you want pytest and ruff:
# See the current default\nctx system message show qa-reminder gate\n\n# Copy it to .context/ for editing\nctx system message edit qa-reminder gate\n\n# Edit the override\n
Replace the content in .context/hooks/messages/qa-reminder/gate.txt:
HARD GATE! DO NOT COMMIT without completing ALL of these steps first:\n(1) Run the full test suite: pytest -x\n(2) Run the linter: ruff check .\n(3) Verify a clean working tree\nRun tests and linter BEFORE every git commit, no exceptions.\n
The hook still fires on every Edit call. The logic is identical. Only the instructions changed.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-silencing-ceremony-nudges","level":2,"title":"Example: Silencing Ceremony Nudges","text":"
The ceremony check nudges you to use /ctx-remember and /ctx-wrap-up. If your team has a different workflow and finds these noisy:
ctx system message edit check-ceremonies both\nctx system message edit check-ceremonies remember\nctx system message edit check-ceremonies wrapup\n
Then empty each copied override file (0 bytes or whitespace-only). The hooks still track ceremony usage internally, but they no longer emit any visible output.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-javascript-project-post-commit","level":2,"title":"Example: JavaScript Project Post-Commit","text":"
The default post-commit nudge mentions generic \"lints and tests.\" For a JavaScript project:
ctx system message edit post-commit nudge\n
Replace with:
Commit succeeded. 1. Offer context capture to the user: Decision (design\nchoice?), Learning (gotcha?), or Neither. 2. Ask the user: \"Want me to\nrun npm test and eslint before you push?\" Do NOT push. The user pushes\nmanually.\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-two-categories","level":2,"title":"The Two Categories","text":"
Not all messages are equal. The list command shows each message's category:
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#customizable-17-messages","level":3,"title":"Customizable (17 messages)","text":"
Messages that are opinions: project-specific wording that benefits from customization. These are the primary targets for override.
Templates that reference undefined variables render <no value>: no error, graceful degradation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tips","level":2,"title":"Tips","text":"
Override files are version-controlled: they live in .context/ alongside your other context files. Team members get the same customized messages.
Start with show: always check the current default before editing. The embedded template is the baseline your override replaces.
Use reset to undo: if a customization causes confusion, reset reverts to the embedded default instantly.
Empty file = silence: you don't need to delete the hook. An empty override file silences the message while preserving the hook's logic.
JSON output for scripting: ctx system message list --json returns structured data for automation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#see-also","level":2,"title":"See Also","text":"
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: verifying hooks are running and auditing their output
Understanding how packages relate to each other is the first step in onboarding, refactoring, and architecture review. ctx dep generates dependency graphs from source code so you can see the structure at a glance instead of tracing imports by hand.
# Auto-detect ecosystem and output Mermaid (default)\nctx dep\n\n# Table format for a quick terminal overview\nctx dep --format table\n\n# JSON for programmatic consumption\nctx dep --format json\n
By default, only internal (first-party) dependencies are shown. Add --external to include third-party packages:
ctx dep --external\nctx dep --external --format table\n
This is useful when auditing transitive dependencies or checking which packages pull in heavy external libraries.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#when-to-use-it","level":2,"title":"When to Use It","text":"
Onboarding. Generate a Mermaid graph and drop it into the project wiki. New contributors see the architecture before reading code.
Refactoring. Before moving packages, check what depends on them. Combine with ctx drift to find stale references after the move.
Architecture review. Table format gives a quick overview; Mermaid format goes into design docs and PRs.
Pre-commit. Run in CI to detect unexpected new dependencies between packages.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#combining-with-other-commands","level":2,"title":"Combining with Other Commands","text":"","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#refactoring-with-ctx-drift","level":3,"title":"Refactoring with ctx drift","text":"
# See the dependency structure before refactoring\nctx dep --format table\n\n# After moving packages, check for broken references\nctx drift\n
Use the structured output as input for context files or architecture documentation:
# Generate a dependency snapshot for the context directory\nctx dep --format json > .context/deps.json\n\n# Or pipe into other tools\nctx dep --format mermaid >> docs/architecture.md\n
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#monorepos-and-multi-ecosystem-projects","level":2,"title":"Monorepos and Multi-Ecosystem Projects","text":"
In a monorepo with multiple ecosystems, ctx dep picks the first manifest it finds (Go beats Node.js beats Python beats Rust). Use --type to target a specific ecosystem:
# In a repo with both go.mod and package.json\nctx dep --type node\nctx dep --type go\n
For separate subdirectories, run from each root:
cd services/api && ctx dep --format table\ncd frontend && ctx dep --type node --format mermaid\n
Start with table format. It is the fastest way to get a mental model of the dependency structure. Switch to Mermaid when you need a visual for documentation or a PR.
Pipe JSON to jq. Filter for specific packages, count edges, or extract subgraphs programmatically.
Skip --external unless you need it. Internal-only graphs are cleaner and load faster. Add external deps when you are specifically auditing third-party usage.
Force --type in CI. Auto-detection is convenient locally, but explicit types prevent surprises when the repo structure changes.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/design-before-coding/","level":1,"title":"Design Before Coding","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-problem","level":2,"title":"The Problem","text":"
You start coding a feature. Halfway through, you realize the approach doesn't handle a key edge case. You refactor. Then you discover the CLI interface doesn't fit the existing patterns. More refactoring.
The design work happened during implementation, mixed in with debugging and trial-and-error. The result works, but the spec was never written down, the trade-offs were never recorded, and the next session has no idea why things are shaped this way.
How do you front-load design so the implementation is straightforward?
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tldr","level":2,"title":"TL;DR","text":"
/ctx-brainstorm # explore the design space\n/ctx-spec # write the spec document\n/ctx-add-task # break it into tasks\n/ctx-implement # execute step-by-step\n
Four skills, used in sequence. Each produces an artifact that feeds the next.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-brainstorm Skill Structured design dialogue: explore approaches /ctx-spec Skill Scaffold and fill out a spec from the template /ctx-add-task Skill Add implementation tasks to TASKS.md /ctx-implement Skill Execute a plan step-by-step with verification /ctx-add-decision Skill Record design choices made during brainstorming","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-1-brainstorm-the-design","level":3,"title":"Step 1: Brainstorm the Design","text":"
Start with /ctx-brainstorm when you have a rough idea but haven't committed to an approach. The skill follows a gated process:
Understand context: what exists today, what's the gap
Clarify the idea: one question at a time, not a wall of questions
Lock understanding: confirm before exploring solutions
Explore 2-3 approaches: with trade-offs for each
Stress-test: poke holes in the chosen approach
Present the design: detailed enough to implement from
You: \"I want to add RSS feed generation to the blog.\"\n\nAgent: \"Before we design, let me understand: is this for the journal\n site, the docs site, or both?\"\n\nYou: \"Journal site only.\"\n\nAgent: [asks about update frequency, content scope, existing infra...]\n [presents 3 approaches with trade-offs]\n\nYou: \"Option 2 - generate at build time.\"\n\nAgent: \"Let me stress-test that. What happens when...\"\n [probes edge cases]\n \"Design looks solid. Want me to record the approach as a\n decision and write a spec?\"\n
The brainstorm produces a validated design in conversation. It doesn't produce a file - that's what the spec is for.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-2-write-the-spec","level":3,"title":"Step 2: Write the Spec","text":"
Once the design is clear, use /ctx-spec to produce the written artifact. The skill reads the project's spec template (specs/tpl/spec-template.md) and walks through each section:
You: /ctx-spec (rss feed generation)\n\nAgent: \"Creating specs/rss-feed.md from template.\"\n\n \"Problem: What user-visible problem does this solve? Why now?\"\n\nYou: \"Users can't subscribe to blog updates. They have to check\n the site manually.\"\n\nAgent: [writes the Problem section, moves to Approach]\n\n \"Approach: High-level, how does this work?\"\n\nYou: \"Generate feed.xml during ctx journal site. Atom format.\"\n\nAgent: [writes Approach, moves to Happy Path, then Edge Cases...]\n
The skill spends extra time on Edge Cases and Error Handling: These are where specs earn their value. The skill pushes for at least three edge cases and won't accept \"none\" without a challenge.
Sections that don't apply can be skipped. The result is a complete spec at specs/{feature-name}.md.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-3-break-into-tasks","level":3,"title":"Step 3: Break Into Tasks","text":"
After the spec is written, the skill offers to create tasks:
Agent: \"Spec written to specs/rss-feed.md. Want me to break this\n into tasks in TASKS.md?\"\n\nYou: \"Yes.\"\n\nAgent: [creates a Phase section referencing the spec, adds tasks]\n \"Added Phase 0.8: RSS/Atom Feed Generation with 4 tasks:\n - [ ] Implement feed.xml generation in ctx journal site\n - [ ] Add --feed flag and .ctxrc feed config\n - [ ] Write tests for feed generation\n - [ ] Add recipe and update docs\"\n
Each task is independently completable and references the spec via Spec: specs/rss-feed.md in the Phase header.
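As a sketch (the exact heading and checkbox style is whatever your TASKS.md already uses), the resulting Phase section might look like:

```markdown
## Phase 0.8: RSS/Atom Feed Generation
Spec: specs/rss-feed.md

- [ ] Implement feed.xml generation in ctx journal site
- [ ] Add --feed flag and .ctxrc feed config
- [ ] Write tests for feed generation
- [ ] Add recipe and update docs
```

The Spec: line in the Phase header is what lets a later session trace each task back to its design document.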
You can also invoke /ctx-add-task directly at any point to add individual tasks.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-4-implement-step-by-step","level":3,"title":"Step 4: Implement Step-by-Step","text":"
Pick up tasks with /ctx-implement. The skill loads the spec, breaks work into atomic steps, and checkpoints after every 3-5 steps with a build and test verification:
You: /ctx-implement (specs/rss-feed.md)\n\nAgent: \"Loaded spec. Breaking into steps:\n 1. Add FeedEntry type to journal package\n 2. Implement feed.xml template\n 3. Wire into site generation pipeline\n [builds and tests after step 3]\n 4. Add --feed flag to ctx journal site\n 5. Add .ctxrc feed configuration\n [builds and tests after step 5]\n ...\"\n
If a build or test fails, the agent stops, diagnoses, and fixes before continuing.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#when-to-skip-steps","level":2,"title":"When to Skip Steps","text":"
Not every feature needs all four steps. Use your judgment:
| Situation | Start at |
|---|---|
| Vague idea, multiple valid approaches | Step 1: Brainstorm |
| Clear approach, need to document it | Step 2: Spec |
| Spec already exists, need to plan work | Step 3: Tasks |
| Tasks exist, ready to code | Step 4: Implement |
A brainstorm without a spec is fine for small decisions. A spec without a brainstorm is fine when the design is obvious. The full chain is for features complex enough to warrant front-loaded design.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need skill names. Natural language works:
| You say | What happens |
|---|---|
| "Let's think through this feature" | /ctx-brainstorm |
| "Spec this out" | /ctx-spec |
| "Write a design doc for..." | /ctx-spec |
| "Break this into tasks" | /ctx-add-task |
| "Implement the spec" | /ctx-implement |
| "Let's design before we build" | Starts at brainstorm |
Brainstorm first when uncertain. If you can articulate the approach in two sentences, skip to spec. If you can't, brainstorm.
Specs prevent scope creep. The Non-Goals section is as important as the approach. Writing down what you won't do keeps implementation focused.
Edge cases are the point. A spec that only describes the happy path isn't a spec - it's a wish. The /ctx-spec skill pushes for at least 3 edge cases because that's where designs break.
Record decisions during brainstorming. When you choose between approaches, the agent offers to persist the trade-off via /ctx-add-decision. Accept - future sessions need to know why, not just what.
Specs are living documents. Update them when implementation reveals new constraints. A spec that diverges from reality is worse than no spec.
The spec template is customizable. Edit specs/tpl/spec-template.md to match your project's needs. The /ctx-spec skill reads whatever template it finds there.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-spec: spec scaffolding from template
Skills Reference: /ctx-implement: step-by-step execution with verification
Tracking Work Across Sessions: task lifecycle and archival
Importing Claude Code Plans: turning ephemeral plans into permanent specs
Persisting Decisions, Learnings, and Conventions: capturing design trade-offs
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/external-context/","level":1,"title":"Keeping Context in a Separate Repo","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-problem","level":2,"title":"The Problem","text":"
ctx files contain project-specific decisions, learnings, conventions, and tasks. By default, they live in .context/ inside the project tree, and that works well when the context can be public.
But sometimes you need the context outside the project:
Open-source projects with private context: Your architectural notes, internal task lists, and scratchpad entries shouldn't ship with the public repo.
Compliance or IP concerns: Context files reference sensitive design rationale that belongs in a separate access-controlled repository.
Personal preference: You want a single context repo that covers multiple projects, or you just prefer keeping notes separate from code.
ctx supports this through three configuration methods. This recipe shows how to set them up and how to tell your AI assistant where to find the context.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tldr","level":2,"title":"TL;DR","text":"
Persist context_dir and allow_outside_cwd in .ctxrc, and all ctx commands use the external directory automatically.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context directory --context-dir Global flag Point ctx at a non-default directory --allow-outside-cwd Global flag Permit context outside the project root .ctxrc Config file Persist the context directory setting CTX_DIR Env variable Override context directory per-session /ctx-status Skill Verify context is loading correctly","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-1-create-the-private-context-repo","level":3,"title":"Step 1: Create the Private Context Repo","text":"
Create a separate repository for your context files. This can live anywhere: a private GitHub repo, a shared drive, a sibling directory:
# Create the context repo\nmkdir ~/repos/myproject-context\ncd ~/repos/myproject-context\ngit init\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-2-initialize-ctx-pointing-at-it","level":3,"title":"Step 2: Initialize ctx Pointing at It","text":"
From your project root, initialize ctx with --context-dir pointing to the external location. Because the directory is outside your project tree, you also need --allow-outside-cwd:
cd ~/repos/myproject\nctx --context-dir ~/repos/myproject-context \\\n --allow-outside-cwd \\\n init\n
This creates the full .context/-style file set inside ~/repos/myproject-context/ instead of ~/repos/myproject/.context/.
Boundary Validation
By default, ctx validates that the .context directory is within the current working directory.
If your external directory is truly outside the project root:
Either every ctx command needs --allow-outside-cwd,
or you can persist the setting in .ctxrc (next step).
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-3-make-it-stick","level":3,"title":"Step 3: Make It Stick","text":"
Typing --context-dir and --allow-outside-cwd on every command is tedious. Pick one of these methods to make the configuration permanent.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-a-ctxrc-recommended","level":4,"title":"Option A: .ctxrc (Recommended)","text":"
Create a .ctxrc file in your project root:
# .ctxrc: committed to the project repo\ncontext_dir: ~/repos/myproject-context\nallow_outside_cwd: true\n
ctx reads .ctxrc automatically. Every command now uses the external directory without extra flags:
ctx status # reads from ~/repos/myproject-context\nctx add learning \"Redis MULTI doesn't roll back on error\"\n
Commit .ctxrc
.ctxrc belongs in the project repo. It contains no secrets: It's just a path and a boundary override.
.ctxrc lets teammates share the same configuration.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-b-ctx_dir-environment-variable","level":4,"title":"Option B: CTX_DIR Environment Variable","text":"
Good for CI pipelines, temporary overrides, or when you don't want to commit a .ctxrc:
# In your shell profile (~/.bashrc, ~/.zshrc)\nexport CTX_DIR=~/repos/myproject-context\n
Or for a single session:
CTX_DIR=~/repos/myproject-context ctx status\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-c-shell-alias","level":4,"title":"Option C: Shell Alias","text":"
If you prefer a shell alias over .ctxrc:
# ~/.bashrc or ~/.zshrc\nalias ctx='ctx --context-dir ~/repos/myproject-context --allow-outside-cwd'\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#priority-order","level":4,"title":"Priority Order","text":"
When multiple methods are set, ctx resolves the context directory in this order (highest priority first):
--context-dir flag
CTX_DIR environment variable
context_dir in .ctxrc
Default: .context/
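The resolution order can be sketched as a small shell function. This is a simplified illustration, not the actual ctx implementation; in particular, the one-line sed stands in for real .ctxrc parsing:

```shell
# Simplified sketch of how ctx resolves the context directory.
# Highest priority first: flag, then CTX_DIR, then .ctxrc, then default.
resolve_context_dir() {
  flag_dir="$1"   # value of --context-dir, if the flag was given
  if [ -n "$flag_dir" ]; then
    echo "$flag_dir"
  elif [ -n "${CTX_DIR:-}" ]; then
    echo "$CTX_DIR"
  elif [ -f .ctxrc ]; then
    # naive parse: read the context_dir key, fall back to the default
    rc_dir=$(sed -n 's/^context_dir:[[:space:]]*//p' .ctxrc)
    echo "${rc_dir:-.context}"
  else
    echo ".context"
  fi
}
```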
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-4-agent-auto-discovery-via-bootstrap","level":3,"title":"Step 4: Agent Auto-Discovery via Bootstrap","text":"
When context lives outside the project tree, your AI assistant needs to know where to find it. The ctx system bootstrap command resolves the configured context directory and communicates it to the agent automatically:
$ ctx system bootstrap\nctx bootstrap\n=============\n\ncontext_dir: /home/user/repos/myproject-context\n\nFiles:\n CONSTITUTION.md, TASKS.md, DECISIONS.md, ...\n
The CLAUDE.md template generated by ctx init already instructs the agent to run ctx system bootstrap at session start. Because .ctxrc is in the project root, the bootstrap call resolves the external path automatically, and your agent inherits it.
Here is the relevant section from CLAUDE.md for reference:
<!-- CLAUDE.md -->\n1. **Run `ctx system bootstrap`**: CRITICAL, not optional.\n This tells you where the context directory is. If it fails or returns\n no context_dir, STOP and warn the user.\n
Moreover, every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: /home/user/repos/myproject-context footer, so the agent remains anchored to the correct directory even in long sessions.
If you use CTX_DIR instead of .ctxrc, export it in your shell profile so the hook process inherits it:
export CTX_DIR=~/repos/myproject-context\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-5-share-with-teammates","level":3,"title":"Step 5: Share with Teammates","text":"
Teammates clone both repos and set up .ctxrc:
# Clone the project\ngit clone git@github.com:org/myproject.git\ncd myproject\n\n# Clone the private context repo\ngit clone git@github.com:org/myproject-context.git ~/repos/myproject-context\n
If .ctxrc is already committed to the project, they're done: ctx commands will find the external context automatically.
If teammates use different paths, each developer sets their own CTX_DIR:
export CTX_DIR=~/my-own-path/myproject-context\n
For encryption key distribution across the team, see the Syncing Scratchpad Notes recipe.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-6-day-to-day-sync","level":3,"title":"Step 6: Day-to-Day Sync","text":"
The external context repo has its own git history. Treat it like any other repo: Commit and push after sessions:
cd ~/repos/myproject-context\n\n# After a session\ngit add -A\ngit commit -m \"Session: refactored auth module, added rate-limit learning\"\ngit push\n
Your AI assistant can do this too. When ending a session:
You: \"Save what we learned and push the context repo.\"\n\nAgent: [runs ctx add learning, then commits and pushes the context repo]\n
You can also set up a post-session habit: project code gets committed to the project repo, context gets committed to the context repo.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the flags; simply ask your assistant:
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#set-up-your-system-using-natural-language","level":3,"title":"Set Up Your System Using Natural Language","text":"
You: \"Set up ctx to use ~/repos/myproject-context as the context directory.\"\n\nAgent: \"I'll create a .ctxrc in the project root pointing to that path.\n I'll also update CLAUDE.md so future sessions know where to find\n context. Want me to initialize the context files there too?\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#configure-separate-repo-for-context-folder-using-natural-language","level":3,"title":"Configure Separate Repo for .context Folder Using Natural Language","text":"
You: \"My context is in a separate repo. Can you load it?\"\n\nAgent: [reads .ctxrc, finds the path, loads context from the external dir]\n \"Loaded. You have 3 pending tasks, last session was about the auth\n refactor.\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tips","level":2,"title":"Tips","text":"
Start simple. If you don't need external context yet, don't set it up. The default .context/ in-tree is the easiest path. Move to an external repo when you have a concrete reason.
One context repo per project. Sharing a single context directory across multiple projects creates confusion. Keep the mapping 1:1.
Use .ctxrc over env vars when the path is stable. It's committed, documented, and works for the whole team without per-developer shell setup.
Don't forget the boundary flag. The most common error is Error: context directory is outside the project root. Set allow_outside_cwd: true in .ctxrc or pass --allow-outside-cwd.
Commit both repos at session boundaries. Context without code history (or code without context history) loses half the value.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#next-up","level":2,"title":"Next Up","text":"
The Complete Session →: Walk through a full ctx session from start to finish.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#see-also","level":2,"title":"See Also","text":"
Setting Up ctx Across AI Tools: initial setup recipe
Syncing Scratchpad Notes Across Machines: distribute encryption keys when context is shared
CLI Reference: all global flags including --context-dir and --allow-outside-cwd
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/guide-your-agent/","level":1,"title":"Guide Your Agent","text":"
Commands vs. Skills
Commands (ctx status, ctx add task) run in your terminal.
Skills (/ctx-reflect, /ctx-next) run inside your AI coding assistant.
Recipes combine both.
Think of commands as structure and skills as behavior.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#proactive-behavior","level":2,"title":"Proactive Behavior","text":"
These recipes show explicit commands and skills, but agents trained on the ctx playbook are proactive: They offer to save learnings after debugging, record decisions after trade-offs, create follow-up tasks after completing work, and suggest what to work on next.
Your questions train the agent. Asking \"what have we learned?\" or \"is our context clean?\" does two things:
It triggers the workflow right now,
and it reinforces the pattern.
The more you guide, the more the agent internalizes the behavior and begins offering it on its own.
Each recipe includes a Conversational Approach section showing these natural-language patterns.
Tip
Don't wait passively for proactive behavior: especially in early sessions.
Ask, guide, reinforce. Over time, you ask less and the agent offers more.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#next-up","level":2,"title":"Next Up","text":"
Setup Across AI Tools →: Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle from start to finish
Prompting Guide: general tips for working effectively with AI coding assistants
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/hook-output-patterns/","level":1,"title":"Hook Output Patterns","text":"","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-problem","level":2,"title":"The Problem","text":"
Claude Code hooks can output text, JSON, or nothing at all. But the format of that output determines who sees it and who acts on it.
Choose the wrong pattern, and your carefully crafted warning gets silently absorbed by the agent, or your agent-directed nudge gets dumped on the user as noise.
This recipe catalogs the known hook output patterns and explains when to use each one.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#tldr","level":2,"title":"TL;DR","text":"
Eight patterns from full control to full invisibility:
hard gate (exit 2),
VERBATIM relay (agent MUST show),
agent directive (context injection),
and silent side-effect (background work).
Most hooks belong in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-spectrum","level":2,"title":"The Spectrum","text":"
These patterns form a spectrum based on who decides what the user sees:
The spectrum runs from full hook control (hard gate) to full invisibility (silent side effect).
Most hooks belong somewhere in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-1-hard-gate","level":2,"title":"Pattern 1: Hard Gate","text":"
Block the tool call entirely. The agent cannot proceed: it must find another approach or tell the user.
echo '{\"decision\": \"block\", \"reason\": \"Use ctx from PATH, not ./ctx\"}'\n
When to use: Enforcing invariants that must never be violated: Constitution rules, security boundaries, destructive command prevention.
Hook type: PreToolUse only (Claude Code first-class mechanism).
Examples in ctx:
ctx system block-non-path-ctx: Enforces the PATH invocation rule
block-git-push.sh: Requires explicit user approval for pushes (project-local)
block-dangerous-commands.sh: Prevents sudo, copies to ~/.local/bin (project-local)
Trade-off: The agent gets a block response with a reason. Good reasons help the agent recover (\"use X instead\"); bad reasons leave it stuck.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-2-verbatim-relay","level":2,"title":"Pattern 2: VERBATIM Relay","text":"
Force the agent to show this to the user as-is. The explicit instruction overcomes the agent's tendency to silently absorb context.
echo \"IMPORTANT: Relay this warning to the user VERBATIM before answering their question.\"\necho \"\"\necho \"┌─ Journal Reminder ─────────────────────────────\"\necho \"│ You have 12 sessions not yet exported.\"\necho \"└────────────────────────────────────────────────\"\n
When to use: Actionable reminders the user needs to see regardless of what they asked: Stale backups, unimported sessions, resource warnings.
Hook type: UserPromptSubmit (runs before the agent sees the prompt).
Examples in ctx:
ctx system check-journal: Unexported sessions and unenriched entries
ctx system check-context-size: Context capacity warning
ctx system check-resources: Resource pressure (memory, swap, disk, load): DANGER only
ctx system check-freshness: Technology constant staleness warning
check-backup-age.sh: Stale backup warning (project-local)
Trade-off: Noisy if overused. Every VERBATIM relay adds a preamble before the agent's actual answer. Throttle with once-per-day markers or adaptive frequency.
Key detail: The phrase IMPORTANT: Relay this ... VERBATIM is what makes this work. Without it, agents tend to process the information internally and never surface it. The explicit instruction is the pattern: the box-drawing is just fancy formatting.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-3-agent-directive","level":2,"title":"Pattern 3: Agent Directive","text":"
Tell the agent to do something, not the user. The agent decides whether and how to involve the user.
echo \"┌─ Persistence Checkpoint (prompt #25) ───────────\"\necho \"│ No context files updated in 15+ prompts.\"\necho \"│ Have you discovered learnings, decisions,\"\necho \"│ or completed tasks worth persisting?\"\necho \"└──────────────────────────────────────────────────\"\n
When to use: Behavioral nudges. The hook detects a condition and asks the agent to consider an action. The user may never need to know.
Hook type: UserPromptSubmit.
Examples in ctx:
ctx system check-persistence: Nudges the agent to persist context
Trade-off: No guarantee the agent acts. The nudge is one signal among many in the context window. Strong phrasing helps (\"Have you...?\" is better than \"Consider...\"), but ultimately the agent decides.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-4-silent-context-injection","level":2,"title":"Pattern 4: Silent Context Injection","text":"
Load context with no visible output. The agent gets enriched without either party noticing.
ctx agent --budget 4000 >/dev/null || true\n
When to use: Background context loading that should be invisible. The agent benefits from the information, but neither the agent nor the user needs to know it happened.
Hook type: PreToolUse with .* matcher (runs on every tool call).
Examples in ctx:
The ctx agent PreToolUse hook: Injects project context silently
Trade-off: Adds latency to every tool call. Keep the injected content small and fast to generate.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-5-silent-side-effect","level":2,"title":"Pattern 5: Silent Side-Effect","text":"
Do work, produce no output: Housekeeping that needs no acknowledgment.
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
When to use: Cleanup, log rotation, temp file management. Anything where the action is the point and nobody needs to know it happened.
Hook type: Any hook where output is irrelevant.
Examples in ctx:
Log rotation, marker file cleanup, state directory maintenance
Trade-off: None, if the action is truly invisible. If it can fail in a way that matters, consider logging.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-6-conditional-relay","level":3,"title":"Pattern 6: Conditional Relay","text":"
Tell the agent to relay only if a condition holds in context.
echo \"If the user's question involves modifying .context/ files,\"\necho \"relay this warning VERBATIM:\"\necho \"\"\necho \"┌─ Context Integrity ─────────────────────────────\"\necho \"│ CONSTITUTION.md has not been verified in 7 days.\"\necho \"└────────────────────────────────────────────────\"\necho \"\"\necho \"Otherwise, proceed normally.\"\n
When to use: Warnings that only matter in certain contexts. Avoids noise when the user is doing unrelated work.
Trade-off: Depends on the agent's judgment about when the condition holds. More fragile than VERBATIM relay, but less noisy.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-7-suggested-action","level":3,"title":"Pattern 7: Suggested Action","text":"
Give the agent a specific command to propose to the user.
echo \"┌─ Stale Dependencies ──────────────────────────\"\necho \"│ go.sum is 30+ days newer than go.mod.\"\necho \"│ Suggested: run \\`go mod tidy\\`\"\necho \"│ Ask the user before proceeding.\"\necho \"└───────────────────────────────────────────────\"\n
When to use: The hook detects a fixable condition and knows the fix. Goes beyond a nudge: Gives the agent a concrete next step. The agent still asks for permission but knows exactly what to propose.
Trade-off: The suggestion might be wrong or outdated. The \"ask the user before proceeding\" part is critical.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-8-escalating-severity","level":3,"title":"Pattern 8: Escalating Severity","text":"
Different urgency tiers with different relay expectations.
# INFO: agent processes silently, mentions if relevant\necho \"INFO: Last test run was 3 days ago.\"\n\n# WARN: agent should mention to user at next natural pause\necho \"WARN: 12 uncommitted changes across 3 branches.\"\n\n# CRITICAL: agent must relay immediately, before any other work\necho \"CRITICAL: Relay VERBATIM before answering. Disk usage at 95%.\"\n
When to use: When you have multiple hooks producing output and need to avoid overwhelming the user. INFO gets absorbed, WARN gets mentioned, CRITICAL interrupts.
Examples in ctx:
ctx system check-resources: Uses two tiers (WARNING/DANGER) internally but only fires the VERBATIM relay at DANGER level: WARNING is silent. See ctx system for the user-facing command that shows both tiers.
Trade-off: Requires agent training or convention to recognize the tiers. Without a shared protocol, the prefixes are just text.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#choosing-a-pattern","level":2,"title":"Choosing a Pattern","text":"
Is the agent about to do something forbidden?\n └─ Yes → Hard gate\n\nDoes the user need to see this regardless of what they asked?\n └─ Yes → VERBATIM relay\n └─ Sometimes → Conditional relay\n\nShould the agent consider an action?\n └─ Yes, with a specific fix → Suggested action\n └─ Yes, open-ended → Agent directive\n\nIs this background context the agent should have?\n └─ Yes → Silent injection\n\nIs this housekeeping?\n └─ Yes → Silent side-effect\n
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#design-tips","level":2,"title":"Design Tips","text":"
Throttle aggressively: VERBATIM relays that fire every prompt will be ignored or resented. Use once-per-day markers (touch $REMINDED), adaptive frequency (every Nth prompt), or staleness checks (only fire if condition persists).
Include actionable commands: \"You have 12 unimported sessions\" is less useful than \"You have 12 unimported sessions. Run: ctx journal import --all.\" Give the user (or agent) the exact next step.
Use box-drawing for visual structure: The ┌─ ─┐ │ └─ ─┘ pattern makes hook output visually distinct from agent prose. It also signals \"this is machine-generated, not agent opinion.\"
Test the silence path: Most hook runs should produce no output (the condition isn't met). Make sure the common case is fast and silent.
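The throttling tip above can be sketched as a once-per-day marker in shell (hypothetical marker path; ctx's real hooks implement this in Go):

```shell
# Fire a reminder at most once per day, using a dated marker file.
fire_once_per_day() {
  marker="${TMPDIR:-/tmp}/ctx-demo-reminder-$(date +%Y-%m-%d)"
  if [ -e "$marker" ]; then
    return 0               # already reminded today: stay silent
  fi
  touch "$marker"          # record that today's reminder fired
  echo "reminder fired"
}
```

The first call each day prints the reminder; every later call that day exits silently, which keeps the common case fast and quiet.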
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Lessons from 19 days of hook debugging in ctx. Every one of these was encountered, debugged, and fixed in production.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#silent-misfire-wrong-key-name","level":3,"title":"Silent Misfire: Wrong Key Name","text":"
{ \"PreToolUseHooks\": [ ... ] }\n
The key is PreToolUse, not PreToolUseHooks. Claude Code validates silently: A misspelled key means the hook is ignored with no error. Always test with a debug echo first to confirm the hook fires before adding real logic.
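For reference, a shape that does fire (this reflects the Claude Code settings schema as currently documented; treat the debug command itself as a placeholder):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "echo 'hook fired' >> /tmp/hook-debug.log" }
        ]
      }
    ]
  }
}
```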
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#json-escaping-breaks-shell-commands","level":3,"title":"JSON Escaping Breaks Shell Commands","text":"
Go's json.Marshal escapes >, <, and & as Unicode sequences (\\u003e) by default. This breaks shell commands in generated config:
\"command\": \"ctx agent 2\\u003e/dev/null\"\n
Fix: use json.Encoder with SetEscapeHTML(false) when generating hook configuration.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#stdin-not-environment-variables","level":3,"title":"stdin, Not Environment Variables","text":"
Hook input arrives as JSON on stdin, not in environment variables.
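A minimal shell sketch of the stdin contract (ctx's real hooks are Go binaries; the sed extraction here is illustrative only, and a production hook should use a proper JSON parser):

```shell
# Hooks read their payload from stdin, not from environment variables.
# Pull session_id out of the JSON payload with a naive sed capture.
extract_session_id() {
  sed -n 's/.*"session_id"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

echo '{"session_id":"abc-123"}' | extract_session_id   # prints: abc-123
```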
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#regex-overfitting","level":3,"title":"Regex Overfitting","text":"
A regex meant to catch ctx as a binary will also match ctx as a directory component:
# Too broad: blocks: git -C /home/jose/WORKSPACE/ctx status\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# Narrow to binary only:\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
Test hook regexes against paths that contain the target string as a substring, not just as the final component.
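For example, with grep -E (the paths here are hypothetical test strings, not from the ctx codebase):

```shell
# The broad pattern matches any path containing "ctx"; the narrow one
# requires "ctx" as a final path component.
broad='(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*'
narrow='(/home/|/tmp/|/var/)[^ ]*/ctx( |$)'

# A binary invocation: both patterns match.
echo '/home/user/.local/bin/ctx status' | grep -Eq "$narrow" && echo "narrow: match"

# "ctx" as a substring of a directory name: broad matches, narrow does not.
echo '/home/user/myctx-tools/run build' | grep -Eq "$broad"  && echo "broad: match"
echo '/home/user/myctx-tools/run build' | grep -Eq "$narrow" || echo "narrow: no match"
```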
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#repetition-fatigue","level":3,"title":"Repetition Fatigue","text":"
Injecting context on every tool call sounds safe. In practice, after seeing the same context injection fifteen times, the agent treats it as background noise: Conventions stated in the injected context get violated because salience has been destroyed by repetition.
Fix: cooldowns. ctx agent --session $PPID --cooldown 10m injects at most once per ten minutes per session using a tombstone file in /tmp/. This is not an optimization; it is a correction for a design flaw. Every injection consumes attention budget: 50 tool calls at 4,000 tokens each means 200,000 tokens of repeated context, most of it wasted.
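The tombstone mechanic can be sketched like this (hypothetical marker path and window; ctx implements the real version inside its Go hooks):

```shell
# Allow an injection only if none has happened within the cooldown window.
# $1: tombstone file path; $2: cooldown window in minutes.
cooldown_ok() {
  marker="$1"
  window_min="$2"
  # If the tombstone exists and is newer than the window, skip injection.
  if [ -n "$(find "$marker" -mmin "-$window_min" 2>/dev/null)" ]; then
    return 1
  fi
  touch "$marker"    # record this injection for future calls
  return 0
}
```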
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#hardcoded-paths","level":3,"title":"Hardcoded Paths","text":"
A username rename (from parallels to jose) broke every hook at once. Use $CLAUDE_PROJECT_DIR instead of absolute paths.
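For example (the hook filename here is hypothetical):

```shell
# Brittle: breaks the moment the home directory moves.
#   /home/jose/myproject/.claude/hooks/check-backup-age.sh
# Portable: CLAUDE_PROJECT_DIR is resolved by the platform at runtime.
hook_path() {
  echo "${CLAUDE_PROJECT_DIR:?CLAUDE_PROJECT_DIR not set}/.claude/hooks/check-backup-age.sh"
}
```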
If the platform provides a runtime variable for paths, always use it.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#next-up","level":2,"title":"Next Up","text":"
Webhook Notifications →: Get push notifications when loops complete, hooks fire, or agents hit milestones.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#see-also","level":2,"title":"See Also","text":"
Customizing Hook Messages: override what hooks say without changing what they do
Claude Code Permission Hygiene: how permissions and hooks work together
Defense in Depth: why hooks matter for agent security
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/","level":1,"title":"Hook Sequence Diagrams","text":"","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#hook-lifecycle","level":2,"title":"Hook Lifecycle","text":"
Every ctx hook is a Go binary invoked by Claude Code at one of three lifecycle events: PreToolUse (before a tool runs, can block), PostToolUse (after a tool completes), or UserPromptSubmit (on every user prompt, before any tools run). Hooks receive JSON on stdin and emit JSON or plain text on stdout.
This page documents the execution flow of every hook as a sequence diagram.
Daily check for unimported sessions and unenriched journal entries.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-journal\n participant State as .context/state/\n participant Journal as Journal dir\n participant Claude as Claude projects dir\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Check dir exists\n Hook->>Claude: Check dir exists\n alt either dir missing\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Get newest entry mtime\n Hook->>Claude: Count .jsonl files newer than journal\n Hook->>Journal: Count unenriched entries\n alt unimported == 0 and unenriched == 0\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, variant, {counts})\n Note over Hook: variant: both | unimported | unenriched\n Hook-->>CC: Nudge box (counts)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
Per-session check for MEMORY.md changes since last sync.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-memory-drift\n participant State as .context/state/\n participant Mem as memory.Discover\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check session tombstone\n alt already nudged this session\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: DiscoverMemoryPath(projectRoot)\n alt auto memory not active\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: HasDrift(contextDir, sourcePath)\n alt no drift\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, fallback)\n Hook-->>CC: Nudge box (drift reminder)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch session tombstone
Tracks context file modification and nudges when edits happen without persisting context. Adaptive threshold based on prompt count.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-persistence\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Read persistence state {Count, LastNudge, LastMtime}\n alt first prompt (no state)\n Hook->>State: Initialize state {Count:1, LastNudge:0, LastMtime:now}\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Increment Count\n Hook->>Ctx: Get current context mtime\n alt context modified since LastMtime\n Hook->>State: Reset LastNudge = Count, update LastMtime\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: sinceNudge = Count - LastNudge\n Hook->>Hook: PersistenceNudgeNeeded(Count, sinceNudge)?\n alt threshold not reached\n Hook->>State: Write state\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, vars)\n Hook-->>CC: Nudge box (prompt count, time since last persist)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Update LastNudge = Count, write state
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-reminders\n participant Store as Reminders store\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>Store: ReadReminders()\n alt load error\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Filter by due date (After <= today)\n alt no due reminders\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, reminders, {list})\n Hook-->>CC: Nudge box (reminder list + dismiss hints)\n Hook->>Hook: NudgeAndRelay(message)
Silent per-prompt pulse. Tracks prompt count, context modification, and token usage. The agent never sees this hook's output.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as heartbeat\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Notify as Webhook + EventLog\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Increment heartbeat counter\n Hook->>Ctx: Get latest context file mtime\n Hook->>State: Compare with last recorded mtime\n Hook->>State: Update mtime record\n Hook->>State: Read session token info\n Hook->>Notify: Send heartbeat notification\n Hook->>Notify: Append to event log\n Hook->>State: Write heartbeat log entry\n Note over Hook: No stdout - agent never sees this
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-backup-age\n participant State as .context/state/\n participant FS as Filesystem\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>FS: Check SMB mount (if env var set)\n Hook->>FS: Check backup marker file age\n alt no warnings\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, warning, {Warnings})\n Hook-->>CC: Nudge box (warnings)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#throttling-summary","level":2,"title":"Throttling Summary","text":"Hook Lifecycle Throttle Type Scope context-load-gate PreToolUse One-shot marker Per session block-non-path-ctx PreToolUse None Every match qa-reminder PreToolUse None Every git command specs-nudge PreToolUse None Every prompt post-commit PostToolUse None Every git commit check-task-completion PostToolUse Configurable interval Per session check-context-size UserPromptSubmit Adaptive counter Per session check-ceremonies UserPromptSubmit Daily marker Once per day check-freshness UserPromptSubmit Daily marker Once per day check-journal UserPromptSubmit Daily marker Once per day check-knowledge UserPromptSubmit Daily marker Once per day check-map-staleness UserPromptSubmit Daily marker Once per day check-memory-drift UserPromptSubmit Session tombstone Once per session check-persistence UserPromptSubmit Adaptive counter Per session check-reminders UserPromptSubmit None Every prompt check-resources UserPromptSubmit None Every prompt check-version UserPromptSubmit Daily marker Once per day heartbeat UserPromptSubmit None Every prompt block-dangerous-commands PreToolUse * None Every match check-backup-age UserPromptSubmit * Daily marker Once per day
* Project-local hook (settings.local.json), not shipped with ctx.
Claude Code plan files (~/.claude/plans/*.md) are ephemeral: They have structured context, approach, and file lists, but they're orphaned after the session ends. The filenames are UUIDs, so you can't tell what's in them without opening each one.
How do you turn a useful plan into a permanent project spec?
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tldr","level":2,"title":"TL;DR","text":"
You: /ctx-import-plans\nAgent: [lists plans with dates and titles]\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\nYou: \"import 1\"\nAgent: [copies to specs/add-authentication-middleware.md]\n
Plans are copied (not moved) to specs/, slugified by their H1 heading.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-import-plans Skill List, filter, and import plan files to specs /ctx-add-task Skill Optionally add a task referencing the spec","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-1-list-available-plans","level":3,"title":"Step 1: List Available Plans","text":"
Invoke the skill and it lists plans with modification dates and titles:
You: /ctx-import-plans\n\nAgent: Found 3 plan files:\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\n 3. 2026-02-25 Import plans skill\n Which plans would you like to import?\n
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-2-filter-optional","level":3,"title":"Step 2: Filter (Optional)","text":"
You can narrow the list with arguments:
Argument Effect --today Only plans modified today --since YYYY-MM-DD Only plans modified on or after the date --all Import everything without prompting (none) Interactive selection
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-3-select-and-import","level":3,"title":"Step 3: Select and Import","text":"
Pick one or more plans by number:
You: \"import 1 and 3\"\n\nAgent: Imported 2 plan(s):\n ~/.claude/plans/abc123.md -> specs/add-authentication-middleware.md\n ~/.claude/plans/ghi789.md -> specs/import-plans-skill.md\n Want me to add tasks referencing these specs?\n
The agent reads the H1 heading from each plan and slugifies it for the filename. If a plan has no H1 heading, the original filename (minus extension) is used as the slug.
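As a rough sketch, the slug derivation described above might look like the following (assuming conventional kebab-case slugification; ctx's exact rules may differ):

```python
import re

def slugify(heading: str) -> str:
    """Lowercase the heading, collapse non-alphanumeric runs to single
    hyphens, and trim hyphens from both ends."""
    slug = re.sub(r"[^a-z0-9]+", "-", heading.lower())
    return slug.strip("-")

print(slugify("Add authentication middleware"))  # add-authentication-middleware
```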
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-4-add-follow-up-tasks-optional","level":3,"title":"Step 4: Add Follow-Up Tasks (Optional)","text":"
If you say yes, the agent creates tasks in TASKS.md that reference the imported specs:
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the exact skill name:
You say What happens \"import my plans\" /ctx-import-plans (interactive) \"save today's plans as specs\" /ctx-import-plans --today \"import all plans from this week\" /ctx-import-plans --since ... \"turn that plan into a spec\" /ctx-import-plans (filtered)","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tips","level":2,"title":"Tips","text":"
Plans are copied, not moved: The originals stay in ~/.claude/plans/. Claude Code manages that directory; ctx doesn't delete from it.
Conflict handling: If specs/{slug}.md already exists, the agent asks whether to overwrite or pick a different name.
Specs are project memory: Once imported, specs are tracked in git and available to future sessions. Reference them from TASKS.md phase headers with Spec: specs/slug.md.
Pair with /ctx-implement: After importing a plan as a spec, use /ctx-implement to execute it step-by-step with verification.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-import-plans: full skill description
The Complete Session: where plan import fits in the session flow
Tracking Work Across Sessions: managing tasks that reference imported specs
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/knowledge-capture/","level":1,"title":"Persisting Decisions, Learnings, and Conventions","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-problem","level":2,"title":"The Problem","text":"
You debug a subtle issue, discover the root cause, and move on.
Three weeks later, a different session hits the same issue. The knowledge existed briefly in one session's memory but was never written down.
Architectural decisions suffer the same fate: you weigh trade-offs, pick an approach, and six sessions later the AI suggests the alternative you already rejected.
How do you make sure important context survives across sessions?
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tldr","level":2,"title":"TL;DR","text":"
/ctx-reflect # surface items worth persisting\n/ctx-add-decision \"Title\" # record with context/rationale/consequence\n/ctx-add-learning \"Title\" # record with context/lesson/application\n
Or just tell your agent: \"What have we learned this session?\"
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx add decision Command Record an architectural decision ctx add learning Command Record a gotcha, tip, or lesson ctx add convention Command Record a coding pattern or standard ctx reindex Command Rebuild both quick-reference indices ctx decision reindex Command Rebuild the DECISIONS.md index ctx learning reindex Command Rebuild the LEARNINGS.md index /ctx-add-decision Skill AI-guided decision capture with validation /ctx-add-learning Skill AI-guided learning capture with validation /ctx-add-convention Skill AI-guided convention recording with placement /ctx-reflect Skill Surface items worth persisting at breakpoints","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-workflow","level":2,"title":"The Workflow","text":"
Decision, Learning, or Convention?
If you chose between alternatives, it is a decision.
If you discovered something surprising, it is a learning.
If you are codifying a repeated pattern, it is a convention.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-1-understand-what-to-persist","level":3,"title":"Step 1: Understand What to Persist","text":"
Three context files serve different purposes:
Decisions (DECISIONS.md) answer \"why is it this way?\" They record trade-offs between alternatives with structured fields: context, rationale, and consequences. Decisions prevent re-debating settled questions.
Learnings (LEARNINGS.md) answer \"what did we discover the hard way?\" They record gotchas and debugging insights specific to this project with structured fields: context, lesson, and application.
Learnings prevent repeating past mistakes.
Conventions (CONVENTIONS.md) answer \"how do we do things here?\" They record patterns and standards. No structured fields required: just a name, a rule, and an example. Conventions keep code consistent across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-2-record-decisions","level":3,"title":"Step 2: Record Decisions","text":"
Decisions have three structured fields:
context,
rationale,
and consequences.
The easiest way to record one is to tell your agent:
You: \"Save that as a decision: we picked file-based cooldown\n tokens over env vars.\"\n\nAgent: \"Recorded in DECISIONS.md:\n - Decision: Use file-based cooldown tokens instead of env vars\n - Context: Hook subprocesses cannot persist env vars to parent shell\n - Rationale: File tokens survive across processes. Simpler than\n IPC. Cleanup is automatic via TTL.\n - Consequence: Tombstone files accumulate in /tmp. Cannot share\n state across machines.\"\n
The agent was present for the trade-off discussion. It fills in the structured fields from what it observed; that detail is hard to reconstruct later in a terminal one-liner.
When multiple options were considered, the agent includes rejected alternatives in the rationale automatically.
The /ctx-add-decision skill also guides you through the fields interactively. For quick decisions, it supports a Y-statement:
\"In the context of [situation], facing [constraint], we decided for [choice] and against [alternatives], to achieve [benefit], accepting that [trade-off].\"
CLI Command for Scripting and Automation
When no agent is in the loop (CI pipelines, shell scripts, hooks), use the CLI directly with structured flags:
ctx add decision \"Use file-based cooldown tokens instead of env vars\" \\\n --context \"Hook subprocesses cannot persist env vars to parent shell\" \\\n --rationale \"File tokens survive across processes. Simpler than IPC. Cleanup is automatic via TTL.\" \\\n --consequence \"Tombstone files accumulate in /tmp. Cannot share state across machines.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-3-record-learnings","level":3,"title":"Step 3: Record Learnings","text":"
Learnings have three structured fields: context, lesson, and application.
Again, your agent is the best tool for this. It was there when you discovered the gotcha and can express it more completely than a backslash-escaped terminal command:
You: \"That subprocess thing was painful. Save it as a learning.\"\n\nAgent: \"Added to LEARNINGS.md:\n - Learning: Claude Code hooks run in a subprocess\n - Context: Set env var in PreToolUse hook, but it was not visible\n in the main session\n - Lesson: Hook scripts execute in a child process. Env changes do\n not propagate to parent.\n - Application: Use tombstone files for hook-to-session\n communication. Never rely on hook env vars.\"\n
You said one sentence. The agent produced a structured entry with all three fields filled from the session context.
That is the point: the agent has the full conversation and can be far more expressive than what you would type at a command prompt.
The /ctx-add-learning skill applies three quality filters:
Could someone Google this in 5 minutes?
Is it specific to this codebase?
Did it take real effort to discover?
All three must pass.
Learnings capture principles and heuristics, not code snippets.
CLI Command for Scripting and Automation
When no agent is in the loop:
ctx add learning \"Claude Code hooks run in a subprocess\" \\\n --context \"Set env var in PreToolUse hook, but it was not visible in the main session\" \\\n --lesson \"Hook scripts execute in a child process. Env changes do not propagate to parent.\" \\\n --application \"Use tombstone files for hook-to-session communication. Never rely on hook env vars.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-4-record-conventions","level":3,"title":"Step 4: Record Conventions","text":"
Conventions are simpler: just a name, a rule, and optionally a section.
These are short enough that either approach works:
You: \"We've been using kebab-case for every CLI flag. Codify that.\"\n\nAgent: \"Added to CONVENTIONS.md under Naming:\n 'Use kebab-case for all CLI flag names.'\"\n
Or from the terminal:
ctx add convention \"Use kebab-case for all CLI flag names\" --section \"Naming\"\n
Conventions work best for rules that come up repeatedly. Codify a pattern the third time you see it, not the first.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-5-reindex-after-manual-edits","level":3,"title":"Step 5: Reindex After Manual Edits","text":"
DECISIONS.md and LEARNINGS.md maintain a quick-reference index at the top: a compact table of date and title for each entry. The index updates automatically via ctx add, but falls out of sync after hand edits.
ctx reindex\n
This single command regenerates both indices. You can also reindex individually with ctx decision reindex or ctx learning reindex.
Run reindex after any manual edit. The index lets AI tools scan all entries without reading the full file, which matters when token budgets are tight.
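For illustration, the quick-reference index at the top of DECISIONS.md is a compact table along these lines (the layout here is illustrative; the exact format is whatever ctx add generates):

```markdown
| Date       | Title                                              |
|------------|----------------------------------------------------|
| 2026-02-28 | Use file-based cooldown tokens instead of env vars |
| 2026-02-25 | Use PostgreSQL over SQLite                         |
```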
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-6-use-ctx-reflect-to-surface-what-to-capture","level":3,"title":"Step 6: Use /ctx-reflect to Surface What to Capture","text":"
Keep It Conversational
/ctx-reflect is not the only way to trigger reflection.
Agents trained on the ctx playbook naturally surface persist-worthy items at breakpoints, even without invoking the skill explicitly.
A conversational prompt like \"anything worth saving?\" or \"let's wrap up\" can trigger the same review.
The skill provides a structured checklist, but the behavior is available through natural conversation.
At natural breakpoints (after completing a feature, fixing a bug, or before ending a session) use /ctx-reflect to identify items worth persisting.
/ctx-reflect\n
The skill walks through learnings, decisions, tasks, and session notes, skipping categories with nothing to report. The output includes a specific command for each item it suggests persisting:
This session implemented file-based cooldown for `ctx agent` and\ndiscovered that hook subprocesses cannot set env vars in the parent.\n\nI'd suggest persisting:\n- **Learning**: Hook subprocesses cannot propagate env vars\n `ctx add learning \"...\" --context \"...\" --lesson \"...\" --application \"...\"`\n- **Decision**: File-based cooldown tokens over env vars\n `ctx add decision \"...\" --context \"...\" --rationale \"...\" --consequence \"...\"`\n\nWant me to persist any of these?\n
The skill always asks before persisting.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-7-the-conversational-approach","level":3,"title":"Step 7: The Conversational Approach","text":"
The commands shown in Steps 2 to 4 are precise, but you rarely need to type them yourself. When working with an agent that has loaded the ctx playbook, natural language triggers the same operations.
You ask, the agent acts:
You say What the agent does \"What have we learned this session?\" Reviews recent work, identifies learnings, offers to persist \"Save that as a decision\" Captures the trade-off you just discussed with structured fields \"Any gotchas worth remembering?\" Scans recent work for non-obvious lessons \"Record that convention\" Adds the pattern you just agreed on to CONVENTIONS.md \"Let's wrap up\" Runs through the reflect checklist conversationally
Example exchange:
You: \"What did we learn from that debugging session?\"\n\nAgent: \"We discovered three things worth persisting:\n 1. Hook subprocesses can't propagate env vars to the parent. We\n spent 20 minutes on this before switching to tombstone files.\n 2. Tombstone files need a TTL or they accumulate in /tmp.\n 3. The cooldown logic must check file mtime, not just existence.\n Want me to save these as learnings?\"\n\nYou: \"Yes, all of them.\"\n\nAgent: \"Done. Added all three to LEARNINGS.md with structured fields.\"\n
The agent can also be proactive without being asked. The playbook encourages agents to persist context at natural milestones.
You may see:
After fixing a tricky bug: \"I noticed this was a non-obvious issue. Want me to save it as a learning?\"
After choosing between approaches: \"We just made a trade-off between file-based tokens and IPC. Should I record this as a decision?\"
At session milestones: the agent checks what is worth persisting and offers a brief summary.
When agents run unattended (no human in the loop), they may persist directly:
\"I've added the subprocess env var gotcha to LEARNINGS.md so we don't hit it again.\"
\"Recorded the decision to use file-based tokens over env vars in DECISIONS.md.\"
\"Marked the cooldown task done and added a follow-up for TTL cleanup.\"
This is behavior by design.
The playbook's self-check prompt (\"If this session ended right now, would the next session know what happened?\") drives agents to persist early and often rather than waiting for explicit instructions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#command-line-approach-scripting-and-automation","level":3,"title":"Command-line Approach (Scripting and Automation)","text":"
# Decision: record the trade-off\nctx add decision \"Use PostgreSQL over SQLite\" \\\n --context \"Need concurrent multi-user access\" \\\n --rationale \"SQLite locks on writes; Postgres handles concurrency\" \\\n --consequence \"Requires a database server; team needs Postgres training\"\n\n# Learning: record the gotcha\nctx add learning \"SQL migrations must be idempotent\" \\\n --context \"Deploy failed when migration ran twice after rollback\" \\\n --lesson \"CREATE TABLE without IF NOT EXISTS fails on retry\" \\\n --application \"Always use IF NOT EXISTS guards in migrations\"\n\n# Convention: record the pattern\nctx add convention \"API handlers return structured errors\" --section \"API\"\n\n# Reindex after manual edits\nctx reindex\n\n# Reflect at breakpoints (from AI assistant)\n# /ctx-reflect\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#conversational-approach-interactive-sessions","level":3,"title":"Conversational Approach (Interactive Sessions)","text":"
You: \"We just picked Postgres over SQLite for the concurrency\n reasons. Save that?\"\n\nAgent: \"Recorded. Added to DECISIONS.md:\n - Decision: Use PostgreSQL over SQLite\n - Context: Need concurrent multi-user access\n - Rationale: SQLite locks on writes; Postgres handles concurrency\n - Consequence: Requires a database server; team needs Postgres\n training\"\n\nYou: \"Also that migration thing. The deploy blew up when the\n migration ran twice.\"\n\nAgent: \"Got it. Added to LEARNINGS.md:\n - Learning: SQL migrations must be idempotent\n - Lesson: CREATE TABLE without IF NOT EXISTS fails on retry\n - Application: Always use IF NOT EXISTS guards in migrations\"\n\nYou: \"Good. Anything else worth saving before we wrap up?\"\n\nAgent: \"We've been using structured error returns in every API\n handler this session. Want me to codify that as a convention?\"\n\nYou: \"Do it.\"\n\nAgent: \"Done. Added 'API handlers return structured errors' to\n CONVENTIONS.md under the API section.\"\n
Both approaches produce the same structured entries in the same context files.
The conversational approach is the natural fit for interactive sessions;
the CLI commands are better suited for scripts, hooks, and automation pipelines.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tips","level":2,"title":"Tips","text":"
Record decisions at the moment of choice. The alternatives you considered and the reasons you rejected them fade quickly. Capture trade-offs while they are fresh.
Learnings should fail the Google test. If a lesson could be found with a 5-minute Google search, it does not belong in LEARNINGS.md.
Conventions earn their place through repetition. Add a convention the third time you see a pattern, not the first.
Use /ctx-reflect at natural breakpoints. The checklist catches items you might otherwise lose.
Keep the entries self-contained. Each entry should make sense on its own. A future session may load only one due to token budget constraints.
Reindex after every hand edit. It takes less than a second. A stale index causes AI tools to miss entries.
Prefer the structured fields. The verbosity forces clarity. A decision without a rationale is just a fact. A learning without an application is just a story.
Talk to your agent, do not type commands. In interactive sessions, the conversational approach is the recommended way to capture knowledge. Say \"save that as a learning\" or \"any decisions worth recording?\" and let the agent handle the structured fields. Reserve the CLI commands for scripting, automation, and CI/CD pipelines where there is no agent in the loop.
Trust the agent's proactive instincts. Agents trained on the ctx playbook will offer to persist context at milestones. A brief \"want me to save this?\" is cheaper than re-discovering the same lesson three sessions later.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#next-up","level":2,"title":"Next Up","text":"
Tracking Work Across Sessions →: Add, prioritize, complete, and archive tasks across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: managing the tasks that decisions and learnings support
The Complete Session: full session lifecycle including reflection and context persistence
Detecting and Fixing Drift: keeping knowledge files accurate as the codebase evolves
CLI Reference: full documentation for ctx add, ctx decision, ctx learning
Context Files: format and conventions for DECISIONS.md, LEARNINGS.md, and CONVENTIONS.md
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/memory-bridge/","level":1,"title":"Bridging Claude Code Auto Memory","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#the-problem","level":2,"title":"The Problem","text":"
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This file is:
Outside the repo - not version-controlled, not portable
Machine-specific - tied to one ~/.claude/ directory
Invisible to ctx - context loading and hooks don't read it
Meanwhile, ctx maintains structured context files (DECISIONS.md, LEARNINGS.md, CONVENTIONS.md) that are git-tracked, portable, and token-budgeted - but Claude Code doesn't automatically write to them.
The two systems hold complementary knowledge with no bridge between them.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#tldr","level":2,"title":"TL;DR","text":"
ctx memory sync # Mirror MEMORY.md into .context/memory/mirror.md\nctx memory status # Check for drift\nctx memory diff # See what changed since last sync\n
The check-memory-drift hook nudges automatically when MEMORY.md changes - you don't need to remember to sync manually.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx memory sync CLI command Copy MEMORY.md to mirror, archive previous ctx memory status CLI command Show drift, timestamps, line counts ctx memory diff CLI command Show changes since last sync ctx memory import CLI command Classify and promote entries to .context/ files ctx memory publish CLI command Push curated .context/ content to MEMORY.md ctx memory unpublish CLI command Remove published block from MEMORY.md ctx system check-memory-drift Hook Nudge when MEMORY.md has changed (once/session)","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#how-it-works","level":2,"title":"How It Works","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#discovery","level":3,"title":"Discovery","text":"
Claude Code encodes project paths as directory names under ~/.claude/projects/. The encoding replaces / with - and prefixes the result with -.
ctx memory uses this encoding to locate MEMORY.md automatically from your project root - no configuration needed.
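As an illustrative sketch of the encoding just described (ctx's actual implementation may handle more edge cases):

```python
def encode_project_path(path: str) -> str:
    """Encode a project path the way Claude Code names its per-project
    directories: every '/' becomes '-'. An absolute path already starts
    with '/', so the result naturally carries the leading '-' prefix."""
    encoded = path.replace("/", "-")
    return encoded if encoded.startswith("-") else "-" + encoded

print(encode_project_path("/Users/me/myproject"))  # -Users-me-myproject
```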
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#mirroring","level":3,"title":"Mirroring","text":"
When you run ctx memory sync:
The previous mirror is archived to .context/memory/archive/mirror-<timestamp>.md
MEMORY.md is copied to .context/memory/mirror.md
Sync state is updated in .context/state/memory-import.json
The mirror is git-tracked, so it travels with the project. Archives provide a fallback for projects that don't use git.
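The three sync steps above can be sketched roughly as follows (a simplified illustration using the paths named in this section; ctx's real implementation records more state):

```python
import json
import shutil
import time
from pathlib import Path

def sync_memory(memory_md: Path, ctx_dir: Path) -> None:
    """Sketch of the sync: archive the old mirror, copy MEMORY.md,
    record sync state. Paths mirror the ones documented above."""
    mirror = ctx_dir / "memory" / "mirror.md"
    archive = ctx_dir / "memory" / "archive"
    state = ctx_dir / "state" / "memory-import.json"
    stamp = time.strftime("%Y%m%d-%H%M%S")

    if mirror.exists():  # step 1: archive the previous mirror
        archive.mkdir(parents=True, exist_ok=True)
        shutil.copy2(mirror, archive / f"mirror-{stamp}.md")

    mirror.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(memory_md, mirror)  # step 2: copy MEMORY.md to the mirror

    state.parent.mkdir(parents=True, exist_ok=True)
    state.write_text(json.dumps({"last_sync": stamp}))  # step 3: update state
```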
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#drift-detection","level":3,"title":"Drift Detection","text":"
The check-memory-drift hook compares MEMORY.md's modification time against the mirror. When drift is detected, the agent sees:
┌─ Memory Drift ────────────────────────────────────────────────\n│ MEMORY.md has changed since last sync.\n│ Run: ctx memory sync\n│ Context: .context\n└────────────────────────────────────────────────────────────────\n
The nudge fires once per session to avoid noise.
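The mtime comparison the hook performs is conceptually simple; a minimal sketch:

```python
from pathlib import Path

def has_drift(memory_md: Path, mirror: Path) -> bool:
    """Drift = MEMORY.md modified more recently than the mirror.
    When either file is missing, skip silently, as the hook does."""
    if not memory_md.exists() or not mirror.exists():
        return False
    return memory_md.stat().st_mtime > mirror.stat().st_mtime
```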
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#typical-workflow","level":2,"title":"Typical Workflow","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#at-session-start","level":3,"title":"At Session Start","text":"
If the hook fires a drift nudge, sync before diving into work:
ctx memory diff # Review what changed\nctx memory sync # Mirror the changes\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#periodic-check","level":3,"title":"Periodic Check","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#dry-run","level":3,"title":"Dry Run","text":"
Preview what sync would do without writing:
ctx memory sync --dry-run\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#storage-layout","level":2,"title":"Storage Layout","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#edge-cases","level":2,"title":"Edge Cases","text":"Scenario Behavior Auto memory not active sync exits 1 with message. status reports \"not active\". Hook skips silently. First sync (no mirror) Creates mirror without archiving. MEMORY.md is empty Syncs to empty mirror (valid). Not initialized Init guard rejects (same as all ctx commands).","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#importing-entries","level":2,"title":"Importing Entries","text":"
Once you've synced, you can classify and promote entries into structured .context/ files:
Keywords Target always use, prefer, never use, standard CONVENTIONS.md decided, chose, trade-off, approach DECISIONS.md gotcha, learned, watch out, bug, caveat LEARNINGS.md todo, need to, follow up TASKS.md Everything else Skipped
Entries that don't match any pattern are skipped - they stay in the mirror for manual review. Deduplication (hash-based) prevents re-importing the same entry on subsequent runs.
Review Before Importing
Use --dry-run first. The heuristic classifier is deliberately simple - it may misclassify ambiguous entries. Review the plan, then import.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-workflow","level":3,"title":"Full Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Preview what would be imported\nctx memory import # 3. Promote entries to .context/ files\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#publishing-context-to-memorymd","level":2,"title":"Publishing Context to MEMORY.md","text":"
Push curated .context/ content back into MEMORY.md so Claude Code sees structured project context on session start - without needing hooks.
ctx memory publish --dry-run # Preview what would be published\nctx memory publish # Write to MEMORY.md\nctx memory publish --budget 40 # Tighter line budget\n
ctx memory publish replaces only inside the markers
To remove the published block entirely:
ctx memory unpublish\n
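Conceptually, publish rewrites only the region between a pair of markers, and unpublish deletes it. The sketch below uses hypothetical marker text (the actual markers ctx writes may differ):

```python
import re

# Hypothetical marker text, for illustration only.
BEGIN, END = "<!-- ctx:begin -->", "<!-- ctx:end -->"

def publish(memory: str, block: str) -> str:
    """Replace the region between the markers, or append it if absent."""
    section = f"{BEGIN}\n{block}\n{END}"
    if BEGIN in memory and END in memory:
        pattern = re.escape(BEGIN) + r".*?" + re.escape(END)
        return re.sub(pattern, section, memory, flags=re.DOTALL)
    return memory.rstrip() + "\n\n" + section + "\n"

def unpublish(memory: str) -> str:
    """Remove the published block entirely, leaving the rest untouched."""
    pattern = re.escape(BEGIN) + r".*?" + re.escape(END) + r"\n?"
    return re.sub(pattern, "", memory, flags=re.DOTALL)
```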
Publish at Wrap-Up, Not on Commit
The best time to publish is during session wrap-up, after persisting decisions and learnings. Never auto-publish - give yourself a chance to review what's going into MEMORY.md.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-bidirectional-workflow","level":3,"title":"Full Bidirectional Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Check what Claude wrote\nctx memory import # 3. Promote entries to .context/\nctx memory publish --dry-run # 4. Check what would be published\nctx memory publish # 5. Push context to MEMORY.md\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/multi-tool-setup/","level":1,"title":"Setup Across AI Tools","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-problem","level":2,"title":"The Problem","text":"
You have installed ctx and want to set it up with your AI coding assistant so that context persists across sessions. Different tools have different integration depths. For example:
Claude Code supports native hooks that load and save context automatically.
Cursor injects context via its system prompt.
Aider reads context files through its --read flag.
This recipe walks through the complete setup for each tool, from initialization through verification, so you end up with a working memory layer regardless of which AI tool you use.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tldr","level":2,"title":"TL;DR","text":"
Create a .ctxrc in your project root to configure token budgets, context directory, drift thresholds, and more.
Then start your AI tool and ask: \"Do you remember?\"
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Create .context/ directory, templates, and permissions ctx setup Generate integration configuration for a specific AI tool ctx agent Print a token-budgeted context packet for AI consumption ctx load Output assembled context in read order (for manual pasting) ctx watch Auto-apply context updates from AI output (non-native tools) ctx completion Generate shell autocompletion for bash, zsh, or fish ctx journal import Import sessions to editable journal Markdown","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-1-initialize-ctx","level":3,"title":"Step 1: Initialize ctx","text":"
Run ctx init in your project root. This creates the .context/ directory with all template files and seeds ctx permissions in settings.local.json.
cd your-project\nctx init\n
This produces the following structure:
.context/\n CONSTITUTION.md # Hard rules the AI must never violate\n TASKS.md # Current and planned work\n CONVENTIONS.md # Code patterns and standards\n ARCHITECTURE.md # System overview\n DECISIONS.md # Architectural decisions with rationale\n LEARNINGS.md # Lessons learned, gotchas, tips\n GLOSSARY.md # Domain terms and abbreviations\n AGENT_PLAYBOOK.md # How AI tools should use this system\n
Using a Different .context Directory
The .context/ directory doesn't have to live inside your project. You can point ctx to an external folder via .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
This is useful for monorepos or shared context across repositories.
See Configuration for details and External Context for a full recipe.
For Claude Code, install the ctx plugin to get hooks and skills:
claude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
If you only need the core files (useful for lightweight setups), use the --minimal flag:
ctx init --minimal\n
This creates only TASKS.md, DECISIONS.md, and CONSTITUTION.md.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-2-generate-tool-specific-hooks","level":3,"title":"Step 2: Generate Tool-Specific Hooks","text":"
If you are using a tool other than Claude Code (which is configured automatically by ctx init), generate its integration configuration:
# For Cursor\nctx setup cursor\n\n# For Aider\nctx setup aider\n\n# For GitHub Copilot\nctx setup copilot\n\n# For Windsurf\nctx setup windsurf\n
Each command prints the configuration you need. How you apply it depends on the tool.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#claude-code","level":4,"title":"Claude Code","text":"
No action needed. Just install ctx from the Marketplace as ActiveMemory/ctx.
Claude Code is a First-Class Citizen
With the ctx plugin installed, Claude Code gets hooks and skills automatically. The PreToolUse hook runs ctx agent --budget 4000 on every tool call (with a 10-minute cooldown so it only fires once per window).
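The cooldown behaves like a simple rate gate: the hook fires once, then stays silent for the rest of the window. A minimal sketch of that logic (illustrative; the function and variable names are hypothetical, not ctx's internals):

```python
import time

COOLDOWN_SECONDS = 600  # one firing per 10-minute window

_last_fired = {}  # session id -> timestamp of last firing

def should_fire(session_id, now=None):
    """Return True at most once per cooldown window, per session."""
    now = time.time() if now is None else now
    last = _last_fired.get(session_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False  # still inside the window; skip this tool call
    _last_fired[session_id] = now
    return True
```

Keying the window on a session id means parallel sessions each get their own cooldown instead of suppressing one another.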
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#cursor","level":4,"title":"Cursor","text":"
Add the system prompt snippet to .cursor/settings.json:
{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and .context/CONVENTIONS.md before responding. Follow rules in .context/CONSTITUTION.md.\"\n}\n
Context files appear in Cursor's file tree. You can also paste a context packet directly into chat:
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#aider","level":4,"title":"Aider","text":"
Create .aider.conf.yml so context files are loaded on every session:
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-3-set-up-shell-completion","level":3,"title":"Step 3: Set Up Shell Completion","text":"
Shell completion lets you tab-complete ctx subcommands and flags, which is especially useful while learning the CLI.
# Bash (add to ~/.bashrc)\nsource <(ctx completion bash)\n\n# Zsh (add to ~/.zshrc)\nsource <(ctx completion zsh)\n\n# Fish\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
After sourcing, typing ctx a<TAB> completes to ctx agent, and ctx journal <TAB> shows list, show, and export.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-4-verify-the-setup-works","level":3,"title":"Step 4: Verify the Setup Works","text":"
Start a fresh session in your AI tool and ask:
\"Do you remember?\"
A correctly configured tool responds with specific context: current tasks from TASKS.md, recent decisions, and previous session topics. It should not say \"I don't have memory\" or \"Let me search for files.\"
This question checks the passive side of memory. A properly set-up agent is also proactive, treating context maintenance as part of its job:
After a debugging session, it offers to save a learning.
After a trade-off discussion, it asks whether to record the decision.
After completing a task, it suggests follow-up items.
The \"do you remember?\" check verifies both halves: recall and responsibility.
For example, after resolving a tricky bug, a proactive agent might say:
That Redis timeout issue was subtle. Want me to save this as a *learning*\nso we don't hit it again?\n
If you see behavior like this, the setup is working end to end.
In Claude Code, you can also invoke the /ctx-status skill:
/ctx-status\n
This prints a summary of all context files, token counts, and recent activity, confirming that hooks are loading context.
If context is not loading, check the basics:
| Symptom | Fix |
| --- | --- |
| ctx: command not found | Ensure ctx is in your PATH: which ctx |
| Hook errors | Verify plugin is installed: claude /plugin list |
| Context not refreshing | Cooldown may be active; wait 10 minutes or set --cooldown 0 |
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-5-enable-watch-mode-for-non-native-tools","level":3,"title":"Step 5: Enable Watch Mode for Non-Native Tools","text":"
Tools like Aider, Copilot, and Windsurf do not support native hooks for saving context automatically. For these, run ctx watch alongside your AI tool.
Pipe the AI tool's output through ctx watch:
# Terminal 1: Run Aider with output logged\naider 2>&1 | tee /tmp/aider.log\n\n# Terminal 2: Watch the log for context updates\nctx watch --log /tmp/aider.log\n
Or for any generic tool:
your-ai-tool 2>&1 | tee /tmp/ai.log &\nctx watch --log /tmp/ai.log\n
When the AI emits structured update commands, ctx watch parses and applies them automatically:
<context-update type=\"learning\"\n context=\"Debugging rate limiter\"\n lesson=\"Redis MULTI/EXEC does not roll back on error\"\n application=\"Wrap rate-limit checks in Lua scripts instead\"\n>Redis Transaction Behavior</context-update>\n
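A parser for this format can be sketched with two regular expressions - one for the element, one for its attributes. This is an illustrative approximation, not ctx's actual parser:

```python
import re

TAG_RE = re.compile(
    r"<context-update\s+(?P<attrs>[^>]*?)>(?P<title>.*?)</context-update>",
    re.DOTALL,
)
ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_updates(text):
    """Extract structured <context-update> commands from AI output."""
    updates = []
    for match in TAG_RE.finditer(text):
        update = dict(ATTR_RE.findall(match.group("attrs")))
        update["title"] = match.group("title").strip()
        updates.append(update)
    return updates
```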
To preview changes without modifying files:
ctx watch --dry-run --log /tmp/ai.log\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-6-import-session-transcripts-optional","level":3,"title":"Step 6: Import Session Transcripts (Optional)","text":"
If you want to browse past session transcripts, import them to the journal:
ctx journal import --all\n
This converts raw session data into editable Markdown files in .context/journal/. You can then enrich them with metadata using /ctx-journal-enrich-all inside your AI assistant.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Here is the condensed setup for all three tools:
# ## Common (run once per project) ##\ncd your-project\nctx init\nsource <(ctx completion zsh) # or bash/fish\n\n# ## Claude Code (automatic, just verify) ##\n# Start Claude Code, then ask: \"Do you remember?\"\n\n# ## Cursor ##\nctx setup cursor\n# Add the system prompt to .cursor/settings.json\n# Paste context: ctx agent --budget 4000 | pbcopy\n\n# ## Aider ##\nctx setup aider\n# Create .aider.conf.yml with read: paths\n# Run watch mode alongside: ctx watch --log /tmp/aider.log\n\n# ## Verify any Tool ##\n# Ask your AI: \"Do you remember?\"\n# Expect: specific tasks, decisions, recent context\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tips","level":2,"title":"Tips","text":"
Start with ctx init (not --minimal) for your first project. The full template set gives the agent more to work with, and you can always delete files later.
For Claude Code, the token budget is configured in the plugin's hooks.json. To customize, adjust the --budget flag in the ctx agent hook command.
The --session $PPID flag isolates cooldowns per Claude Code process, so parallel sessions do not suppress each other.
Commit your .context/ directory to version control. Several ctx features (journals, changelogs, blog generation) rely on git history.
For Cursor and Copilot, keep CONVENTIONS.md visible. These tools treat open files as higher-priority context.
Run ctx drift periodically to catch stale references before they confuse the agent.
The agent playbook instructs the agent to persist context at natural milestones (completed tasks, decisions, gotchas). In practice, this works best when you reinforce the habit: a quick \"anything worth saving?\" after a debugging session goes a long way.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#companion-tools-highly-recommended","level":2,"title":"Companion Tools (Highly Recommended)","text":"
ctx skills can leverage external MCP servers for web search and code intelligence. ctx works without them, but they significantly improve agent behavior across sessions — the investment is small and the benefits compound. Skills like /ctx-code-review, /ctx-explain, and /ctx-refactor all become noticeably better with these tools connected.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gemini-search","level":3,"title":"Gemini Search","text":"
Provides grounded web search with citations. Used by skills and the agent playbook as the preferred search backend (faster and more accurate than built-in web search).
Setup: Add the Gemini Search MCP server to your Claude Code settings. See the Gemini Search MCP documentation for installation.
Verification:
# The agent checks this automatically during /ctx-remember\n# Manual test: ask the agent to search for something\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gitnexus","level":3,"title":"GitNexus","text":"
Provides a code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Used by skills like /ctx-refactor (impact analysis) and /ctx-code-review (dependency awareness).
Setup: Add the GitNexus MCP server to your Claude Code settings, then index your project:
npx gitnexus analyze\n
Verification:
# The agent checks this automatically during /ctx-remember\n# If the index is stale, it will suggest rehydrating\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#suppressing-the-check","level":3,"title":"Suppressing the Check","text":"
If you don't use companion tools and want to skip the availability check at session start, add to .ctxrc:
companion_check: false\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#future-direction","level":3,"title":"Future Direction","text":"
The companion tool integration is evolving toward a pluggable model: bring your own search engine, bring your own code intelligence. The current integration is MCP-based and limited to Gemini Search and GitNexus. If you use a different search or code intelligence tool, skills will degrade gracefully to built-in capabilities.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#next-up","level":2,"title":"Next Up","text":"
Keeping Context in a Separate Repo →: Store context files outside the project tree for multi-repo or open source setups.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle recipe
Multilingual Session Parsing: configure session header prefixes for other languages
CLI Reference: all commands and flags
Integrations: detailed per-tool integration docs
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multilingual-sessions/","level":1,"title":"Multilingual Session Parsing","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#the-problem","level":2,"title":"The Problem","text":"
Your team works across languages. Session files written by AI tools might use headers like # Oturum: 2026-01-15 - API Düzeltme (Turkish) or # セッション: 2026-01-15 - テスト (Japanese) instead of # Session: 2026-01-15 - Fix API.
By default, ctx only recognizes Session: as a session header prefix. Files with other prefixes are silently skipped during journal import and journal generation: They look like regular Markdown, not sessions.
session_prefixes:\n - \"Session:\" # English (include to keep default)\n - \"Oturum:\" # Turkish\n - \"セッション:\" # Japanese\n
Restart your session. All configured prefixes are now recognized.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#how-it-works","level":2,"title":"How It Works","text":"
The Markdown session parser detects session files by looking for an H1 header that starts with a known prefix followed by a date:
# Session: 2026-01-15 - Fix API Rate Limiting\n# Oturum: 2026-01-15 - API Düzeltme\n# セッション: 2026-01-15 - テスト\n
The list of recognized prefixes comes from session_prefixes in .ctxrc. When the key is absent or empty, ctx falls back to the built-in default: [\"Session:\"].
Date-only headers (# 2026-01-15 - Morning Work) are always recognized regardless of prefix configuration.
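The detection logic amounts to: strip an optional known prefix, then require a leading date. A minimal sketch (illustrative; ctx's real parser may differ in details):

```python
import re

DEFAULT_PREFIXES = ["Session:"]
DATE = re.compile(r"\d{4}-\d{2}-\d{2}")

def is_session_header(line, prefixes=None):
    """True if line is an H1 with a known prefix (or no prefix) and a date."""
    if not line.startswith("# "):
        return False
    rest = line[2:].strip()
    for prefix in prefixes or DEFAULT_PREFIXES:
        if rest.startswith(prefix):
            rest = rest[len(prefix):].strip()
            break
    return DATE.match(rest) is not None  # date-only headers always pass
```

Note how an unrecognized prefix simply fails the date check, which is why such files are skipped rather than rejected with an error.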
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#configuration","level":2,"title":"Configuration","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#adding-a-language","level":3,"title":"Adding a language","text":"
Add the prefix with a trailing colon to your .ctxrc:
When you override session_prefixes, the default is replaced, not extended. If you still want English headers recognized, include \"Session:\" in your list.
Commit .ctxrc to the repo so all team members share the same prefix list. This ensures ctx journal import and journal generation pick up sessions from all team members regardless of language.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#common-prefixes","level":3,"title":"Common prefixes","text":"Language Prefix English Session: Turkish Oturum: Spanish Sesión: French Session: German Sitzung: Japanese セッション: Korean 세션: Portuguese Sessão: Chinese 会话:","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#verifying","level":3,"title":"Verifying","text":"
After configuring, test with ctx journal source. Sessions with the new prefixes should appear in the output.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#what-this-does-not-do","level":2,"title":"What This Does NOT Do","text":"
Change the interface language: ctx output is always English. This setting only controls which session files ctx can parse.
Generate headers: ctx never writes session headers. The prefix list is recognition-only (input, not output).
Affect JSONL sessions: Claude Code JSONL transcripts don't use header prefixes. This only applies to Markdown session files in .context/sessions/.
See also: Setup Across AI Tools - complete multi-tool setup including Markdown session configuration.
See also: CLI Reference - full .ctxrc field reference including session_prefixes.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/parallel-worktrees/","level":1,"title":"Parallel Agent Development with Git Worktrees","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-problem","level":2,"title":"The Problem","text":"
You have a large backlog (10, 20, 30 open tasks) and many of them are independent: docs work that doesn't touch Go code, a new package that doesn't overlap with existing ones, test coverage for a stable module.
Running one agent at a time means serial execution. You want 3-4 agents working in parallel, each on its own track, without stepping on each other's files.
Git worktrees solve this.
Each worktree is a separate working directory with its own branch, but they share the same .git object database. Combined with ctx's persistent context, each agent session picks up the full project state and works independently.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tldr","level":2,"title":"TL;DR","text":"
/ctx-worktree # 1. group tasks by file overlap\ngit worktree add ../myproject-docs -b work/docs # 2. create worktrees\ncd ../myproject-docs && claude # 3. launch agents (one per track)\n/ctx-worktree teardown docs # 4. merge back and clean up\n
TASKS.md will conflict on merge: Accept all [x] completions from both sides.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-worktree Skill Create, list, and tear down worktrees /ctx-next Skill Pick tasks from the backlog for each track git worktree Command Underlying git worktree management git merge Command Merge completed tracks back to main","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-1-assess-the-backlog","level":3,"title":"Step 1: Assess the Backlog","text":"
Start in your main checkout. Ask the agent to analyze your tasks and group them by blast radius: which files and directories each task touches.
/ctx-worktree\nLook at TASKS.md and group the pending tasks into 2-3 independent\ntracks based on which files they'd touch. Show me the grouping\nbefore creating anything.\n
The agent reads TASKS.md, estimates file overlap, and proposes groups:
Proposed worktree groups:\n\n work/docs # recipe updates, blog post (touches: docs/)\n work/crypto # scratchpad encryption infra (touches: internal/crypto/)\n work/tests # journal test coverage (touches: internal/cli/journal/)\n
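The grouping is essentially connected components over file overlap: any two tasks that touch a shared path end up in the same track. A minimal sketch of that idea (illustrative, with hypothetical task data):

```python
def group_tasks(tasks):
    """Group (name, paths) tasks into tracks; tasks sharing a path merge."""
    tracks = []  # each track: (set of task names, set of touched paths)
    for name, paths in tasks:
        paths = set(paths)
        names = {name}
        remaining = []
        for track_names, track_paths in tracks:
            if track_paths & paths:  # file overlap -> same track
                names |= track_names
                paths |= track_paths
            else:
                remaining.append((track_names, track_paths))
        remaining.append((names, paths))
        tracks = remaining
    return tracks
```

Because a new task merges every track it overlaps, two previously separate tracks collapse into one as soon as a task bridges them.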
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-2-create-the-worktrees","level":3,"title":"Step 2: Create the Worktrees","text":"
Once you approve the grouping, the agent creates worktrees as sibling directories:
Each worktree is a full working copy on its own branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-3-launch-agents","level":3,"title":"Step 3: Launch Agents","text":"
Open a separate terminal (or editor window) for each worktree and start a Claude Code session:
Each agent sees the full project, including .context/, and can work independently.
Do Not Initialize Context in Worktrees
Do not run ctx init in worktrees: The .context directory is already tracked in git.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-4-work","level":3,"title":"Step 4: Work","text":"
Each agent works through its assigned tasks. They can read TASKS.md to know what's assigned to their track, use /ctx-next to pick the next item, and commit normally on their work/* branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-5-merge-back","level":3,"title":"Step 5: Merge Back","text":"
As each track finishes, return to the main checkout and merge:
/ctx-worktree teardown docs\n
The agent checks for uncommitted changes, merges work/docs into your current branch, removes the worktree, and deletes the branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-6-handle-tasksmd-conflicts","level":3,"title":"Step 6: Handle TASKS.md Conflicts","text":"
TASKS.md will almost always conflict when merging: Multiple agents will mark different tasks as [x]. This is expected and easy to resolve:
Accept all completions from both sides. No task should go from [x] back to [ ]. The merge resolution is always additive.
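The additive rule can be stated mechanically: collect every task marked [x] on either side, then re-emit the file with those tasks checked. A sketch, assuming both sides list the same tasks and differ only in checkbox state:

```python
import re

CHECKBOX = re.compile(r"^(\s*- \[)([ x])(\] )(.*)$")

def merge_tasks(ours, theirs):
    """Additive merge of two TASKS.md versions: [x] on either side wins."""
    done = set()
    for text in (ours, theirs):
        for line in text.splitlines():
            m = CHECKBOX.match(line)
            if m and m.group(2) == "x":
                done.add(m.group(4))  # remember completed task text
    merged = []
    for line in ours.splitlines():
        m = CHECKBOX.match(line)
        if m and m.group(4) in done:
            line = f"{m.group(1)}x{m.group(3)}{m.group(4)}"
        merged.append(line)
    return "\n".join(merged)
```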
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-7-cleanup","level":3,"title":"Step 7: Cleanup","text":"
After all tracks are merged, verify everything is clean:
/ctx-worktree list\n
Should show only the main working tree. All work/* branches should be gone.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't have to use the skill directly for every step. These natural prompts work:
\"I have a big backlog. Can we split it across worktrees?\"
\"Which of these tasks can run in parallel without conflicts?\"
\"Merge the docs track back in.\"
\"Clean up all the worktrees, we're done.\"
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#what-works-differently-in-worktrees","level":2,"title":"What Works Differently in Worktrees","text":"
The encryption key lives at ~/.ctx/.ctx.key (user-level, outside the project). Because all worktrees on the same machine share this path, ctx pad and ctx notify work in worktrees automatically - no special setup needed.
One thing to watch:
Journal enrichment: ctx journal import and ctx journal enrich write files relative to the current working directory. Enrichments created in a worktree stay there and are discarded on teardown. Enrich journals on the main branch after merging: the JSONL session logs are always intact, and you don't lose any data.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tips","level":2,"title":"Tips","text":"
3-4 worktrees max. Beyond that, merge complexity outweighs the parallelism benefit. The skill enforces this limit.
Group by package or directory, not by priority. Two high-priority tasks that touch the same files must be in the same track.
TASKS.md will conflict on merge. This is normal. Accept all [x] completions: The resolution is always additive.
Don't run ctx init in worktrees. The .context/ directory is tracked in git. Running init overwrites shared context files.
Name worktrees by concern, not by number. work/docs and work/crypto are more useful than work/track-1 and work/track-2.
Commit frequently in each worktree. Smaller commits make merge conflicts easier to resolve.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#next-up","level":2,"title":"Next Up","text":"
Back to the beginning: Guide Your Agent →
Or explore the full recipe list.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#see-also","level":2,"title":"See Also","text":"
Running an Unattended AI Agent: for serial autonomous loops instead of parallel tracks
Tracking Work Across Sessions: managing the task backlog that feeds into parallelization
The Complete Session: the complete session workflow end-to-end, with examples
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/permission-snapshots/","level":1,"title":"Permission Snapshots","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#the-problem","level":2,"title":"The Problem","text":"
Claude Code's .claude/settings.local.json accumulates one-off permissions every time you click \"Allow\". After a few busy sessions, the file is full of session-specific entries that expand the agent's surface area beyond what you intended.
Since settings.local.json is .gitignored, there is no PR review or CI check. The file drifts independently on every machine, and there is no built-in way to reset to a known-good state.
/ctx-sanitize-permissions # audit for dangerous patterns\nctx permission snapshot # save golden image\n# ... sessions accumulate cruft ...\nctx permission restore # reset to golden state\n
Save a curated settings.local.json as a golden image, then restore from it to drop session-accumulated permissions. The golden file (.claude/settings.golden.json) is committed to version control and shared with the team.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx permission snapshot Save settings.local.json as golden image ctx permission restore Reset settings.local.json from golden image /ctx-sanitize-permissions Audit for dangerous patterns before snapshotting","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#step-by-step","level":2,"title":"Step by Step","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#1-curate-your-permissions","level":3,"title":"1. Curate Your Permissions","text":"
Start with a clean settings.local.json. Optionally run /ctx-sanitize-permissions to remove dangerous patterns first.
Review the file manually. Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
See the Permission Hygiene recipe for recommended defaults.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#2-take-a-snapshot","level":3,"title":"2. Take a Snapshot","text":"
ctx permission snapshot\n# Saved golden image: .claude/settings.golden.json\n
This creates a byte-for-byte copy. No re-encoding, no indent changes.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#3-commit-the-golden-file","level":3,"title":"3. Commit the Golden File","text":"
git add .claude/settings.golden.json\ngit commit -m \"Add permission golden image\"\n
The golden file is not gitignored (unlike settings.local.json). This is intentional: it becomes a team-shared baseline.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#4-auto-restore-at-the-session-start","level":3,"title":"4. Auto-Restore at the Session Start","text":"
Add this instruction to your CLAUDE.md:
## On Session Start\n\nRun `ctx permission restore` to reset permissions to the golden image.\n
The agent will restore the golden image at the start of every session, automatically dropping any permissions accumulated during previous sessions.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#5-update-when-intentional-changes-are-made","level":3,"title":"5. Update When Intentional Changes Are Made","text":"
When you add a new permanent permission (not a one-off debugging entry):
# Edit settings.local.json with the new permission\n# Then update the golden image:\nctx permission snapshot\ngit add .claude/settings.golden.json\ngit commit -m \"Update permission golden image: add cargo test\"\n
You don't need to remember exact commands. These natural-language prompts work with agents trained on the ctx playbook:
| What you say | What happens |
| --- | --- |
| \"Save my current permissions as baseline\" | Agent runs ctx permission snapshot |
| \"Reset permissions to the golden image\" | Agent runs ctx permission restore |
| \"Clean up my permissions\" | Agent runs /ctx-sanitize-permissions then snapshot |
| \"What permissions did I accumulate?\" | Agent diffs local vs golden |
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#next-up","level":2,"title":"Next Up","text":"
Turning Activity into Content →: Generate blog posts, changelogs, and journal sites from your project activity.
Permission Hygiene: recommended defaults and maintenance workflow
CLI Reference: ctx permission: full command documentation
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/publishing/","level":1,"title":"Turning Activity into Content","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-problem","level":2,"title":"The Problem","text":"
Your .context/ directory is full of decisions, learnings, and session history.
Your git log tells the story of a project evolving.
But none of this is visible to anyone outside your terminal.
You want to turn this raw activity into:
a browsable journal site,
blog posts,
changelog posts.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tldr","level":2,"title":"TL;DR","text":"
ctx journal import --all # 1. import sessions to markdown\n\n/ctx-journal-enrich-all # 2. add metadata and tags\n\nctx journal site --serve # 3. build and serve the journal\n\n/ctx-blog about the caching layer # 4. draft a blog post\n/ctx-blog-changelog v0.1.0 \"v0.2\" # 5. write a changelog post\n
Read on for details on each stage.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal import Command Import session JSONL to editable markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) ctx site feed Command Generate Atom feed from finalized blog posts make journal Makefile Shortcut for import + site rebuild /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich (recommended) /ctx-journal-enrich Skill Add metadata, summaries, and tags to one entry /ctx-blog Skill Draft a blog post from recent project activity /ctx-blog-changelog Skill Write a themed post from a commit range","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-1-import-sessions-to-markdown","level":3,"title":"Step 1: Import Sessions to Markdown","text":"
Raw session data lives as JSONL files in Claude Code's internal storage. The first step is converting these into readable, editable markdown.
# Import all sessions from the current project\nctx journal import --all\n\n# Import from all projects (if you work across multiple repos)\nctx journal import --all --all-projects\n\n# Import a single session by ID or slug\nctx journal import abc123\nctx journal import gleaming-wobbling-sutherland\n
Imported files land in .context/journal/ as individual Markdown files with session metadata and the full conversation transcript.
--all is safe by default: Only new sessions are imported. Existing files are skipped. Use --regenerate to re-import existing files (YAML frontmatter is preserved). Use --regenerate --keep-frontmatter=false -y to regenerate everything including frontmatter.
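The skip-unless-regenerate semantics can be sketched as follows. This is an illustrative model of the documented behavior, not ctx's actual code; the `import_session` helper and its file layout are hypothetical:

```python
from pathlib import Path

def import_session(session_id: str, transcript: str, journal: Path,
                   regenerate: bool = False) -> bool:
    """Write one session to the journal; returns True if a file was written.

    Existing files are skipped unless regenerate is set, in which case
    the YAML frontmatter (enrichment metadata) is preserved.
    """
    out = journal / f"{session_id}.md"
    if out.exists():
        if not regenerate:
            return False  # idempotent: never clobber an existing entry
        old = out.read_text()
        frontmatter = ""
        if old.startswith("---\n"):
            end = old.index("\n---\n", 4) + len("\n---\n")
            frontmatter = old[:end]  # keep enrichment metadata
        out.write_text(frontmatter + transcript)
        return True
    out.write_text(transcript)
    return True
```

This is why rerunning the import never destroys enrichment work: the transcript body is replaced, the frontmatter survives.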
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-2-enrich-entries-with-metadata","level":3,"title":"Step 2: Enrich Entries with Metadata","text":"
Raw entries have timestamps and conversations but lack the structured metadata that makes a journal searchable. Use /ctx-journal-enrich-all to process your entire backlog at once:
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
For large backlogs (20+ entries), it can spawn subagents to process entries in parallel.
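The noise filter amounts to a few cheap checks before any expensive enrichment happens. A hypothetical sketch; the field names and thresholds below are illustrative, not the skill's actual criteria:

```python
def worth_enriching(entry: dict) -> bool:
    """Heuristic noise filter for journal entries (illustrative thresholds)."""
    if entry.get("enriched"):
        return False                 # already processed
    if entry.get("kind") == "suggestion":
        return False                 # suggestion sessions are noise
    if entry.get("part", 1) > 1:
        return False                 # multipart continuations
    if entry.get("message_count", 0) < 5:
        return False                 # very short sessions
    return True
```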
This metadata powers better navigation in the journal site:
titles replace slugs,
summaries appear in the index,
and search covers topics and technologies.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-3-generate-the-journal-site","level":3,"title":"Step 3: Generate the Journal Site","text":"
With entries exported and enriched, generate the static site:
# Generate site files\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally (opens at http://localhost:8000)\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default. It uses zensical for static site generation (pipx install zensical).
Or use the Makefile shortcut that combines export and rebuild:
make journal\n
This runs ctx journal import --all followed by ctx journal site --build, then reminds you to enrich before rebuilding. To serve the built site, use make journal-serve or ctx serve (serve-only, no regeneration).
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#alternative-export-to-obsidian-vault","level":3,"title":"Alternative: Export to Obsidian Vault","text":"
If you use Obsidian for knowledge management, run ctx journal obsidian to generate a vault instead of (or alongside) the static site.
This produces an Obsidian-ready directory with wikilinks, MOC (Map of Content) pages for topics/files/types, and a \"Related Sessions\" footer on each entry for graph connectivity. Open the output directory in Obsidian as a vault.
The vault uses the same enriched source entries as the static site. Both outputs can coexist: The static site goes to .context/journal-site/, the vault to .context/journal-obsidian/.
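One plausible way a "Related Sessions" footer can be derived is by topic overlap between enriched entries. A hypothetical sketch (the real generator's scoring is not documented here and may also weight files and technologies):

```python
def related_sessions(target: str, topics: dict[str, set[str]], k: int = 3) -> list[str]:
    """Rank other sessions by topic overlap with the target session.

    `topics` maps a session title to its enriched topic set.
    """
    base = topics[target]
    scored = [
        (len(base & other), title)
        for title, other in topics.items()
        if title != target and base & other
    ]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # most overlap first
    return [title for _, title in scored[:k]]
```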
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-4-draft-blog-posts-from-activity","level":3,"title":"Step 4: Draft Blog Posts from Activity","text":"
When your project reaches a milestone worth sharing, use /ctx-blog to draft a post from recent activity. The skill gathers context from multiple sources: git log, DECISIONS.md, LEARNINGS.md, completed tasks, and journal entries.
/ctx-blog about the caching layer we just built\n/ctx-blog last week's refactoring work\n/ctx-blog lessons learned from the migration\n
From that context, the skill identifies a narrative arc, drafts an outline for your approval, writes the full post, and saves it to docs/blog/YYYY-MM-DD-slug.md.
Posts are written in first person with code snippets, commit references, and an honest discussion of what went wrong.
The Output is zensical-Flavored Markdown
The blog skills produce Markdown tuned for a zensical site: topics: frontmatter (zensical's tag field), a docs/blog/ output path, and a banner image reference.
The content is still standard Markdown and can be adapted to other static site generators, but the defaults assume a zensical project structure.
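A minimal sketch of a draft writer following those conventions. The `topics:` key and the docs/blog/YYYY-MM-DD-slug.md path match the text above; the `title` field and the slug rule are illustrative assumptions:

```python
from datetime import date
from pathlib import Path

def draft_post(title: str, topics: list[str], body: str, blog_dir: Path) -> Path:
    """Save a draft with zensical-style topics frontmatter and a dated slug path."""
    slug = title.lower().replace(" ", "-")
    post = blog_dir / f"{date.today():%Y-%m-%d}-{slug}.md"
    frontmatter = ("---\n"
                   + f"title: {title}\n"
                   + "topics:\n"
                   + "".join(f"  - {t}\n" for t in topics)
                   + "---\n\n")
    post.write_text(frontmatter + body)
    return post
```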
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-5-write-changelog-posts-from-commit-ranges","level":3,"title":"Step 5: Write Changelog Posts from Commit Ranges","text":"
For release notes or \"what changed\" posts, /ctx-blog-changelog takes a starting commit and a theme, then analyzes everything that changed:
/ctx-blog-changelog 040ce99 \"building the journal system\"\n/ctx-blog-changelog HEAD~30 \"what's new in v0.2.0\"\n/ctx-blog-changelog v0.1.0 \"the road to v0.2.0\"\n
The skill diffs the commit range, identifies the most-changed files, and constructs a narrative organized by theme rather than chronology, including a key commits table and before/after comparisons.
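Identifying the most-changed files is straightforward given `git diff --numstat <range>` output. A sketch of that ranking step (the skill's actual analysis is richer):

```python
def most_changed(numstat: str, k: int = 5) -> list[tuple[str, int]]:
    """Rank files by total churn from `git diff --numstat` output.

    Each line is "<added>\t<deleted>\t<path>"; binary files report "-"
    and are skipped here for simplicity.
    """
    churn: dict[str, int] = {}
    for line in numstat.strip().splitlines():
        added, deleted, path = line.split("\t")
        if added == "-":
            continue
        churn[path] = churn.get(path, 0) + int(added) + int(deleted)
    return sorted(churn.items(), key=lambda kv: -kv[1])[:k]
```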
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-6-generate-the-blog-feed","level":3,"title":"Step 6: Generate the Blog Feed","text":"
After publishing blog posts, generate the Atom feed so readers and automation can discover new content:
ctx site feed\n
This scans docs/blog/ for finalized posts (reviewed_and_finalized: true), extracts title, date, author, topics, and summary, and writes a valid Atom 1.0 feed to site/feed.xml. The feed is also generated automatically as part of make site.
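The gate into the feed is the `reviewed_and_finalized: true` frontmatter flag. A sketch of the selection step, using a plain-text check for brevity (the real command presumably parses the YAML properly):

```python
from pathlib import Path

def finalized_posts(blog_dir: Path) -> list[Path]:
    """Collect posts whose frontmatter opts into the feed."""
    return sorted(
        post for post in blog_dir.glob("*.md")
        if "reviewed_and_finalized: true" in post.read_text()
    )
```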
The feed is available at ctx.ist/feed.xml.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-conversational-approach","level":2,"title":"The Conversational Approach","text":"
You can also drive your publishing anytime with natural language:
\"write about what we did this week\"\n\"turn today's session into a blog post\"\n\"make a changelog post covering everything since the last release\"\n\"enrich the last few journal entries\"\n
The agent has full visibility into your .context/ state (tasks completed, decisions recorded, learnings captured), so its suggestions are grounded in what actually happened.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
The full pipeline from raw transcripts to published content:
# 1. Import all sessions\nctx journal import --all\n\n# 2. In Claude Code: enrich all entries with metadata\n/ctx-journal-enrich-all\n\n# 3. Build and serve the journal site\nmake journal\nmake journal-serve\n\n# 3b. Or generate an Obsidian vault\nctx journal obsidian\n\n# 4. In Claude Code: draft a blog post\n/ctx-blog about the features we shipped this week\n\n# 5. In Claude Code: write a changelog post\n/ctx-blog-changelog v0.1.0 \"what's new in v0.2.0\"\n
The journal pipeline is idempotent at every stage. You can rerun ctx journal import --all without losing enrichment. You can rebuild the site as many times as you want.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tips","level":2,"title":"Tips","text":"
Import regularly. Run ctx journal import --all after each session to keep your journal current. Only new sessions are imported: Existing files are skipped by default.
Use batch enrichment. /ctx-journal-enrich-all filters noise (suggestion sessions, trivial sessions, multipart continuations) so you do not have to decide what is worth enriching.
Keep journal files in .gitignore. Session journals can contain sensitive data: file contents, commands, internal discussions, and error messages with stack traces. Add .context/journal/ and .context/journal-site/ to .gitignore.
Use /ctx-blog for narrative posts and /ctx-blog-changelog for release posts. One finds a story in recent activity, the other explains a commit range by theme.
Edit the drafts. These skills produce drafts, not final posts. Review the narrative, add your perspective, and remove anything that does not serve the reader.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#next-up","level":2,"title":"Next Up","text":"
Running an Unattended AI Agent →: Set up an AI agent that works through tasks overnight without you at the keyboard.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx serve: serve-only (no regeneration)
Browsing and Enriching Past Sessions: journal browsing workflow
The Complete Session: capturing context during a session
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/scratchpad-sync/","level":1,"title":"Syncing Scratchpad Notes Across Machines","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-problem","level":2,"title":"The Problem","text":"
You work from multiple machines: a desktop and a laptop, or a local machine and a remote dev server.
The scratchpad entries are encrypted. The ciphertext (.context/scratchpad.enc) travels with git, but the encryption key lives outside the project at ~/.ctx/.ctx.key and is never committed. Without the key on each machine, you cannot read or write entries.
How do you distribute the key and keep the scratchpad in sync?
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. generates key\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key # 2. copy key\nchmod 600 ~/.ctx/.ctx.key # 3. secure it\n# Normal git push/pull syncs the encrypted scratchpad.enc\n# On conflict: ctx pad resolve → rebuild → git add + commit\n
Finding Your Key File
The key always lives at ~/.ctx/.ctx.key: one key per machine, at one fixed location.
Treat the Key Like a Password
The scratchpad key is the only thing protecting your encrypted entries.
Store a backup in a secure location such as a password manager, and treat it with the same care you would give passwords, certificates, or API tokens.
Anyone with the key can decrypt every scratchpad entry.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context (generates the key automatically) ctx pad add CLI command Add a scratchpad entry ctx pad rm CLI command Remove a scratchpad entry ctx pad edit CLI command Edit a scratchpad entry ctx pad resolve CLI command Show both sides of a merge conflict ctx pad merge CLI command Merge entries from other scratchpad files ctx pad import CLI command Bulk-import lines from a file ctx pad export CLI command Export blob entries to a directory scp Shell Copy the key file between machines git push / git pull Shell Sync the encrypted file via git/ctx-pad Skill Natural language interface to pad commands","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-1-initialize-on-machine-a","level":3,"title":"Step 1: Initialize on Machine A","text":"
Run ctx init on your first machine. The key is created automatically at ~/.ctx/.ctx.key:
ctx init\n# ...\n# Created ~/.ctx/.ctx.key (0600)\n# Created .context/scratchpad.enc\n
The key lives outside the project directory and is never committed. The .enc file is tracked in git.
Key Folder Change (v0.7.0+)
If you built ctx from source or upgraded past v0.6.0, the key location changed to ~/.ctx/.ctx.key. Check these legacy folders and copy your key manually:
# Old locations (pick whichever exists)\nls ~/.local/ctx/keys/ # pre-v0.7.0 user-level\nls .context/.ctx.key # pre-v0.6.0 project-local\n\n# Copy to the new location\nmkdir -p ~/.ctx && chmod 700 ~/.ctx\ncp <old-key-path> ~/.ctx/.ctx.key\nchmod 600 ~/.ctx/.ctx.key\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-2-copy-the-key-to-machine-b","level":3,"title":"Step 2: Copy the Key to Machine B","text":"
Use any secure transfer method. The key is always at ~/.ctx/.ctx.key:
# scp - create the target directory first\nssh user@machine-b \"mkdir -p ~/.ctx && chmod 700 ~/.ctx\"\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key\n\n# Or use a password manager, USB drive, etc.\n
Set permissions on Machine B:
chmod 600 ~/.ctx/.ctx.key\n
Secure the Transfer
The key is a raw 256-bit AES key. Anyone with the key can decrypt the scratchpad. Use an encrypted channel (SSH, password manager, vault).
Never paste it in plaintext over email or chat.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-3-normal-pushpull-workflow","level":3,"title":"Step 3: Normal Push/Pull Workflow","text":"
The encrypted file is committed, so standard git sync works:
# Machine A: add entries and push\nctx pad add \"staging API key: sk-test-abc123\"\ngit add .context/scratchpad.enc\ngit commit -m \"Update scratchpad\"\ngit push\n\n# Machine B: pull and read\ngit pull\nctx pad\n# 1. staging API key: sk-test-abc123\n
Both machines have the same key, so both can decrypt the same .enc file.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-4-read-and-write-from-either-machine","level":3,"title":"Step 4: Read and Write from Either Machine","text":"
Once the key is distributed, all ctx pad commands work identically on both machines. Entries added on Machine A are visible on Machine B after a git pull, and vice versa.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-5-handle-merge-conflicts","level":3,"title":"Step 5: Handle Merge Conflicts","text":"
If both machines add entries between syncs, pulling will create a merge conflict on .context/scratchpad.enc. Git cannot merge binary (encrypted) content automatically.
The fastest approach is ctx pad merge: It reads both conflict sides, deduplicates, and writes the union:
# Extract theirs to a temp file, then merge it in\ngit show :3:.context/scratchpad.enc > /tmp/theirs.enc\ngit checkout --ours .context/scratchpad.enc\nctx pad merge /tmp/theirs.enc\n\n# Done: Commit the resolved scratchpad:\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
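Stripped of encryption, the dedup-union that this merge performs can be sketched as follows (keeping our side's ordering is an assumption about the real command):

```python
def merge_entries(ours: list[str], theirs: list[str]) -> list[str]:
    """Union of both conflict sides: keep our entries in order, then
    append any entry that only exists on theirs. Duplicates are skipped."""
    seen = set(ours)
    merged = list(ours)
    for entry in theirs:
        if entry not in seen:
            seen.add(entry)
            merged.append(entry)
    return merged
```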
Alternatively, use ctx pad resolve to inspect both sides manually:
ctx pad resolve\n# === Ours (this machine) ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n#\n# === Theirs (incoming) ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n
Then reconstruct the merged scratchpad:
# Keep our side, then add the entry that only exists on theirs\ngit checkout --ours .context/scratchpad.enc\nctx pad add "new endpoint: api.example.com/v2"\n\n# Mark the conflict resolved\ngit add .context/scratchpad.enc\ngit commit -m "Resolve scratchpad merge conflict"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#merge-conflict-walkthrough","level":2,"title":"Merge Conflict Walkthrough","text":"
Here's a full scenario showing how conflicts arise and how to resolve them:
1. Both machines start in sync (1 entry):
Machine A: 1. staging API key: sk-test-abc123\nMachine B: 1. staging API key: sk-test-abc123\n
2. Both add entries independently:
Machine A adds: \"check DNS after deploy\"\nMachine B adds: \"new endpoint: api.example.com/v2\"\n
3. Machine A pushes first. Machine B pulls and gets a conflict:
git pull\n# CONFLICT (content): Merge conflict in .context/scratchpad.enc\n
4. Machine B runs ctx pad resolve:
ctx pad resolve\n# === Ours ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n#\n# === Theirs ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n
5. Rebuild with entries from both sides and commit:
# Keep our side, then add the entry that only exists on theirs\n# (or use the skill to guide you)\ngit checkout --ours .context/scratchpad.enc\nctx pad add "check DNS after deploy"\n\ngit add .context/scratchpad.enc\ngit commit -m "Merge scratchpad: keep entries from both machines"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#conversational-approach","level":3,"title":"Conversational Approach","text":"
When working with an AI assistant, you can resolve conflicts naturally:
You: \"I have a scratchpad merge conflict. Can you resolve it?\"\n\nAgent: \"Let me extract theirs and merge it in.\"\n [runs git show :3:.context/scratchpad.enc > /tmp/theirs.enc]\n [runs git checkout --ours .context/scratchpad.enc]\n [runs ctx pad merge /tmp/theirs.enc]\n \"Merged 2 new entries (1 duplicate skipped). Want me to\n commit the resolution?\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tips","level":2,"title":"Tips","text":"
Back up the key: If you lose it, you lose access to all encrypted entries. Store a copy in your password manager.
One key per project: Each ctx init generates a unique key. Don't reuse keys across projects.
Keys work in worktrees: Because the key lives at ~/.ctx/.ctx.key (outside the project), git worktrees on the same machine share the key automatically. No special setup needed.
Plaintext fallback for non-sensitive projects: If encryption adds friction and you have nothing sensitive, set scratchpad_encrypt: false in .ctxrc. Merge conflicts become trivial text merges.
Never commit the key: The key is stored outside the project at ~/.ctx/.ctx.key and should never be copied into the repository.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#next-up","level":2,"title":"Next Up","text":"
Hook Output Patterns →: Choose the right output pattern for your Claude Code hooks.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, when to use scratchpad vs context files
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-with-claude/","level":1,"title":"Using the Scratchpad","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-problem","level":2,"title":"The Problem","text":"
During a session you accumulate quick notes, reminders, intermediate values, and sometimes sensitive tokens. They don't fit TASKS.md (not work items) or DECISIONS.md (not decisions). They don't have the structured fields that LEARNINGS.md requires.
Without somewhere to put them, they get lost between sessions.
How do you capture working memory that persists across sessions without polluting your structured context files?
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tldr","level":2,"title":"TL;DR","text":"
ctx pad add \"check DNS propagation after deploy\"\nctx pad # list entries\nctx pad show 1 # print entry (pipe-friendly)\n
Entries are encrypted at rest and travel with git.
Use the /ctx-pad skill to manage entries from inside your AI session.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx pad CLI command List all scratchpad entries ctx pad show N CLI command Output raw text of entry N (pipe-friendly) ctx pad add CLI command Add a new entry ctx pad edit CLI command Replace, append to, or prepend to an entry ctx pad add --file CLI command Ingest a file as a blob entry ctx pad show N --out CLI command Extract a blob entry to a file ctx pad rm CLI command Remove an entry ctx pad mv CLI command Reorder entries ctx pad import CLI command Bulk-import lines from a file (or stdin) ctx pad export CLI command Export all blob entries to a directory /ctx-pad Skill Natural language interface to all pad commands","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-1-add-a-note-naturally","level":3,"title":"Step 1: Add a Note Naturally","text":"
You don't need to remember any syntax. Just tell your assistant what to jot down:
You: \"jot down: check DNS propagation after deploy\"\n\nAgent: \"Added to scratchpad:\n 1. check DNS propagation after deploy\"\n
Behind the scenes, the agent runs ctx pad add \"check DNS propagation after deploy\".
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-2-view-your-scratchpad","level":3,"title":"Step 2: View Your Scratchpad","text":"
You: \"show my scratchpad\"\n\nAgent: \"Your scratchpad has 3 entries:\n 1. check DNS propagation after deploy\n 2. staging API endpoint: api.staging.example.com/v2\n 3. retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-3-edit-an-entry","level":3,"title":"Step 3: Edit an Entry","text":"
You: \"update entry 2 to include the new endpoint\"\n\nAgent: \"Updated entry 2:\n was: staging API endpoint: api.staging.example.com/v2\n now: staging API endpoint: api.prod.example.com/v2\"\n
For partial edits, the agent can use --append or --prepend:
You: \"append the port number to entry 2\"\n\nAgent: \"Updated entry 2:\n staging API endpoint: api.prod.example.com/v2 :8443\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-4-compose-entries","level":3,"title":"Step 4: Compose Entries","text":"
Use ctx pad show N to pipe entry content into other operations:
You: \"append the content of entry 3 to entry 1\"\n\nAgent: [runs ctx pad edit 1 --append \"$(ctx pad show 3)\"]\n \"Done. Entry 1 is now:\n check DNS propagation after deploy retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-5-reorder-by-priority","level":3,"title":"Step 5: Reorder by Priority","text":"
You: \"move entry 4 to the top\"\n\nAgent: [runs ctx pad mv 4 1]\n \"Moved entry 4 to position 1. Scratchpad reordered.\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-6-clean-up","level":3,"title":"Step 6: Clean Up","text":"
When a note has served its purpose, remove it: say "delete entry 3" and the agent runs ctx pad rm 3.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-7-store-a-file-as-a-blob","level":3,"title":"Step 7: Store a File as a Blob","text":"
The scratchpad can hold small files (up to 64 KB) as encrypted blob entries. The file is base64-encoded and stored alongside a label you provide:
# Ingest a file: the first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# List shows the label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-8-extract-a-blob","level":3,"title":"Step 8: Extract a Blob","text":"
Use show --out to write the decoded file back to disk:
# Write blob entry to a file\nctx pad show 2 --out ./recovered-deploy.yaml\n\n# Or print to stdout (for piping)\nctx pad show 2 | head -5\n
Blob entries are encrypted identically to text entries: They're just base64-encoded before encryption. The --out flag decodes and writes the raw bytes.
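The blob round trip is just a size check plus base64, layered under the same encryption as text entries. A sketch of the encoding step, with a hypothetical entry shape (the real on-disk format is not documented here):

```python
import base64

MAX_BLOB = 64 * 1024  # documented 64 KB limit

def make_blob(label: str, raw: bytes) -> dict:
    """Encode a file as a blob entry (base64 before encryption)."""
    if len(raw) > MAX_BLOB:
        raise ValueError(f"blob exceeds {MAX_BLOB} bytes")
    return {"label": label, "blob": base64.b64encode(raw).decode("ascii")}

def extract_blob(entry: dict) -> bytes:
    """Decode a blob entry back to raw bytes (what --out writes)."""
    return base64.b64decode(entry["blob"])
```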
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-9-bulk-import-notes","level":3,"title":"Step 9: Bulk Import Notes","text":"
When you have a file with many notes (one per line), import them in bulk instead of adding one at a time:
# Import from a file: Each non-empty line becomes an entry\nctx pad import notes.txt\n\n# Or pipe from stdin\ngrep TODO *.go | ctx pad import -\n
All entries are written in a single encrypt/write cycle, regardless of how many lines the file contains.
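The line-to-entry rule can be sketched as a single parse pass: collect every non-empty line up front so one encrypt/write cycle covers the whole batch. Whether entries keep their leading whitespace is an assumption here:

```python
def parse_import(text: str) -> list[str]:
    """Turn an import file into entries: one per non-empty line."""
    return [line for line in text.splitlines() if line.strip()]
```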
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-10-export-blobs-to-disk","level":3,"title":"Step 10: Export Blobs to Disk","text":"
Export all blob entries to a directory as individual files. Each blob's label becomes the filename:
# Export to a directory (created if needed)\nctx pad export ./ideas\n\n# Preview what would be exported\nctx pad export --dry-run ./ideas\n\n# Force overwrite existing files\nctx pad export --force ./backup\n
When a file already exists, a unix timestamp is prepended to the filename to avoid collisions. Use --force to overwrite instead.
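The collision rule can be sketched as follows; the exact separator between timestamp and label is an assumption:

```python
import time
from pathlib import Path

def export_path(out_dir: Path, label: str, force: bool = False) -> Path:
    """Pick a destination for a blob: on collision, prepend a Unix
    timestamp to the filename unless force-overwrite is requested."""
    dest = out_dir / label
    if dest.exists() and not force:
        dest = out_dir / f"{int(time.time())}-{label}"
    return dest
```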
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#using-ctx-pad-in-a-session","level":2,"title":"Using /ctx-pad in a Session","text":"
Invoke the /ctx-pad skill first, then describe what you want in natural language. Without the skill prefix, the agent may route your request to TASKS.md or another context file instead of the scratchpad.
You: /ctx-pad jot down: check DNS after deploy\nYou: /ctx-pad show my scratchpad\nYou: /ctx-pad delete entry 3\n
Once the skill is active, it translates intent into commands:
You say (after /ctx-pad) What the agent does \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"remember this: retry limit is 5\" ctx pad add \"retry limit is 5\" \"show my scratchpad\" / \"what's on my pad\" ctx pad \"show me entry 3\" ctx pad show 3 \"delete the third one\" / \"remove entry 3\" ctx pad rm 3 \"change entry 2 to ...\" ctx pad edit 2 \"new text\" \"append ' +important' to entry 3\" ctx pad edit 3 --append \" +important\" \"prepend 'URGENT:' to entry 1\" ctx pad edit 1 --prepend \"URGENT: \" \"prioritize entry 4\" / \"move to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./ideas\" ctx pad export ./ideas
When in Doubt, Use the CLI Directly
The ctx pad commands work the same whether you run them yourself or let the skill invoke them.
If the agent misroutes a request, fall back to ctx pad add \"...\" in your terminal.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#when-to-use-scratchpad-vs-context-files","level":2,"title":"When to Use Scratchpad vs Context Files","text":"Situation Use Temporary reminders (\"check X after deploy\") Scratchpad Session-start reminders (\"remind me next session\") ctx remind Working values during debugging (ports, endpoints, counts) Scratchpad Sensitive tokens or API keys (short-term storage) Scratchpad Quick notes that don't fit anywhere else Scratchpad Work items with completion tracking TASKS.md Trade-offs between alternatives with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Decision Guide
If it has structured fields (context, rationale, lesson, application), it belongs in a context file like DECISIONS.md or LEARNINGS.md.
If it's a work item you'll mark done, it belongs in TASKS.md.
If you want a message relayed VERBATIM at the next session start, it belongs in ctx remind.
If it's a quick note, reminder, or working value (especially if it's sensitive or ephemeral) it belongs on the scratchpad.
Scratchpad Is Not a Junk Drawer
The scratchpad is for working memory, not long-term storage.
If a note is still relevant after several sessions, promote it:
A persistent reminder becomes a task, a recurring value becomes a convention, a hard-won insight becomes a learning.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tips","level":2,"title":"Tips","text":"
Entries persist across sessions: The scratchpad is committed (encrypted) to git, so entries survive session boundaries. Pick up where you left off.
Entries are numbered and reorderable: Use ctx pad mv to put high-priority items at the top.
ctx pad show N enables unix piping: Output raw entry text with no numbering prefix. Compose with --append, --prepend, or other shell tools.
Never share the key file contents with the AI: The agent knows how to use ctx pad commands but should never read or print the encryption key (~/.ctx/.ctx.key) directly.
Encryption is transparent: You interact with plaintext; the encryption/decryption happens automatically on every read/write.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#next-up","level":2,"title":"Next Up","text":"
Syncing Scratchpad Notes Across Machines →: Distribute encryption keys and scratchpad data across environments.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, encryption details, plaintext override
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
The Complete Session: full session lifecycle showing how the scratchpad fits into the broader workflow
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/session-archaeology/","level":1,"title":"Browsing and Enriching Past Sessions","text":"","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-problem","level":2,"title":"The Problem","text":"
After weeks of AI-assisted development you have dozens of sessions scattered across JSONL files in ~/.claude/projects/. Finding the session where you debugged the Redis connection pool, or remembering what you decided about the caching strategy three Tuesdays ago, often means grepping raw JSON.
There is no table of contents, no search, and no summaries.
This recipe shows how to turn that raw session history into a browsable, searchable, and enriched journal site you can navigate in your browser.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tldr","level":2,"title":"TL;DR","text":"
Export and Generate
ctx journal import --all\nctx journal site --serve\n
Enrich
/ctx-journal-enrich-all\n
Rebuild
ctx journal site --serve\n
Read on for what each stage does and why.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal source Command List parsed sessions with metadata ctx journal source --show Command Inspect a specific session in detail ctx journal import Command Import sessions to editable journal Markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) /ctx-history Skill Browse sessions inside your AI assistant /ctx-journal-enrich Skill Add frontmatter metadata to a single entry /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-workflow","level":2,"title":"The Workflow","text":"
The session journal follows a four-stage pipeline.
Each stage is idempotent and safe to re-run:
By default, each stage skips entries that have already been processed.
import -> enrich -> rebuild (site or Obsidian vault)\n
Stage Tool What it does Skips if Where Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) CLI or agent Enrich /ctx-journal-enrich-all Adds frontmatter, summaries, topic tags Frontmatter already present Agent only Rebuild ctx journal site --build Generates browsable static HTML N/A CLI only Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks N/A CLI only
Where Do You Run Each Stage?
Import (Steps 1 to 3) works equally well from the terminal or inside your AI assistant via /ctx-history. The CLI is fine here: the agent adds no special intelligence; it just runs the same command.
Enrich (Step 4) requires the agent: it reads conversation content and produces structured metadata.
Rebuild and serve (Step 5) is a terminal operation that starts a long-running server.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-1-list-your-sessions","level":3,"title":"Step 1: List Your Sessions","text":"
Start by seeing what sessions exist for the current project:
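ctx journal source\n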
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-2-inspect-a-specific-session","level":3,"title":"Step 2: Inspect a Specific Session","text":"
Before exporting everything, inspect a single session to see its metadata and conversation summary:
ctx journal source --show --latest\n
Or look up a specific session by its slug, partial ID, or UUID:
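For example, using the slug from the import examples below (illustrative identifier):
ctx journal source --show gleaming-wobbling-sutherland\n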
Add --full to see the complete message content instead of the summary view:
ctx journal source --show --latest --full\n
This is useful for checking what happened before deciding whether to export and enrich it.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-3-import-sessions-to-the-journal","level":3,"title":"Step 3: Import Sessions to the Journal","text":"
Import converts raw session data into editable Markdown files in .context/journal/:
# Import all sessions from the current project\nctx journal import --all\n\n# Import a single session\nctx journal import gleaming-wobbling-sutherland\n\n# Include sessions from all projects\nctx journal import --all --all-projects\n
--keep-frontmatter=false Discards Enrichments
--keep-frontmatter=false discards enriched YAML frontmatter during regeneration.
Back up your journal before using this flag.
Each imported file contains session metadata (date, time, duration, model, project, git branch), a tool usage summary, and the full conversation transcript.
Re-importing is safe. Running ctx journal import --all only imports new sessions: Existing files are never touched. Use --dry-run to preview what would be imported without writing anything.
To re-import existing files (e.g., after a format improvement), use --regenerate: Conversation content is regenerated while preserving any YAML frontmatter you or the enrichment skill has added. You'll be prompted before any files are overwritten.
--regenerate Replaces the Markdown Body
--regenerate preserves YAML frontmatter but replaces the entire Markdown body with freshly generated content from the source JSONL.
If you manually edited the conversation transcript (added notes, redacted sensitive content, restructured sections), those edits will be lost.
BACK UP YOUR JOURNAL FIRST.
To protect entries you've hand-edited, you can explicitly lock them:
ctx journal lock <pattern>\n
Locked entries are always skipped, regardless of flags.
If you prefer to add locked: true directly in frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json:
ctx journal sync\n
See ctx journal lock --help and ctx journal sync --help for details.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-4-enrich-with-metadata","level":3,"title":"Step 4: Enrich with Metadata","text":"
Raw imports have timestamps and transcripts but lack the semantic metadata that makes sessions searchable: topics, technology tags, outcome status, and summaries. The /ctx-journal-enrich* skills add this structured frontmatter.
Locked entries are skipped by enrichment skills, just as they are by import. Lock entries you want to protect before running batch enrichment.
Batch enrichment (recommended):
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
It shows you a grouped summary before applying changes so you can scan quickly rather than reviewing one by one.
For large backlogs (20+ entries), the skill can spawn subagents to process entries in parallel.
The skill also generates a summary and can extract decisions, learnings, and tasks mentioned during the session.
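The enriched frontmatter looks roughly like this (field names and values are illustrative; the skill defines the exact schema):
---\ntitle: Redis connection pool debugging\ntopics: [debugging, caching]\ntechnologies: [redis, go]\noutcome: resolved\nsummary: Traced pool exhaustion to a missing connection release.\n---\n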
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-5-generate-and-serve-the-site","level":3,"title":"Step 5: Generate and Serve the Site","text":"
With imported and enriched journal files, generate the static site:
# Generate site structure only\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate, build, and serve locally\nctx journal site --serve\n
Then open http://localhost:8000 to browse.
The site includes a date-sorted index, individual session pages with full conversations, search (press /), dark mode, and enriched titles in the navigation when frontmatter exists.
You can also serve an already-generated site with ctx serve, which only serves and never regenerates.
The site generator requires zensical (pipx install zensical).
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#where-does-agent-add-value","level":2,"title":"Where Does Agent Add Value?","text":"
Export, list, and show are mechanical: The agent runs the same CLI commands you would, so you can stay in your terminal for those.
The agent earns its keep in enrich. It reads conversation content, understands what happened, and produces structured metadata. That is agent work, not CLI work.
You can also ask your agent to browse sessions conversationally instead of remembering flags:
What did we work on last week?\nShow me the session about Redis.\nImport everything to the journal.\n
This is convenient but not required: ctx journal source gives you the same inventory.
Where the agent genuinely helps is chaining the pipeline:
You: What happened last Tuesday?\nAgent: Last Tuesday you worked on two sessions:\n - bright-dancing-hopper (2h 10m): refactored the middleware\n pipeline and added Redis caching\n - quiet-flowing-dijkstra (18m): quick fix for a nil pointer\n in the config loader\n Want me to export and enrich them?\nYou: Yes, do it.\nAgent: Exports both, enriches, then proposes frontmatter.\n
The value is staying in one context while the agent runs import -> enrich without you manually switching tools.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical pipeline from raw sessions to a browsable site:
# Terminal: import and generate\nctx journal import --all\nctx journal site --serve\n
# AI assistant: enrich\n/ctx-journal-enrich-all\n
# Terminal: rebuild with enrichments\nctx journal site --serve\n
If your project includes Makefile.ctx (deployed by ctx init), use make journal to combine import and rebuild stages. Then enrich inside Claude Code, then make journal again to pick up enrichments.
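A rough sketch of what the make journal target does (illustrative; the actual recipe lives in Makefile.ctx):
journal:\n\tctx journal import --all\n\tctx journal site --build\n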
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#session-retention-and-cleanup","level":2,"title":"Session Retention and Cleanup","text":"
Claude Code does not keep JSONL transcripts forever. Understanding its cleanup behavior helps you avoid losing session history.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#default-behavior","level":3,"title":"Default Behavior","text":"
Claude Code retains session transcripts for approximately 30 days. After that, JSONL files are automatically deleted during cleanup. Once deleted, ctx journal can no longer see those sessions - the data is gone.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-cleanupperioddays-setting","level":3,"title":"The cleanupPeriodDays Setting","text":"
Claude Code exposes a cleanupPeriodDays setting in its configuration (~/.claude/settings.json) that controls retention:
Value Behavior 30 (default) Transcripts older than 30 days are deleted 60, 90, etc. Extends the retention window 0 Disables writing new transcripts entirely - not \"keep forever\"
Setting cleanupPeriodDays to 0
Setting this to 0 does not mean \"never delete.\" It disables transcript creation altogether. No new JSONL files are written, which means ctx journal sees nothing new. This is rarely what you want.
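To extend retention instead, set a larger value in ~/.claude/settings.json:
{\n  "cleanupPeriodDays": 90\n}\n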
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#why-journal-import-matters","level":3,"title":"Why Journal Import Matters","text":"
The journal import pipeline (Steps 1-4 above) is your archival mechanism. Imported Markdown files in .context/journal/ persist independently of Claude Code's cleanup cycle. Even after the source JSONL files are deleted, your journal entries remain.
Recommendation: import regularly - weekly, or after any session worth revisiting. A quick ctx journal import --all takes seconds and ensures nothing falls through the 30-day window.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#quick-archival-checklist","level":3,"title":"Quick Archival Checklist","text":"
Run ctx journal import --all at least weekly
Enrich high-value sessions with /ctx-journal-enrich before the details fade from your own memory
Lock enriched entries (ctx journal lock <pattern>) to protect them from accidental regeneration
Rebuild the journal site periodically to keep it current
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tips","level":2,"title":"Tips","text":"
Start with /ctx-history inside your AI assistant. If you want to quickly check what happened in a recent session without leaving your editor, /ctx-history lets you browse interactively without importing.
Large sessions may be split automatically. Sessions with 200+ messages can be split into multiple parts (session-abc123.md, session-abc123-p2.md, session-abc123-p3.md) with navigation links between them. The site generator can handle this.
Suggestion sessions can be separated. Claude Code can generate short suggestion sessions for autocomplete. These may appear under a separate section in the site index, so they do not clutter your main session list.
Your agent is a good session browser. You do not need to remember slugs, dates, or flags. Ask \"what did we do yesterday?\" or \"find the session about Redis\" and it can map the question to recall commands.
Journal Files Are Sensitive
Journal files MUST be .gitignored.
Session transcripts can contain sensitive data such as file contents, commands, error messages with stack traces, and potentially API keys.
Add .context/journal/, .context/journal-site/, and .context/journal-obsidian/ to your .gitignore.
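For example, in .gitignore:
.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n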
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#next-up","level":2,"title":"Next Up","text":"
Persisting Decisions, Learnings, and Conventions →: Record decisions, learnings, and conventions so they survive across sessions.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#see-also","level":2,"title":"See Also","text":"
The Complete Session: where session saving fits in the daily workflow
Turning Activity into Content: generating blog posts from session history
Session Journal: full documentation of the journal system
CLI Reference: ctx journal: all journal subcommands and flags
CLI Reference: ctx serve: serve-only (no regeneration)
Context Files: the .context/ directory structure
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-ceremonies/","level":1,"title":"Session Ceremonies","text":"","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#the-problem","level":2,"title":"The Problem","text":"
Sessions have two critical moments: the start and the end.
At the start, you need the agent to load context and confirm it knows what is going on.
At the end, you need to capture whatever the session produced before the conversation disappears.
Most ctx skills work conversationally: \"jot down: check DNS after deploy\" is as good as /ctx-pad add \"check DNS after deploy\". But session boundaries are different. They are well-defined moments with specific requirements, and partial execution is costly.
If the agent only half-loads context at the start, it works from stale assumptions. If it only half-persists at the end, learnings and decisions are lost.
This Is One of the Few Times Being Explicit Matters
Session ceremonies are the two bookend skills that mark these boundaries.
They are the exception to the conversational rule:
Invoke /ctx-remember and /ctx-wrap-up explicitly as slash commands.
Most ctx skills encourage natural language. These two are different:
Well-defined moments: Sessions have clear boundaries. A slash command marks the boundary unambiguously.
Ambiguity risk: \"Do you remember?\" could mean many things. /ctx-remember means exactly one thing: load context and present a structured readback.
Completeness: Conversational triggers risk partial execution. The agent might load some files but skip the session history, or persist one learning but forget to check for uncommitted changes. The slash command runs the full ceremony.
Muscle memory: Typing /ctx-remember at session start and /ctx-wrap-up at session end becomes a habit, like opening and closing braces.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-remember Skill Load context and present structured readback /ctx-wrap-up Skill Gather session signal, propose and persist context /ctx-commit Skill Commit with context capture (offered by wrap-up) ctx agent CLI Load token-budgeted context packet ctx journal source CLI List recent sessions ctx add CLI Persist learnings, decisions, conventions, tasks","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#session-start-ctx-remember","level":2,"title":"Session Start: /ctx-remember","text":"
Invoke at the beginning of every session:
/ctx-remember\n
The skill silently:
Loads the context packet via ctx agent --budget 4000
Reads TASKS.md, DECISIONS.md, LEARNINGS.md
Checks recent sessions via ctx journal source --limit 3
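In terminal terms, the silent phase is roughly equivalent to:
ctx agent --budget 4000\nctx journal source --limit 3\n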
Then presents a structured readback with four sections:
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 relevant decisions or learnings
Next step: suggestion or question about what to focus on
The readback should feel like recall, not a file system tour. If the agent says \"Let me check if there are files...\" instead of a confident summary, the skill is not working correctly.
What About 'do you remember?'
The conversational trigger still works. But /ctx-remember guarantees the full ceremony runs:
After persisting, the skill marks the session as wrapped up via ctx system mark-wrapped-up. This suppresses context checkpoint nudges for 2 hours so the wrap-up ceremony itself does not trigger noisy reminders.
If there are uncommitted changes, offers to run /ctx-commit. Does not auto-commit.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#when-to-skip","level":2,"title":"When to Skip","text":"
Not every session needs ceremonies.
Skip /ctx-remember when:
You are doing a quick one-off lookup (reading a file, checking a value)
Context was already loaded this session via /ctx-agent
You are continuing immediately after a previous session and context is still fresh
Skip /ctx-wrap-up when:
Nothing meaningful happened (only read files, answered a question)
You already persisted everything manually during the session
The session was trivial (typo fix, quick config change)
A good heuristic: if the session produced something a future session should know about, run /ctx-wrap-up. If not, just close.
# Session start\n/ctx-remember\n\n# ... do work ...\n\n# Session end\n/ctx-wrap-up\n
That is the complete ceremony. Two commands, bookending your session.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#relationship-to-other-skills","level":2,"title":"Relationship to Other Skills","text":"Skill When Purpose /ctx-remember Session start Load and confirm context /ctx-reflect Mid-session breakpoints Checkpoint at milestones /ctx-wrap-up Session end Full session review and persist /ctx-commit After completing work Commit with context capture
/ctx-reflect is for mid-session checkpoints. /ctx-wrap-up is for end-of-session: it is more thorough, covers the full session arc, and includes the commit offer. If you already ran /ctx-reflect recently, /ctx-wrap-up avoids proposing the same candidates again.
Make it a habit: The value of ceremonies compounds over sessions. Each /ctx-wrap-up makes the next /ctx-remember richer.
Trust the candidates: The agent scans the full conversation. It often catches learnings you forgot about.
Edit before approving: If a proposed candidate is close but not quite right, tell the agent what to change. Do not settle for a vague learning when a precise one is possible.
Do not force empty ceremonies: If /ctx-wrap-up finds nothing worth persisting, that is fine. A session that only read files and answered questions does not need artificial learnings.
The Complete Session: the full session workflow that ceremonies bookend
Persisting Decisions, Learnings, and Conventions: deep dive on what gets persisted during wrap-up
Detecting and Fixing Drift: keeping context files accurate between ceremonies
Pausing Context Hooks: skip ceremonies entirely for quick tasks that don't need them
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-changes/","level":1,"title":"Reviewing Session Changes","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-changed-while-you-were-away","level":2,"title":"What Changed While You Were Away?","text":"
Between sessions, teammates commit code, context files get updated, and decisions pile up. ctx change gives you a single-command summary of everything that moved since your last session.
# Auto-detects your last session and shows what changed\nctx change\n\n# Check what changed in the last 48 hours\nctx change --since 48h\n\n# Check since a specific date\nctx change --since 2026-03-10\n
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#how-reference-time-works","level":2,"title":"How Reference Time Works","text":"
ctx change needs a reference point to compare against. It tries these sources in order:
--since flag: explicit duration (24h, 72h) or date (2026-03-10, RFC3339 timestamp)
Session markers: ctx-loaded-* files in .context/state/; picks the second-most-recent (your previous session start)
Event log: last context-load-gate event from .context/state/events.jsonl
Fallback: 24 hours ago
The marker-based detection means ctx change usually just works without any flags: it knows when you last loaded context and shows everything after that.
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-it-reports","level":2,"title":"What It Reports","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#context-file-changes","level":3,"title":"Context file changes","text":"
Any .md file in .context/ modified after the reference time:
No changes? If nothing shows up, the reference time might be wrong. Use --since 48h to widen the window.
Works without git. Context file changes are detected by filesystem mtime, not git. Code changes require git.
Hook integration. The context-load-gate hook writes the session marker that ctx change uses for auto-detection. If you're not using the ctx plugin, markers won't exist and it falls back to the event log or 24h window.
\"What does a full ctx session look like from start to finish?\"
You have ctx installed and your .context/ directory initialized, but the individual commands and skills feel disconnected.
How do they fit together into a coherent workflow?
This recipe walks through a complete session, from opening your editor to persisting context before you close it, so you can see how each piece connects.
Load: /ctx-remember: load context, get structured readback.
Orient: /ctx-status: check file health and token usage.
Pick: /ctx-next: choose what to work on.
Work: implement, test, iterate.
Commit: /ctx-commit: commit and capture decisions/learnings.
Reflect: /ctx-reflect: identify what to persist (at milestones).
Wrap up: /ctx-wrap-up: end-of-session ceremony.
Read on for the full walkthrough with examples.
What is a Readback?
A readback is a structured summary where the agent plays back what it knows:
last session,
active tasks,
recent decisions.
This way, you can confirm it loaded the right context.
The term \"readback\" comes from aviation, where pilots repeat instructions back to air traffic control to confirm they heard correctly.
Same idea in ctx: The agent tells you what it \"thinks\" is going on, and you correct anything that's off before the work begins.
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 decisions or learnings that matter now
Next step: suggestion or question about what to focus on
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx status CLI command Quick health check on context files ctx agent CLI command Load token-budgeted context packet ctx journal source CLI command List previous sessions ctx journal source --show CLI command Inspect a specific session in detail /ctx-remember Skill Recall project context with structured readback /ctx-agent Skill Load full context packet inside the assistant /ctx-status Skill Show context summary with commentary /ctx-next Skill Suggest what to work on with rationale /ctx-commit Skill Commit code and prompt for context capture /ctx-reflect Skill Structured reflection checkpoint /ctx-history Skill Browse session history inside your AI assistant","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#the-workflow","level":2,"title":"The Workflow","text":"
The session lifecycle has seven steps. You will not always use every step (for example, a quick bugfix might skip reflection, and a research session might skip committing), but the full arc looks like this:
Load context > Orient > Pick a Task > Work > Commit > Reflect > Wrap Up
Start every session by loading what you know. The fastest way is a single prompt:
Do you remember what we were working on?\n
This triggers the /ctx-remember skill. Behind the scenes, the assistant runs ctx agent --budget 4000, reads the files listed in the context packet (TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md), checks ctx journal source --limit 3 for recent sessions, and then presents a structured readback.
The readback should feel like recall, not a file system tour. If you see "Let me check if there are files..." instead of a confident summary, the context did not load properly.
As an alternative, if you want raw data instead of a readback, run ctx status in your terminal or invoke /ctx-status for a summarized health check showing file counts, token usage, and recent activity.
After loading context, verify you understand the current state.
/ctx-status\n
The status output shows which context files are populated, how many tokens they consume, and which files were recently modified. Look for:
Empty core files: TASKS.md or CONVENTIONS.md with no content means the context is sparse
High token count (over 30k): the context is bloated and might need ctx compact
No recent activity: files may be stale and need updating
If the status looks healthy and the readback from Step 1 gave you enough context, skip ahead.
If something seems off (stale tasks, missing decisions...), spend a minute reading the relevant file before proceeding.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
With context loaded, choose a task. You can pick one yourself, or ask the assistant to recommend:
/ctx-next\n
The skill reads TASKS.md, checks recent sessions to avoid re-suggesting completed work, and presents 1-3 ranked recommendations with rationale.
It prioritizes in-progress tasks over new starts (finishing is better than starting), respects explicit priority tags, and favors momentum: continuing a thread from a recent session is cheaper than context-switching.
If you already know what you want to work on, state it directly:
Let's work on the session enrichment feature.\n
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-4-do-the-work","level":3,"title":"Step 4: Do the Work","text":"
This is the main body of the session: write code, fix bugs, refactor, research: whatever the task requires.
During this phase, a few ctx-specific patterns help:
Check decisions before choosing: when you face a design choice, check if a prior decision covers it.
Is this consistent with our decisions?\n
Constrain scope: keep the assistant focused on the task at hand.
Only change files in internal/cli/session/. Nothing else.\n
Use /ctx-implement for multistep plans: if the task has multiple steps, this skill executes them one at a time with build/test verification between each step.
Context monitoring runs automatically: the check-context-size hook monitors context capacity at adaptive intervals. Early in a session it stays silent. After 16+ prompts it starts monitoring, and past 30 prompts it checks frequently. If context capacity is running high, it will suggest saving unsaved work. No manual invocation is needed.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-5-commit-with-context","level":3,"title":"Step 5: Commit with Context","text":"
When the work is ready, use the context-aware commit instead of raw git commit:
/ctx-commit\n
The Agent May Recommend Committing
You do not always need to invoke /ctx-commit explicitly.
After a commit, the agent may proactively offer to capture context:
\"We just made a trade-off there. Want me to record it as a decision?\"
This is normal: The Agent Playbook encourages persisting at milestones, and a commit is a natural milestone.
As an alternative, you can ask the assistant \"can we commit this?\" and it will pick up the /ctx-commit skill for you.
The skill runs a pre-commit build check (for Go projects, go build), reviews the staged changes, drafts a commit message focused on \"why\" rather than \"what\", and then commits.
After the commit succeeds, it prompts you:
**Any context to capture?**\n\n- **Decision**: Did you make a design choice or trade-off?\n- **Learning**: Did you hit a gotcha or discover something?\n- **Neither**: No context to capture; we are done.\n
If you made a decision, the skill records it with ctx add decision. If you learned something, it records it with ctx add learning including context, lesson, and application fields. This is the bridge between committing code and remembering why the code looks the way it does.
If source code changed in areas that affect documentation, the skill also offers to check for doc drift.
At natural breakpoints (after finishing a feature, resolving a complex bug, or before switching tasks) pause to reflect:
```
/ctx-reflect
```
Agents Reflect at Milestones
Agents often reflect without explicit invocation.
After completing a significant piece of work, the agent may naturally surface items worth persisting:
\"We discovered that $PPID resolves differently inside hooks. Should I save that as a learning?\"
This is the agent following the Work-Reflect-Persist cycle from the Agent Playbook.
You do not need to say /ctx-reflect for this to happen; the agent treats milestones as reflection triggers on its own.
The skill works through a checklist: learnings discovered, decisions made, tasks completed or created, and whether there are items worth persisting. It then presents a summary with specific items to persist, each with the exact command to run:
```
I would suggest persisting:

- **Learning**: `$PPID` in PreToolUse hooks resolves to the Claude Code PID
  `ctx add learning --context "..." --lesson "..." --application "..."`
- **Task**: mark "Add cooldown to ctx agent" as done
- **Decision**: tombstone-based cooldown with 10m default
  `ctx add decision "..."`

Want me to persist any of these?
```
The skill asks before persisting anything. You choose what to keep.
Not every commit needs reflection. A typo fix does not. But when you have been debugging for an hour and finally understand the root cause, that is worth a reflection checkpoint.
### Step 7: Persist Before Ending

Before ending the session, run the wrap-up ceremony to capture outstanding learnings, decisions, conventions, and tasks:

```
/ctx-wrap-up
```
Ceremony Skills
/ctx-remember at session start and /ctx-wrap-up at session end are ceremony skills: Invoke them as explicit slash commands for completeness, not conversationally.
See Session Ceremonies for details on why these two skills are the exception to the conversational rule.
The skill gathers signal from git diff, recent commits, and the conversation, then proposes structured candidates for your approval. After you select which to keep, it persists them via ctx add and offers /ctx-commit if uncommitted changes remain.
Session transcripts are automatically captured by Claude Code and can be browsed later with ctx journal source and ctx journal source --show.
```
Last session (Feb 7, caching-layer): we implemented the in-memory cache with
TTL expiration and wrote tests for the eviction policy.

You have 3 pending tasks; the biggest one is adding Redis as a backend.

Recent decision: we chose LRU over LFU because the access pattern is mostly
recency-based.

Want to continue with the Redis backend, or tackle something else?
```
You: Let's work on the Redis caching backend.
Agent (picks up the task, begins implementation):
```
Got it. I see the task in TASKS.md: "Add Redis backend for cache layer."

I will start with the connection pool setup. Only touching
`internal/cache/redis/` for now.
```
(The agent works... writes code, runs tests, iterates.)
Agent (after implementing the connection pool and choosing a retry strategy):
```
The connection pool is working. I went with exponential backoff with jitter
for retries instead of fixed intervals; it handles thundering herd better
under reconnection storms. That is a design choice worth recording.

Want me to save it as a decision?
```
You: Yes, go ahead.
Agent:
```
Done. Added to DECISIONS.md: "Exponential backoff with jitter for Redis
reconnection, to avoid thundering herd on pool recovery."
```
(More work happens... the agent implements the cache read/write paths.)
You: What else did we learn during this session?
Agent:
```
A few things worth capturing:

* **Learning**: Redis `SET` with `NX` and `PX` flags handles atomic
  set-if-not-exists with TTL in a single call. No need for a separate
  `EXPIRE` command. We initially did it in two calls and hit a race condition.
* **Learning**: The `go-redis` library's `PoolSize` default is 10 per CPU.
  For our workload, explicit sizing at 20 was more predictable.
* **Task update**: The connection pool subtask is done; the read/write path
  is implemented but needs integration tests.

Want me to persist these?
```
You: Yes. And let's wrap up.
Agent (reflects and persists... all in one flow):
```
Persisted:

* 2 learnings added to LEARNINGS.md
* Marked "connection pool setup" done in TASKS.md
* Added "write integration tests for Redis cache" as a new task

Context is up to date. You are good to close.
```
Notice What Happened
In the above workflow, the user never typed /ctx-reflect or ctx add learning.
The agent moved through Load, Orient, Pick, Work, Commit, and Reflect driven by natural conversation.
\"Let's wrap up\" was enough to trigger the full reflect-and-persist flow.
The agent surfaced persist-worthy items at milestones (after a design choice, after discovering a gotcha) without waiting to be asked.
This is the intended experience.
The commands and skills still exist for when you want precise control, but the agent is a proactive partner in the lifecycle, not a passive executor of slash commands.
## Putting It All Together
Quick-reference checklist for a complete session:
Load: /ctx-remember: load context and confirm readback
Orient: /ctx-status: check file health and token usage
Pick: /ctx-next: choose what to work on
Work: implement, test, iterate (scope with \"only change X\")
Commit: /ctx-commit: commit and capture decisions/learnings
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony
Conversational equivalents: you can drive the same lifecycle with plain language:
| Step | Slash command | Natural language |
| --- | --- | --- |
| Load | /ctx-remember | "Do you remember?" / "What were we working on?" |
| Orient | /ctx-status | "How's our context looking?" |
| Pick | /ctx-next | "What should we work on?" / "Let's do the caching task" |
| Work | -- | "Only change files in internal/cache/" |
| Commit | /ctx-commit | "Commit this" / "Ship it" |
| Reflect | /ctx-reflect | "What did we learn?" / (agent offers at milestones) |
| Wrap up | /ctx-wrap-up | (use the slash command for completeness) |
The agent understands both columns.
In practice, most sessions use a mix:
- explicit commands when you want precision;
- natural language when you want flow and agentic autonomy.
The agent will also initiate steps on its own (particularly \"Reflect\") when it recognizes a milestone.
Short sessions (quick bugfix) might only use: Load, Work, Commit.
Long sessions should Reflect after each major milestone and persist learnings and decisions before ending.
Persist early if context is running low. A hook monitors context capacity and notifies you when it gets high, but do not wait for the notification. If you have been working for a while and have unpersisted learnings, persist proactively.
Browse previous sessions by topic. If you need context from a prior session, ctx journal source --show auth will match by keyword. You do not need to remember the exact date or slug.
Reflection is optional but valuable. You can skip /ctx-reflect for small changes, but always persist learnings and decisions before ending a session where you did meaningful work. These are what the next session loads.
Let the hook handle context loading. The PreToolUse hook runs ctx agent automatically with a cooldown, so context loads on first tool use without you asking. The /ctx-remember prompt at session start is for your benefit (to get a readback), not because the assistant needs it.
The agent is a proactive partner, not a passive tool. A ctx-aware agent follows the Agent Playbook: it watches for milestones (completed tasks, design decisions, discovered gotchas) and offers to persist them without being asked. If you finish a tricky debugging session, it may say \"That root cause is worth saving as a learning. Want me to record it?\" before you think to ask. This is by design.
Not every session needs the full ceremony. Quick investigations, one-off questions, and small fixes unrelated to active project work don't benefit from persistence nudges, ceremony reminders, or knowledge checks. Yet every hook still fires, consuming tokens and attention on work that won't produce learnings or decisions worth capturing.
# Pausing Context Hooks

## TL;DR

| Command | What it does |
| --- | --- |
| `ctx pause` or `/ctx-pause` | Silence all nudge hooks for this session |
| `ctx resume` or `/ctx-resume` | Restore normal hook behavior |
Pause is session-scoped: It only affects the current session. Other sessions (same project, different terminal) are unaffected.
## What Still Fires
Security hooks always run, even when paused:
block-non-path-ctx: prevents ./ctx invocations
block-dangerous-commands: blocks sudo, force push, etc.
```
# 1. Session starts: context loads normally.

# 2. You realize this is a quick task
ctx pause

# 3. Work without interruption: hooks are silent

# 4. Session evolves into real work? Resume first
ctx resume

# 5. Now wrap up normally
# /ctx-wrap-up
```
Resume before wrapping up. If your quick task turns into real work, resume hooks before running /ctx-wrap-up. The wrap-up ceremony needs active hooks to capture learnings properly.
Initial context load is unaffected. The ~8k token startup injection (CLAUDE.md, playbook, constitution) happens before any command runs. Pause only affects hooks that fire during the session.
Use for quick investigations. Debugging a stack trace? Checking a git log? Answering a colleague's question? Pause, do the work, close the session. No ceremony needed.
Don't use for real work. If you're implementing features, fixing bugs, or making decisions: keep hooks active. The nudges exist to prevent context loss.
You're deep in a session and realize: \"I need to refactor the swagger definitions next time.\" You could add a task, but this isn't a work item: it's a note to future-you. You could jot it on the scratchpad, but scratchpad entries don't announce themselves.
How do you leave a message that your next session opens with?
Reminders surface automatically at session start: VERBATIM, every session, until you dismiss them.
## Commands and Skills Used

| Tool | Type | Purpose |
| --- | --- | --- |
| `ctx remind` | CLI command | Add a reminder (default action) |
| `ctx remind list` | CLI command | Show all pending reminders |
| `ctx remind dismiss` | CLI command | Remove a reminder by ID (or `--all`) |
| `/ctx-remind` | Skill | Natural language interface to reminders |

## The Workflow

### Step 1: Leave a Reminder
Tell your agent what to remember, or run it directly:
```
You: "remind me to refactor the swagger definitions"

Agent: [runs ctx remind "refactor the swagger definitions"]
       "Reminder set:
        + [1] refactor the swagger definitions"
```

Or from the terminal:

```
ctx remind "refactor the swagger definitions"
```
### Step 2: Set a Date Gate (Optional)
If the reminder shouldn't fire until a specific date:
```
You: "remind me to check the deploy logs after Tuesday"

Agent: [runs ctx remind "check the deploy logs" --after 2026-02-25]
       "Reminder set:
        + [2] check the deploy logs (after 2026-02-25)"
```
The reminder stays silent until that date, then fires every session.
The agent converts natural language dates (\"tomorrow\", \"next week\", \"after the release on Friday\") to YYYY-MM-DD. If it's ambiguous, it asks.
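Under the hood, the gate is just a date comparison. The sketch below uses hypothetical hard-coded values standing in for `date +%F` and the stored `--after` date; ctx performs this check internally, so you never run it yourself:

```shell
# Sketch of a date gate: a reminder is "due" once the session date
# reaches the --after date. ISO YYYY-MM-DD dates compare cleanly as
# integers once the dashes are stripped.
today="2026-03-01"   # stand-in for $(date +%F)
gate="2026-02-25"    # the reminder's --after date

t=$(printf '%s' "$today" | tr -d '-')
g=$(printf '%s' "$gate" | tr -d '-')

if [ "$t" -ge "$g" ]; then
  echo "due"
else
  echo "not yet due"
fi
```

With `today` earlier than the gate, the same check prints `not yet due`, which matches the `(not yet due)` marker shown at session start.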
### Step 3: Start a New Session

Next session, the reminder appears automatically before anything else:

```
[1] refactor the swagger definitions
[3] review auth token expiry logic
[4] check deploy logs (after 2026-02-25, not yet due)
```
Date-gated reminders that haven't reached their date show (not yet due).
## Using /ctx-remind in a Session

Invoke the /ctx-remind skill, then describe what you want:

```
You: /ctx-remind remind me to update the API docs
You: /ctx-remind what reminders do I have?
You: /ctx-remind dismiss reminder 3
```
| You say (after /ctx-remind) | What the agent does |
| --- | --- |
| "remind me to update the API docs" | `ctx remind "update the API docs"` |
| "remind me next week to check staging" | `ctx remind "check staging" --after 2026-03-02` |
| "what reminders do I have?" | `ctx remind list` |
| "dismiss reminder 3" | `ctx remind dismiss 3` |
| "clear all reminders" | `ctx remind dismiss --all` |

## Reminders vs Scratchpad vs Tasks

| You want to... | Use |
| --- | --- |
| Leave a note that announces itself next session | `ctx remind` |
| Jot down a quick value or sensitive token | `ctx pad` |
| Track work with status and completion | TASKS.md |
| Record a decision or lesson for all sessions | Context files |
Decision guide:
If it should announce itself at session start → ctx remind
If it's a quiet note you'll check manually → ctx pad
If it's a work item you'll mark done → TASKS.md
Reminders Are Sticky Notes, Not Tasks
A reminder has no status, no priority, no lifecycle. It's a message to \"future you\" that fires until dismissed.
Reminders fire every session: Unlike nudges (which throttle to once per day), reminders repeat until you dismiss them. This is intentional: You asked to be reminded.
Date gating is session-scoped, not clock-scoped: --after 2026-02-25 means \"don't show until sessions on or after Feb 25.\" It does not mean \"alarm at midnight on Feb 25.\"
The agent handles date parsing: say "next week" or "after Friday" and it converts the phrase to YYYY-MM-DD. The CLI only accepts the explicit date format.
Reminders are committed to git: They travel with the repo. If you switch machines, your reminders follow.
IDs never reuse: After dismissing reminder 3, the next reminder gets ID 4 (or higher). No confusion from recycled numbers.
Every session creates tombstone files in .context/state/ - small markers that suppress repeat hook nudges (\"already checked context size\", \"already sent persistence reminder\"). Over days and weeks, these accumulate into hundreds of files from long-dead sessions.
The files are harmless individually, but the clutter makes it harder to reason about state, and stale global tombstones can suppress nudges across sessions entirely.
```
ctx system prune --dry-run   # preview what would be removed
ctx system prune             # prune files older than 7 days
ctx system prune --days 1    # more aggressive: keep only today
```

## Commands Used

| Tool | Type | Purpose |
| --- | --- | --- |
| `ctx system prune` | Command | Remove old per-session state files |
| `ctx status` | Command | Quick health overview including state dir |

## Understanding State Files
State files fall into two categories:
Session-scoped (contain a UUID in the filename): Created per-session to suppress repeat nudges. Safe to prune once the session ends.
Global (no UUID): Persist across sessions. ctx system prune preserves these automatically. Some are legitimate state (events.jsonl, memory-import.json); others may be stale tombstones that need manual review.
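The split is easy to see with a quick experiment. This sketch classifies sample filenames by whether they embed a session UUID; the sample names are illustrative, not real ctx tombstones:

```shell
# Classify state files: a session UUID in the name marks it session-scoped;
# everything else is global and preserved by prune.
dir=$(mktemp -d)
touch "$dir/context-checked-3f2a1b9c-0d4e-4f6a-8b1c-2d3e4f5a6b7c" \
      "$dir/events.jsonl" \
      "$dir/backup-reminded"

uuid='[0-9a-f]\{8\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{12\}'
for f in "$dir"/*; do
  if basename "$f" | grep -q "$uuid"; then
    echo "session-scoped: $(basename "$f")"
  else
    echo "global: $(basename "$f")"
  fi
done
rm -r "$dir"
```

The same UUID pattern drives the manual review command shown below in Step 3.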
```
ctx system prune          # older than 7 days
ctx system prune --days 3 # older than 3 days
ctx system prune --days 1 # older than 1 day (aggressive)
```

### Step 3: Review Global Files
After pruning, check what prune preserved:
```
ls .context/state/ | grep -v '[0-9a-f]\{8\}-[0-9a-f]\{4\}'
```
Legitimate global files (keep):
events.jsonl - event log
memory-import.json - import tracking state
Stale global tombstones (safe to delete):
Files like backup-reminded, ceremony-reminded, version-checked with no session UUID are one-shot markers. If they are from a previous session, they are stale and can be removed manually.
Pruning active sessions is safe but noisy: If you prune a file belonging to a still-running session, the corresponding hook will re-fire its nudge on the next prompt. Minor UX annoyance, not data loss.
No context files are stored in state: The state directory contains only tombstones, counters, and diagnostic data. Nothing in .context/state/ affects your decisions, learnings, tasks, or conventions.
Test artifacts sneak in: Files like context-check-statstest or heartbeat-unknown are artifacts from development or testing. They lack UUIDs so prune preserves them. Delete manually.
Detecting and Fixing Drift: broader context maintenance including drift detection and archival
Troubleshooting: diagnostic workflow using ctx doctor and event logs
CLI Reference: system: full flag documentation for ctx system prune and related commands
# Auditing System Hooks

## The Problem
ctx runs 14 system hooks behind the scenes: nudging your agent to persist context, warning about resource pressure, gating commits on QA. But these hooks are invisible by design. You never see them fire. You never know if they stopped working.
How do you verify your hooks are actually running, audit what they do, and get alerted when they go silent?
## TL;DR

```
ctx system check-resources # run a hook manually
ls -la .context/logs/      # check hook execution logs
ctx notify setup           # get notified when hooks fire
```
Or ask your agent: \"Are our hooks running?\"
## Commands and Skills Used

| Tool | Type | Purpose |
| --- | --- | --- |
| `ctx system <hook>` | CLI command | Run a system hook manually |
| `ctx system resources` | CLI command | Show system resource status |
| `ctx system stats` | CLI command | Stream or dump per-session token stats |
| `ctx notify setup` | CLI command | Configure webhook for audit trail |
| `ctx notify test` | CLI command | Verify webhook delivery |
| `.ctxrc` `notify.events` | Configuration | Subscribe to relay for full hook audit |
| `.context/logs/` | Log files | Local hook execution ledger |

## What Are System Hooks?
System hooks are plumbing commands that ctx registers with your AI tool (Claude Code, Cursor, etc.) via the plugin's hooks.json. They fire automatically at specific events during your AI session:
| Event | When | Hooks |
| --- | --- | --- |
| UserPromptSubmit | Before the agent sees your prompt | 10 check hooks + heartbeat |
| PreToolUse | Before the agent uses a tool | block-non-path-ctx, qa-reminder |
| PostToolUse | After a tool call succeeds | post-commit |
You never run these manually. Your AI tool runs them for you: That's the point.
## The Complete Hook Catalog

### Prompt-Time Checks (UserPromptSubmit)
These fire before every prompt, but most are throttled to avoid noise.
#### check-context-size: Context Capacity Warning
What: Adaptive prompt counter. Silent for the first 15 prompts, then nudges with increasing frequency (every 5th, then every 3rd).
Why: Long sessions lose coherence. The nudge reminds both you and the agent to persist context before the window fills up.
Output: VERBATIM relay box with prompt count.
```
┌─ Context Checkpoint (prompt #20) ────────────────
│ This session is getting deep. Consider wrapping up
│ soon. If there are unsaved learnings, decisions, or
│ conventions, now is a good time to persist them.
│ ⏱ Context window: ~45k tokens (~22% of 200k)
└──────────────────────────────────────────────────
```
Stats: Every prompt records token usage to .context/state/stats-{session}.jsonl. Monitor live with ctx system stats --follow or query with ctx system stats --json. Stats are recorded even during wrap-up suppression (event: suppressed).
Billing guard: When billing_token_warn is set in .ctxrc, a one-shot warning fires if session tokens exceed the threshold. This warning is independent of all other triggers - it fires even during wrap-up suppression.
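The adaptive cadence can be sketched as a pure function of the prompt count. The exact boundaries below are assumptions inferred from the description (silent through prompt 15, then every 5th, every 3rd past 30), not the actual implementation:

```shell
# Should the capacity nudge fire at prompt n? (assumed cadence)
should_nudge() {
  n=$1
  if [ "$n" -le 15 ]; then
    return 1                 # early session: stay silent
  elif [ "$n" -le 30 ]; then
    [ $((n % 5)) -eq 0 ]     # mid session: every 5th prompt
  else
    [ $((n % 3)) -eq 0 ]     # deep session: every 3rd prompt
  fi
}

for n in 10 20 22 30 33; do
  if should_nudge "$n"; then echo "prompt $n: nudge"; else echo "prompt $n: silent"; fi
done
```

The point of the shape is the ramp: no noise while the session is young, increasing pressure as the context window fills.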
#### check-persistence: Context Staleness Nudge
What: Tracks when .context/*.md files were last modified. If too many prompts pass without a write, nudges the agent to persist.
Why: Sessions produce insights that evaporate if not recorded. This catches the \"we talked about it but never wrote it down\" failure mode.
Output: VERBATIM relay after 20+ prompts without a context file change.
```
┌─ Persistence Checkpoint (prompt #20) ───────────
│ No context files updated in 20+ prompts.
│ Have you discovered learnings, made decisions,
│ established conventions, or completed tasks
│ worth persisting?
│
│ Run /ctx-wrap-up to capture session context.
└──────────────────────────────────────────────────
```

#### check-ceremonies: Session Ritual Adoption
What: Scans your last 3 journal entries for /ctx-remember and /ctx-wrap-up usage. Nudges once per day if missing.
Why: Session ceremonies are the highest-leverage habit in ctx. This hook bootstraps the habit until it becomes automatic.
Output: Tailored nudge depending on which ceremony is missing.
#### check-journal: Unimported Session Reminder
What: Detects unimported Claude Code sessions and unenriched journal entries. Fires once per day.
Why: Exported sessions become searchable history. Unenriched entries lack metadata for filtering. Both decay in value over time.
Output: VERBATIM relay with counts and exact commands.
```
┌─ Journal Reminder ─────────────────────────────
│ You have 3 new session(s) not yet exported.
│ 5 existing entries need enrichment.
│
│ Export and enrich:
│ ctx journal import --all
│ /ctx-journal-enrich-all
└────────────────────────────────────────────────
```

#### check-resources: System Resource Pressure
What: Monitors memory, swap, disk, and CPU load. Only fires at DANGER severity (memory >= 90%, swap >= 75%, disk >= 95%, load >= 1.5x CPU count).
Why: Resource exhaustion mid-session can corrupt work. This provides early warning to persist and exit.
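The severity rule reduces to four comparisons. A sketch with hypothetical sample readings (percentages as integers; load scaled by 100 to avoid floating point in shell):

```shell
# DANGER if any threshold is crossed: mem >= 90%, swap >= 75%,
# disk >= 95%, or load >= 1.5x the CPU count.
mem_pct=92; swap_pct=40; disk_pct=80
load_x100=400   # load average 4.00, scaled by 100
cpus=8

sev="ok"
if [ "$mem_pct" -ge 90 ] || [ "$swap_pct" -ge 75 ] || \
   [ "$disk_pct" -ge 95 ] || [ "$load_x100" -ge $((cpus * 150)) ]; then
  sev="DANGER"
fi
echo "severity: $sev"   # memory at 92% crosses the 90% line
```

Any single reading over its line is enough; the hook stays silent below DANGER rather than grading warnings.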
#### check-knowledge: Knowledge File Growth
What: Counts entries in LEARNINGS.md, DECISIONS.md, and lines in CONVENTIONS.md. Fires once per day when thresholds are exceeded.
Why: Large knowledge files dilute agent context. 35 learnings compete for attention; 15 focused ones get applied. Thresholds are configurable in .ctxrc.
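A sketch of the same kind of check, assuming one `## ` heading per entry; the real entry format and the hook's internals may differ, and the threshold here is a hypothetical `.ctxrc` value:

```shell
# Count entries in a knowledge file and compare against a threshold.
f=$(mktemp)
printf '## Learning 1\nbody\n## Learning 2\nbody\n## Learning 3\nbody\n' > "$f"

count=$(grep -c '^## ' "$f")
threshold=2   # hypothetical .ctxrc value

if [ "$count" -gt "$threshold" ]; then
  echo "nudge: $count entries exceed threshold of $threshold"
else
  echo "silent"
fi
rm -f "$f"
```

When the nudge fires, the fix is curation (archive or merge entries), not raising the threshold.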
#### check-version: Binary/Plugin Version Drift
What: Compares the ctx binary version against the plugin version. Fires once per day. Also checks encryption key age for rotation nudge.
Why: Version drift means hooks reference features the binary doesn't have. The key rotation nudge prevents indefinite key reuse.
#### check-reminders: Pending Reminder Relay
What: Reads .context/reminders.json and surfaces any due reminders via VERBATIM relay. No throttle: fires every session until dismissed.
Why: Reminders are sticky notes to future-you. Unlike nudges (which throttle to once per day), reminders repeat deliberately until the user dismisses them.
#### check-freshness: Technology Constant Staleness

What: Stats each file listed under `freshness_files` in .ctxrc and warns if any haven't been modified in over 6 months. Daily throttle. Silent when no files are configured (opt-in via .ctxrc).
Why: Model capabilities evolve - token budgets, attention limits, and context window sizes that were accurate 6 months ago may no longer reflect best practices. This hook reminds you to review and touch the file to confirm values are still current.
Config (.ctxrc):
```
freshness_files:
  - path: config/thresholds.yaml
    desc: Model token limits and batch sizes
    review_url: https://docs.example.com/limits  # optional
```
Each entry has a path (relative to project root), desc (what constants live there), and optional review_url (where to check current values). When review_url is set, the nudge includes \"Review against: {url}\". When absent, just \"Touch the file to mark it as reviewed.\"
Output: VERBATIM relay listing stale files, silent otherwise.
```
┌─ Technology Constants Stale ──────────────────────
│ config/thresholds.yaml (210 days ago)
│ - Model token limits and batch sizes
│ Review against: https://docs.example.com/limits
│ Touch each file to mark it as reviewed.
└───────────────────────────────────────────────────
```
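The staleness test itself is a one-liner over file modification times. A sketch approximating 6 months as 180 days; `touch -t` stamps a hypothetical file well in the past so the branch is predictable:

```shell
# Flag a file as stale when its mtime is more than ~180 days old.
f=$(mktemp)
touch -t 202001010000 "$f"   # pretend it was last reviewed in 2020

if [ -n "$(find "$f" -mtime +180)" ]; then
  echo "stale: review, then touch the file to mark it reviewed"
else
  echo "fresh"
fi
rm -f "$f"
```

A plain `touch` on the real file resets the clock, which is exactly what "touch the file to mark it as reviewed" means.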
#### check-map-staleness: Architecture Map Drift
What: Checks whether map-tracking.json is older than 30 days and there are commits touching internal/ since the last map refresh. Daily throttle prevents repeated nudges.
Why: Architecture documentation drifts silently as code evolves. This hook detects structural changes that the map hasn't caught up with and suggests running /ctx-architecture to refresh.
Output: VERBATIM relay when stale and modules changed, silent otherwise.
```
┌─ Architecture Map Stale ────────────────────────────
│ ARCHITECTURE.md hasn't been refreshed since 2026-01-15
│ and there are commits touching 12 modules.
│ /ctx-architecture keeps architecture docs drift-free.
│
│ Want me to run /ctx-architecture to refresh?
└─────────────────────────────────────────────────────
```

#### heartbeat: Session Heartbeat Webhook
What: Fires on every prompt. Sends a webhook notification with prompt count, session ID, context modification status, and token usage telemetry. Never produces stdout.
Why: Other hooks only send webhooks when they \"speak\" (nudge/relay). When silent, you have no visibility into session activity. The heartbeat provides a continuous session-alive signal with token consumption data for observability dashboards or liveness monitoring.
Token fields (tokens, context_window, usage_pct) are included when usage data is available from the session JSONL file.
### Tool-Time Hooks (PreToolUse / PostToolUse)

#### block-non-path-ctx: PATH Enforcement (Hard Gate)
What: Blocks any Bash command that invokes ./ctx, ./dist/ctx, go run ./cmd/ctx, or an absolute path to ctx. Only PATH invocations are allowed.
Why: Enforces CONSTITUTION.md's invocation invariant. Running a dev-built binary in production context causes version confusion and silent behavior drift.
Output: Block response (prevents the tool call):
```
{"decision": "block", "reason": "Use 'ctx' from PATH, not './ctx'..."}
```
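The matcher behind this gate can be sketched as a pattern check over the command string. The patterns are assumed from the description above; the real hook's matching is likely stricter:

```shell
# Decide whether a Bash command uses a non-PATH ctx invocation.
check_cmd() {
  case "$1" in
    ./ctx*|./dist/ctx*|*"go run ./cmd/ctx"*|/*ctx*)
      echo "block" ;;   # dev build, source run, or absolute path
    *)
      echo "allow" ;;   # bare PATH invocation
  esac
}

check_cmd "./ctx status"          # prints "block"
check_cmd "go run ./cmd/ctx add"  # prints "block"
check_cmd "ctx status"            # prints "allow"
```

In the real hook, a "block" verdict is wrapped in the JSON decision object shown above instead of a bare string.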
#### qa-reminder: Pre-Commit QA Gate
What: Fires on every Edit tool use. Reminds the agent to lint and test the entire project before committing.
Why: Agents tend to \"I'll test later\" and then commit untested code. Repetition is intentional: the hook reinforces the habit on every edit, not just before commits.
Output: Agent directive with hard QA gate instructions.
#### post-commit: Context Capture After Commit
What: Fires after any git commit (excludes --amend). Prompts the agent to offer context capture (decision? learning?) and suggest running lints/tests before pushing.
Why: Commits are natural reflection points. The nudge converts mechanical git operations into context-capturing opportunities.
## Auditing Hooks via the Local Event Log

If you don't need an external audit trail, enable the local event log for a self-contained record of hook activity:

```
# .ctxrc
event_log: true
```

Once enabled, every hook that fires writes an entry to .context/state/events.jsonl. Query it with ctx system events:

```
ctx system events                     # last 50 events
ctx system events --hook qa-reminder  # filter by hook
ctx system events --session <id>      # filter by session
ctx system events --json | jq '.'     # raw JSONL for processing
```
The event log is local, queryable, and doesn't require any external service. For a full diagnostic workflow combining event logs with structural health checks, see Troubleshooting.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-webhooks","level":2,"title":"Auditing Hooks via Webhooks","text":"
The most powerful audit setup pipes all hook output to a webhook, giving you a real-time external record of what your agent is being told.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-1-set-up-the-webhook","level":3,"title":"Step 1: Set Up the Webhook","text":"
ctx notify setup\n# Enter your webhook URL (Slack, Discord, ntfy.sh, IFTTT, etc.)\n
See Webhook Notifications for service-specific setup.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-2-subscribe-to-relay-events","level":3,"title":"Step 2: Subscribe to relay Events","text":"
# .ctxrc\nnotify:\n events:\n - relay # all hook output: VERBATIM relays, directives, blocks\n - nudge # just the user-facing VERBATIM relays\n
The relay event fires for every hook that produces output. This includes:
| Hook | Event sent |
| --- | --- |
| check-context-size | relay + nudge |
| check-persistence | relay + nudge |
| check-ceremonies | relay + nudge |
| check-journal | relay + nudge |
| check-resources | relay + nudge |
| check-knowledge | relay + nudge |
| check-version | relay + nudge |
| check-reminders | relay + nudge |
| check-freshness | relay + nudge |
| check-map-staleness | relay + nudge |
| heartbeat | heartbeat only |
| block-non-path-ctx | relay only |
| post-commit | relay only |
| qa-reminder | relay only |
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-3-cross-reference","level":3,"title":"Step 3: Cross-Reference","text":"
With relay enabled, your webhook receives a JSON payload every time a hook fires:
{\n \"event\": \"relay\",\n \"message\": \"check-persistence: No context updated in 20+ prompts\",\n \"session_id\": \"b854bd9c\",\n \"timestamp\": \"2026-02-22T14:30:00Z\",\n \"project\": \"my-project\"\n}\n
This creates an external audit trail independent of the agent. You can now cross-verify: did the agent actually relay the checkpoint the hook told it to relay?
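This cross-check can be scripted. A minimal sketch, assuming you export your webhook receiver's payloads to a `webhook.jsonl` file and capture the agent's relayed lines in `relayed.txt` (both file names are hypothetical; sample data stands in for real captures here):

```shell
# Hypothetical captures: webhook.jsonl holds the payloads your receiver
# logged; relayed.txt holds what the agent actually surfaced in chat.
workdir=$(mktemp -d)
printf '%s\n' '{"event":"relay","message":"check-persistence: No context updated in 20+ prompts"}' > "$workdir/webhook.jsonl"
printf '%s\n' 'check-context-size: context at 6200 of 8000 tokens' > "$workdir/relayed.txt"

# Extract each payload's message field and flag any the agent never relayed
# (jq -r '.message' works equally well if you have jq installed)
missing=$(sed -n 's/.*"message":"\([^"]*\)".*/\1/p' "$workdir/webhook.jsonl" |
  while IFS= read -r msg; do
    grep -qF "$msg" "$workdir/relayed.txt" || printf 'NOT RELAYED: %s\n' "$msg"
  done)
echo "$missing"
```

Any `NOT RELAYED` line is a candidate compliance gap worth raising with the agent.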
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#verifying-hooks-actually-fire","level":2,"title":"Verifying Hooks Actually Fire","text":"
Hooks are invisible. An invisible thing that breaks is indistinguishable from an invisible thing that never existed. Three verification methods, from simplest to most robust:
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-1-ask-the-agent","level":3,"title":"Method 1: Ask the Agent","text":"
The simplest check. After a few prompts into a session:
\"Did you receive any hook output this session? Print the last\ncontext checkpoint or persistence nudge you saw.\"\n
The agent should be able to recall recent hook output from its context window. If it says \"I haven't received any hook output\", either:
The hooks aren't firing (check installation);
The session is too short (hooks throttle early);
The hooks fired but the agent absorbed them silently.
Limitation: You are trusting the agent to report accurately. Agents sometimes confabulate or miss context. Use this as a quick smoke test, not definitive proof.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-2-check-the-webhook-trail","level":3,"title":"Method 2: Check the Webhook Trail","text":"
If you have relay events enabled, check your webhook receiver. Every hook that fires sends a timestamped notification. No notification = no fire.
This is the ground truth. The webhook is called directly by the ctx binary, not by the agent. The agent cannot fake, suppress, or modify webhook deliveries.
Compare what the webhook received against what the agent claims to have relayed. Discrepancies mean the agent is absorbing nudges instead of surfacing them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-3-read-the-local-logs","level":3,"title":"Method 3: Read the Local Logs","text":"
Hooks that support logging write to .context/logs/:
Logs are append-only and written by the ctx binary, not the agent.
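A quick way to eyeball log freshness from the terminal. The demo below writes to a throwaway directory so it runs anywhere; in a real project, point the same two commands at `.context/logs/` (the entry format shown is illustrative only):

```shell
# Throwaway stand-in for .context/logs/
logs=$(mktemp -d)
printf '%s\n' '2026-02-22T14:30:00Z b854bd9c fired' >> "$logs/check-context-size.log"

ls -t "$logs" | head                       # newest log files first
tail -n 5 "$logs/check-context-size.log"   # recent entries for one hook
```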
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#detecting-silent-hook-failures","level":2,"title":"Detecting Silent Hook Failures","text":"
The hardest failure mode: hooks that stop firing without error. The plugin config changes, a binary update drops a hook, or a PATH issue silently breaks execution. Nothing errors: The hook just never runs.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-staleness-signal","level":3,"title":"The Staleness Signal","text":"
If .context/logs/check-context-size.log has no entries newer than 5 days but you've been running sessions daily, something is wrong. The absence of evidence is evidence of absence: but only if you control for inactivity.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#false-positive-protection","level":3,"title":"False Positive Protection","text":"
A naive \"hooks haven't fired in N days\" alert fires incorrectly when you simply haven't used ctx. The correct check needs two inputs:
Last hook fire time: from .context/logs/ or webhook history
Last session activity: from journal entries or ctx journal source
If sessions are happening but hooks aren't firing, that's a real problem. If neither sessions nor hooks are happening, that's a vacation.
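That two-input rule can be sketched as a small script. The directory layout and journal location are assumptions; substitute whatever records session activity in your setup:

```shell
# Alert only when sessions are active but hooks are silent.
# $1: hook log dir, $2: session/journal dir, $3: lookback in days
check_staleness() {
  hook=$(find "$1" -type f -mtime -"$3" 2>/dev/null | head -n 1)
  session=$(find "$2" -type f -mtime -"$3" 2>/dev/null | head -n 1)
  if [ -n "$session" ] && [ -z "$hook" ]; then
    echo "ALERT: sessions active but no hook fired in $3 days"
  elif [ -z "$session" ]; then
    echo "OK: no recent sessions either, likely just inactivity"
  else
    echo "OK: hooks firing normally"
  fi
}

# Demo: fresh journal entry, empty hook logs -> the real-problem case
# prints: ALERT: sessions active but no hook fired in 5 days
demo=$(mktemp -d)
mkdir -p "$demo/logs" "$demo/journal"
touch "$demo/journal/2026-02-22.md"
check_staleness "$demo/logs" "$demo/journal" 5
```

Run it from cron or CI with `.context/logs` and your journal directory as arguments.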
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-to-check","level":3,"title":"What to Check","text":"
When you suspect hooks aren't firing:
# 1. Verify the plugin is installed\nls ~/.claude/plugins/\n\n# 2. Check hook registration\ncat ~/.claude/plugins/ctx/hooks.json | head -20\n\n# 3. Run a hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-context-size\n\n# 4. Check for PATH issues\nwhich ctx\nctx --version\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tips","level":2,"title":"Tips","text":"
Start with nudge, graduate to relay: The nudge event covers user-facing VERBATIM relays. Add relay when you want full visibility into agent directives and hard gates.
Webhooks are your trust anchor: The agent can ignore a nudge, but it can't suppress the webhook. If the webhook fired and the agent didn't relay, you have proof of a compliance gap.
Hooks are throttled by design: Most check hooks fire once per day or use adaptive frequency. Don't expect a notification every prompt: Silence usually means the throttle is working, not that the hook is broken.
Daily markers live in .context/state/: Throttle files are stored in .context/state/ alongside other project-scoped state. If you need to force a hook to re-fire during testing, delete the corresponding marker file.
The QA reminder is intentionally noisy: Unlike other hooks, qa-reminder fires on every Edit call with no throttle. This is deliberate: Commit quality degrades when the reminder fades from salience.
Log files are safe to commit: .context/logs/ contains only timestamps, session IDs, and status keywords. No secrets, no code.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#next-up","level":2,"title":"Next Up","text":"
Detecting and Fixing Drift →: Keep context files accurate as your codebase evolves.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Customizing Hook Messages: override what hooks say without changing what they do
Webhook Notifications: setting up and configuring the webhook system
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Detecting and Fixing Drift: structural checks that complement runtime hook auditing
CLI Reference: full ctx system command reference
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/task-management/","level":1,"title":"Tracking Work Across Sessions","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-problem","level":2,"title":"The Problem","text":"
You have work that spans multiple sessions. Tasks get added during one session, partially finished in another, and completed days later.
Without a system, follow-up items fall through the cracks, priorities drift, and you lose track of what was done versus what still needs doing. TASKS.md grows cluttered with completed checkboxes that obscure the remaining work.
How do you manage work items that span multiple sessions without losing context?
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tldr","level":2,"title":"TL;DR","text":"
Read on for the full workflow and conversational patterns.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx add task Command Add a new task to TASKS.mdctx task complete Command Mark a task as done by number or text ctx task snapshot Command Create a point-in-time backup of TASKS.mdctx task archive Command Move completed tasks to archive file /ctx-add-task Skill AI-assisted task creation with validation /ctx-archive Skill AI-guided archival with safety checks /ctx-next Skill Pick what to work on based on priorities","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-1-add-tasks-with-priorities","level":3,"title":"Step 1: Add Tasks with Priorities","text":"
Every piece of follow-up work gets a task. Use ctx add task from the terminal or /ctx-add-task from your AI assistant. Tasks should start with a verb and be specific enough that someone unfamiliar with the session could act on them.
# High-priority bug found during code review\nctx add task \"Fix race condition in session cooldown\" --priority high\n\n# Medium-priority feature work\nctx add task \"Add --format json flag to ctx status for CI integration\" --priority medium\n\n# Low-priority cleanup\nctx add task \"Remove deprecated --raw flag from ctx load\" --priority low\n
The /ctx-add-task skill validates your task before recording it. It checks that the description is actionable, not a duplicate, and specific enough for someone else to pick up.
If you say \"fix the bug,\" it will ask you to clarify which bug and where.
Tasks Are Often Created Proactively
In practice, many tasks are created proactively by the agent rather than by explicit CLI commands.
After completing a feature, the agent will often identify follow-up work (tests, docs, edge cases, error handling) and offer to add it as tasks.
You do not need to dictate ctx add task commands; the agent picks up on work context and suggests tasks naturally.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-2-organize-with-phase-sections","level":3,"title":"Step 2: Organize with Phase Sections","text":"
Tasks live in phase sections inside TASKS.md.
Phases provide logical groupings that preserve order and enable replay.
A task does not move between sections. It stays in its phase permanently, and status is tracked via checkboxes and inline tags.
## Phase 1: Core CLI\n\n- [x] Implement ctx add command `#done:2026-02-01-143022`\n- [x] Implement ctx task complete command `#done:2026-02-03-091544`\n- [ ] Add --section flag to ctx add task `#priority:medium`\n\n## Phase 2: AI Integration\n\n- [ ] Implement ctx agent cooldown `#priority:high` `#in-progress`\n- [ ] Add ctx watch XML parsing `#priority:medium`\n - Blocked by: Need to finalize agent output format\n\n## Backlog\n\n- [ ] Performance optimization for large TASKS.md files `#priority:low`\n- [ ] Add metrics dashboard to ctx status `#priority:deferred`\n
Use --section when adding a task to a specific phase:
ctx add task \"Add ctx watch XML parsing\" --priority medium --section \\\n \"Phase 2: AI Integration\"\n
Without --section, the task is inserted before the first unchecked task in TASKS.md.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
At the start of a session, or after finishing a task, use /ctx-next to get prioritized recommendations.
The skill reads TASKS.md, checks recent sessions, and ranks candidates using explicit priority, blocking status, in-progress state, momentum from recent work, and phase order.
You can also ask naturally: \"what should we work on?\" or \"what's the highest priority right now?\"
/ctx-next\n
The output looks like this:
**1. Implement ctx agent cooldown** `#priority:high`\n\n Still in-progress from yesterday's session. The tombstone file approach is\n half-built. Finishing is cheaper than context-switching.\n\n**2. Add --section flag to ctx add task** `#priority:medium`\n\n Last Phase 1 item. Quick win that unblocks organized task entry.\n\n---\n\n*Based on 8 pending tasks across 3 phases.\n\nLast session: agent-cooldown (2026-02-06).*\n
In-progress tasks almost always come first:
Finishing existing work takes priority over starting new work.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-4-complete-tasks","level":3,"title":"Step 4: Complete Tasks","text":"
When a task is done, mark it complete by number or partial text match:
# By task number (as shown in TASKS.md)\nctx task complete 3\n\n# By partial text match\nctx task complete \"agent cooldown\"\n
The task's checkbox changes from [ ] to [x] and a #done timestamp is added. Tasks are never deleted: they stay in their phase section so history is preserved.
Be Conversational
You rarely need to run ctx task complete yourself during an interactive session.
When you say something like \"the rate limiter is done\" or \"we finished that,\" the agent marks the task complete and moves on to suggesting what is next.
The CLI commands are most useful for manual housekeeping, scripted workflows, or when you want precision.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-5-snapshot-before-risky-changes","level":3,"title":"Step 5: Snapshot Before Risky Changes","text":"
Before a major refactor or any change that might break things, snapshot your current task state. This creates a copy of TASKS.md in .context/archive/ without modifying the original.
# Default snapshot\nctx task snapshot\n\n# Named snapshot (recommended before big changes)\nctx task snapshot \"before-refactor\"\n
This creates a file like .context/archive/tasks-before-refactor-2026-02-08-1430.md. If the refactor goes sideways and you need to confirm what the task state looked like before you started, the snapshot is there.
Snapshots are cheap: Take them before any change you might want to undo or review later.
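When you do need that confirmation, a plain diff against the snapshot answers it. A sketch with demo files standing in for the real paths (`.context/archive/tasks-<name>-<timestamp>.md` and `.context/TASKS.md`):

```shell
# Demo stand-ins for the snapshot and the live TASKS.md
demo=$(mktemp -d)
printf -- '- [ ] Implement cooldown\n- [ ] Add --section flag\n' > "$demo/snapshot.md"
printf -- '- [x] Implement cooldown\n' > "$demo/current.md"

# '<' lines show the pre-refactor state, '>' lines the current file
diff "$demo/snapshot.md" "$demo/current.md" || true
```

Here the diff immediately shows that the `--section` task went missing during the rewrite.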
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-6-archive-when-tasksmd-gets-cluttered","level":3,"title":"Step 6: Archive When TASKS.md Gets Cluttered","text":"
After several sessions, TASKS.md accumulates completed tasks that make it hard to see what is still pending.
Use ctx task archive to move all [x] items to a timestamped archive file.
Start with a dry run to preview what will be moved:
ctx task archive --dry-run\n
Then archive:
ctx task archive\n
Completed tasks move to .context/archive/tasks-2026-02-08.md. Phase headers are preserved in the archive for traceability. Pending tasks ([ ]) remain in TASKS.md.
The /ctx-archive skill adds two safety checks before archiving: it verifies that completed tasks are genuinely cluttering the view and that nothing was marked [x] prematurely.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#letting-the-agent-drive","level":2,"title":"Letting the Agent Drive","text":"
The steps above show the CLI commands for task management.
In practice, most task management happens conversationally:
An agent that has loaded the context files does not need you to type ctx add task. It tracks work naturally and offers the right operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#use-natural-language-instead-of-commands","level":3,"title":"Use Natural Language Instead of Commands","text":"
These conversational prompts replace explicit commands during interactive sessions:
| Instead of typing... | Just say... |
| --- | --- |
| ctx add task \"Write tests for X\" | \"We should add tests for this: track that?\" |
| /ctx-next | \"What should we work on?\" |
| ctx task complete \"rate limiting\" | \"The rate limiter is done, what's next?\" |
| ctx task archive | \"TASKS.md is getting long, can you clean it up?\" |
| ctx add task ... && ctx add task ... | \"Add follow-ups for what we just built.\" |
The agent translates these into the right ctx operations behind the scenes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-creates-tasks-proactively","level":3,"title":"The Agent Creates Tasks Proactively","text":"
After completing a piece of work, the agent does not wait for you to dictate follow-up items. It identifies them from the work context and offers:
**Agent**: \"I've finished implementing the rate limiter middleware. I noticed\nthree follow-up items:\n\n1. Write integration tests for the rate limiter (the unit tests pass but\n there's no end-to-end coverage)\n2. Add rate limit headers to API responses (X-RateLimit-Remaining, etc.)\n3. Document the rate limit configuration in the ops runbook\n\nWant me to add these as tasks?\"\n
This is how an agent behaves once it has read the Agent Playbook.
The playbook's \"Work, Reflect, Persist\" cycle means the agent asks itself after every milestone: Did I just complete a task? Did new work emerge?
When the answer is yes, it acts.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#asking-the-agent-what-to-capture","level":3,"title":"Asking the Agent What to Capture","text":"
You don't always know what's worth persisting.
Instead of reviewing the session yourself and typing specific commands, ask the agent to do the thinking:
| You say | What happens |
| --- | --- |
| \"What tasks came out of this session?\" | Agent reviews the work, identifies follow-ups, and offers to add them |
| \"Are there any tasks, learnings, or decisions worth adding?\" | Agent scans the full session for all three types and proposes a batch |
| \"What should we remember from today?\" | Agent surfaces everything persist-worthy: not just tasks |
| \"Anything we should track before we move on?\" | Agent checks for loose ends specific to the work just completed |
This is especially powerful after long or exploratory sessions where multiple threads were touched.
The agent has the full conversation in context and can spot follow-ups you might miss:
You: \"Are there any tasks, learnings, or decisions worth adding\n from this session?\"\n\nAgent: \"Yes. Here's what I'd suggest:\n\n Tasks:\n - Add retry logic to the webhook delivery (we discussed it when\n the timeout test failed but never tracked it)\n - Update the API docs to reflect the new rate limit headers\n\n Learnings:\n - net/http's default client has no timeout: we burned 15 minutes\n on that before adding an explicit 30s timeout\n\n Decisions:\n - We chose token-bucket over sliding-window for rate limiting\n because it handles bursts better and the library is simpler\n\n Want me to add all of these?\"\n\nYou: \"All of them.\"\n
The agent fills in structured fields from what it observed during the session. You get the same quality entries as if you typed verbose commands, without needing to remember what happened or how to format it.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-completes-tasks-and-moves-on","level":3,"title":"The Agent Completes Tasks and Moves On","text":"
When you confirm that work is finished, or when the agent can see that it is, it marks the task done and suggests the next step:
You: \n\"The cooldown logic is working now.\"\n\nAgent: \n\"Marked 'Implement ctx agent cooldown' as done. Your highest-\npriority remaining task is 'Add --section flag to ctx add task'.\n\nIt's the last item in Phase 1 and a quick win. Want to start on that?\"\n
The agent chains completion into next-task selection.
One sentence from you triggers two operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#end-of-session-capture","level":3,"title":"End-of-Session Capture","text":"
At the end of a session, the agent reviews what happened and catches loose ends:
You: \n\"I think we're good for today.\"\n\nAgent: \n\"Before we wrap up: during the middleware work you mentioned\nwanting to add metrics for rate limit hits. That never made it into TASKS.md.\n\nWant me to add it as a task so it's there next session?\"\n
This is the proactive persistence model at work.
The agent monitors the conversation for mentioned-but-not-tracked work and surfaces it before context is lost.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#a-conversational-session-example","level":3,"title":"A Conversational Session Example","text":"
Here is what task management looks like in a real session when the agent is driving:
You: What's left to do?\nAgent: You have 5 pending tasks across 2 phases. The highest priority is\n \"Implement ctx agent cooldown\": it's marked in-progress from\n yesterday and half-built. Want to pick that up?\n\nYou: Yeah, let's finish it.\n [... 20 minutes of implementation work ...]\n\nYou: That's working. Tests pass.\nAgent: Nice. I'll mark \"Implement ctx agent cooldown\" as done.\n I noticed two things from the implementation:\n 1. The tombstone file cleanup should happen on a timer, not just\n at startup. Want me to add that as a task?\n 2. We learned that file locks don't work reliably on NFS. I'll\n add that to LEARNINGS.md.\n\nYou: Yes to both. What's next?\nAgent: Added the cleanup task as high priority. Next up is \"Add --section\n flag to ctx add task\": last item in Phase 1. It should be\n straightforward since the flag parsing pattern already exists\n in the codebase. Ready?\n
It's All Conversational
Notice what did not happen: The user never typed a ctx command.
The agent handled task completion, follow-up creation, learning capture, and next-task selection: all from natural conversation.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
# Add a task\nctx add task \"Implement rate limiting for API endpoints\" --priority high\n\n# Add to a specific phase\nctx add task \"Write integration tests for rate limiter\" --section \"Phase 2\"\n\n# See what to work on\n# (from AI assistant) /ctx-next\n\n# Mark done by text\nctx task complete \"rate limiting\"\n\n# Mark done by number\nctx task complete 5\n\n# Snapshot before a risky refactor\nctx task snapshot \"before-middleware-rewrite\"\n\n# Archive completed tasks when the list gets long\nctx task archive --dry-run # preview first\nctx task archive # then archive\n
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tips","level":2,"title":"Tips","text":"
Start tasks with a verb: \"Add,\" \"Fix,\" \"Implement,\" \"Investigate\": not just a topic like \"Authentication.\"
Include the why in the task description. Future sessions lack the context of why you added the task. \"Add rate limiting\" is worse than \"Add rate limiting to prevent abuse on the public API after the load test showed 10x traffic spikes.\"
Use #in-progress sparingly. Only one or two tasks should carry this tag at a time. If everything is in-progress, nothing is.
Snapshot before, not after. The point of a snapshot is to capture the state before a change, not to celebrate what you just finished.
Archive regularly. Once completed tasks outnumber pending ones, it is time to archive. A clean TASKS.md helps both you and your AI assistant focus.
Never delete tasks. Mark them [x] (completed) or [-] (skipped with a reason). Deletion breaks the audit trail.
Trust the agent's task instincts. When the agent suggests follow-up items after completing work, it is drawing on the full context of what just happened.
Conversational prompts beat commands in interactive sessions. Saying \"what should we work on?\" is faster and more natural than running /ctx-next. Save explicit commands for scripts, CI, and unattended runs.
Let the agent chain operations. A single statement like \"that's done, what's next?\" can trigger completion, follow-up identification, and next-task selection in one flow.
Review proactive task suggestions before moving on. The best follow-ups come from items spotted in-context right after the work completes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#next-up","level":2,"title":"Next Up","text":"
Using the Scratchpad →: Store short-lived sensitive notes in an encrypted scratchpad.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle including task management in context
Persisting Decisions, Learnings, and Conventions: capturing the \"why\" behind your work
Detecting and Fixing Drift: keeping TASKS.md accurate over time
CLI Reference: full documentation for ctx add, ctx task complete, ctx task
Context Files: TASKS.md: format and conventions for TASKS.md
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/troubleshooting/","level":1,"title":"Troubleshooting","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-problem","level":2,"title":"The Problem","text":"
Something isn't working: a hook isn't firing, nudges are too noisy, context seems stale, or the agent isn't following instructions. The information to diagnose it exists (across status, drift, event logs, hook config, and session history), but assembling it manually is tedious.
ctx doctor # structural health check\nctx system events --last 20 # recent hook activity\n# or ask: \"something seems off, can you diagnose?\"\n
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx doctor CLI command Structural health report ctx doctor --json CLI command Machine-readable health report ctx system events CLI command Query local event log /ctx-doctor Skill Agent-driven diagnosis with analysis","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#quick-check-ctx-doctor","level":3,"title":"Quick Check: ctx doctor","text":"
Run ctx doctor for an instant structural health report. It checks context initialization, required files, drift, hook configuration, event logging, webhooks, reminders, task completion ratio, and context token size: all in one pass:
ctx doctor\n
ctx doctor\n==========\n\nStructure\n ✓ Context initialized (.context/)\n ✓ Required files present (4/4)\n\nQuality\n ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)\n\nHooks\n ✓ hooks.json valid (14 hooks registered)\n ○ Event logging disabled (enable with event_log: true in .ctxrc)\n\nState\n ✓ No pending reminders\n ⚠ Task completion ratio high (18/22 = 82%): consider archiving\n\nSize\n ✓ Context size: ~4200 tokens (budget: 8000)\n\nSummary: 2 warnings, 0 errors\n
Warnings are non-critical but worth fixing. Errors need attention. Informational notes (○) flag optional features that aren't enabled.
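The --json form makes doctor scriptable, for example as a CI gate. The payload shape below is an assumption (inspect `ctx doctor --json` once to confirm the field names before relying on them); a sample report stands in for the real call:

```shell
# Hypothetical report shape; in CI replace with: report=$(ctx doctor --json)
report='{"summary":{"warnings":2,"errors":0}}'

# Extract the error count (jq '.summary.errors' works too)
errors=$(printf '%s' "$report" | sed -n 's/.*"errors":\([0-9]*\).*/\1/p')
if [ "${errors:-1}" -eq 0 ]; then
  echo "doctor: clean enough to proceed"
else
  echo "doctor: $errors error(s), fix before merging"
  exit 1
fi
```

Gating on errors while tolerating warnings matches the severity model above.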
For power users: ctx system events with filters gives direct access to the event log.
# Last 50 events (default)\nctx system events\n\n# Events from a specific session\nctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6\n\n# Only QA reminder events\nctx system events --hook qa-reminder\n\n# Raw JSONL for jq processing\nctx system events --json | jq '.message'\n\n# Include rotated (older) events\nctx system events --all --last 100\n
Filters use AND logic: --hook qa-reminder --session abc123 returns only QA reminder events from that specific session.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#common-problems","level":2,"title":"Common Problems","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#ctx-not-initialized","level":3,"title":"\"ctx: not initialized\"","text":"
Symptoms: Any ctx command fails with ctx: not initialized - run \"ctx init\" first.
Cause: You're running ctx in a directory without an initialized .context/ directory. This guard runs on all user-facing commands to prevent confusing downstream errors.
Fix:
ctx init # create .context/ with template files\nctx init --minimal # or just the essentials (CONSTITUTION, TASKS, DECISIONS)\n
Commands that work without initialization: ctx init, ctx setup, ctx doctor, and help-only grouping commands (ctx, ctx system).
Symptoms: No nudges appearing, webhook silent, event log shows no entries for the expected hook.
Diagnosis:
# 1. Check if ctx is installed and on PATH\nwhich ctx && ctx --version\n\n# 2. Check if the hook is registered\ngrep \"check-persistence\" ~/.claude/plugins/ctx/hooks.json\n\n# 3. Run the hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-persistence\n\n# 4. Check event log for the hook (if enabled)\nctx system events --hook check-persistence\n
Common causes:
Plugin is not installed: run ctx init --claude to reinstall
PATH issue: the hook invokes ctx from PATH; ensure it resolves
Throttle active: most hooks fire once per day: check .context/state/ for daily marker files
Hook silenced: a custom message override may be an empty file: check ctx system message list for overrides
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#too-many-nudges","level":3,"title":"\"Too many nudges\"","text":"
Symptoms: The agent is overwhelmed with hook output. Context checkpoints, persistence reminders, and QA gates fire constantly.
Diagnosis:
# Check how often hooks fired recently\nctx system events --last 50\n\n# Count fires per hook\nctx system events --json | jq -r '.detail.hook // \"unknown\"' \\\n | sort | uniq -c | sort -rn\n
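If jq is not installed, a coreutils-only version of the per-hook count gives the same picture. A sketch using a sample log — the nested `detail.hook` field name follows the jq filter above, but the exact schema may differ:

```shell
# Sample event log with the assumed detail.hook shape
cat > /tmp/events-count.jsonl <<'EOF'
{"detail":{"hook":"qa-reminder"}}
{"detail":{"hook":"qa-reminder"}}
{"detail":{"hook":"context-checkpoint"}}
EOF

# Extract the hook name with sed, then count occurrences per hook
sed -n 's/.*"hook":"\([^"]*\)".*/\1/p' /tmp/events-count.jsonl \
  | sort | uniq -c | sort -rn
```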
Common causes:
QA reminder is noisy by design: it fires on every Edit call with no throttle. This is intentional. If it's too much, silence it with an empty override: ctx system message edit qa-reminder gate, then empty the file
Long session: context checkpoint fires with increasing frequency after prompt 15. This is the system telling you the session is getting long: consider wrapping up
Short throttle window: if you deleted marker files in .context/state/, daily-throttled hooks will re-fire
Outdated Claude Code plugin: Update the plugin using Claude Code → /plugin → \"Marketplace\"
ctx version mismatch: Build (or download) and install the latest ctx version.

Symptoms: The agent references outdated information, paths that don't exist, or decisions that were reversed.
Diagnosis:
# Structural drift check\nctx drift\n\n# Full doctor check (includes drift + more)\nctx doctor\n\n# Check when context files were last modified\nctx status --verbose\n
Common causes:
Drift accumulated: stale path references in ARCHITECTURE.md or CONVENTIONS.md. Fix with ctx drift --fix or ask the agent to clean up.
Task backlog: too many completed tasks diluting active context. Archive with ctx task archive or ctx compact --archive.
Large context files: LEARNINGS.md with 40+ entries competes for attention. Consolidate with /ctx-consolidate.
Missing session ceremonies: if /ctx-remember and /ctx-wrap-up aren't being used, context doesn't get refreshed. See Session Ceremonies.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-agent-isnt-following-instructions","level":3,"title":"\"The agent isn't following instructions\"","text":"
Symptoms: The agent ignores conventions, forgets decisions, or acts contrary to CONSTITUTION.md rules.
Diagnosis:
# Check context token size: Is it too large for the model?\nctx doctor --json | jq '.results[] | select(.name == \"context_size\")'\n\n# Check if context is actually being loaded\nctx system events --hook context-load-gate\n
Common causes:
Context too large: if total tokens exceed the model's effective attention, instructions get diluted. Check ctx doctor for the size check. Compact with ctx compact --archive.
Context not loading: if context-load-gate hasn't fired, the agent may not have received context. Verify the hook is registered.
Conflicting instructions: CONVENTIONS.md says one thing, AGENT_PLAYBOOK.md says another. Review both files for consistency.
Agent drift: the agent's behavior diverges from instructions over long sessions. This is normal. Use /ctx-reflect to re-anchor, or start a new session.
Event logging (optional but recommended): event_log: true in .ctxrc
ctx initialized: ctx init
Event logging is not required for ctx doctor or /ctx-doctor to work. Both degrade gracefully: structural checks run regardless, and the skill notes when event data is unavailable.
Start with ctx doctor: It's the fastest way to get a comprehensive health picture. Save event log inspection for when you need to understand when and how often something happened.
Enable event logging early: The log is opt-in and low-cost (~250 bytes per event, 1MB rotation cap). Enable it before you need it: Diagnosing a problem without historical data is much harder.
Use the skill for correlation: ctx doctor tells you what is wrong. /ctx-doctor tells you why by correlating structural findings with event patterns. The agent can spot connections that individual commands miss.
Event log is gitignored: It's machine-local diagnostic data, not project context. Different machines produce different event streams.
Auditing System Hooks: the complete hook catalog and webhook-based audit trails
Detecting and Fixing Drift: structural and semantic drift detection and repair
Webhook Notifications: push notifications for hook activity
ctx doctor CLI: full command reference
ctx system events CLI: event log query reference
/ctx-doctor skill: agent-driven diagnosis
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/webhook-notifications/","level":1,"title":"Webhook Notifications","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-problem","level":2,"title":"The Problem","text":"
Your agent runs autonomously (loops, implements, releases) while you are away from the terminal. You have no way to know when it finishes, hits a limit, or when a hook fires a nudge.
How do you get notified about agent activity without watching the terminal?
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tldr","level":2,"title":"TL;DR","text":"
ctx notify setup # configure webhook URL (encrypted)\nctx notify test # verify delivery\n# Hooks auto-notify on: session-end, loop-iteration, resource-danger\n
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx notify setup CLI command Configure and encrypt webhook URL ctx notify test CLI command Send a test notification ctx notify --event <name> \"msg\" CLI command Send a notification from scripts/skills .ctxrcnotify.events Configuration Filter which events reach your webhook","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-1-get-a-webhook-url","level":3,"title":"Step 1: Get a Webhook URL","text":"
Any service that accepts HTTP POST with JSON works. Common options:
Service How to get a URL IFTTT Create an applet with the \"Webhooks\" trigger Slack Create an Incoming Webhook Discord Channel Settings > Integrations > Webhooks ntfy.sh Use https://ntfy.sh/your-topic (no signup) Pushover Use API endpoint with your user key
The URL contains auth tokens. ctx encrypts it; it never appears in plaintext in your repo.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-2-configure-the-webhook","level":3,"title":"Step 2: Configure the Webhook","text":"
This encrypts the URL with AES-256-GCM using the same key as the scratchpad (~/.ctx/.ctx.key). The encrypted file (.context/.notify.enc) is safe to commit. The key lives outside the project and is never committed.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-3-test-it","level":3,"title":"Step 3: Test It","text":"
If you see No webhook configured, run ctx notify setup first.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-4-configure-events","level":3,"title":"Step 4: Configure Events","text":"
Notifications are opt-in: no events are sent unless you configure an event list in .ctxrc:
# .ctxrc\nnotify:\n events:\n - loop # loop completion or max-iteration hit\n - nudge # VERBATIM relay hooks (context checkpoint, persistence, etc.)\n - relay # all hook output (verbose, for debugging)\n - heartbeat # every-prompt session-alive signal with metadata\n
Only listed events fire. Omitting an event silently drops it.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-5-use-in-your-own-skills","level":3,"title":"Step 5: Use in Your Own Skills","text":"
Add ctx notify calls to any skill or script:
# In a release skill\nctx notify --event release \"v1.2.0 released successfully\" 2>/dev/null || true\n\n# In a backup script\nctx notify --event backup \"Nightly backup completed\" 2>/dev/null || true\n
The 2>/dev/null || true suffix ensures the notification never breaks your script: If there's no webhook or the HTTP call fails, it's a silent noop.
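The guard can be seen in isolation with a stand-in for a failing notify call. Even under `set -e`, the `|| true` swallows the failure and the script continues (`notify_stub` is a hypothetical placeholder, not a ctx command):

```shell
set -e                             # abort on any unguarded failure
notify_stub() { return 1; }        # stand-in for a failing `ctx notify` call
notify_stub 2>/dev/null || true    # failure is swallowed: a silent noop
echo "script continues"
```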
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-types","level":2,"title":"Event Types","text":"
ctx fires these events automatically:
Event Source When loop Loop script Loop completes or hits max iterations nudge System hooks VERBATIM relay nudge is emitted (context checkpoint, persistence, ceremonies, journal, resources, knowledge, version) relay System hooks Any hook output (VERBATIM relays, agent directives, block responses) heartbeat System hook Every prompt: session-alive signal with prompt count and context modification status testctx notify test Manual test notification (custom) Your skills You wire ctx notify --event <name> in your own scripts
nudge vs relay: The nudge event fires only for VERBATIM relay hooks (the ones the agent is instructed to show verbatim). The relay event fires for all hook output: VERBATIM relays, agent directives, and hard gates. Subscribe to relay for debugging (\"did the agent get the post-commit nudge?\"), nudge for user-facing assurance (\"was the checkpoint emitted?\").
Webhooks as a Hook Audit Trail
Subscribe to relay events and you get an external record of every hook that fires, independent of the agent.
This lets you verify hooks are running and catch cases where the agent absorbs a nudge instead of surfacing it.
See Auditing System Hooks for the full workflow.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#payload-format","level":2,"title":"Payload Format","text":"
The detail field is a structured template reference containing the hook name, variant, and any template variables. This lets receivers filter by hook or variant without parsing rendered text. The field is omitted when no template reference applies (e.g. custom ctx notify calls).
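As an illustration of filtering on `detail` rather than on rendered text, a receiver might extract the hook name directly. The payload below is hypothetical — field names beyond those described above are assumptions, not the documented schema:

```shell
# Hypothetical payload shape: detail carries the hook name and variant
payload='{"event":"nudge","message":"Context checkpoint reached","detail":{"hook":"context-checkpoint","variant":"gate"}}'

# Pull the hook name out of detail without parsing the message text
echo "$payload" | sed -n 's/.*"hook":"\([^"]*\)".*/\1/p'
```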
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#heartbeat-payload","level":3,"title":"Heartbeat Payload","text":"
The heartbeat event fires on every prompt with session metadata and token usage telemetry:
The tokens, context_window, and usage_pct fields are included when token data is available from the session JSONL file. They are omitted when no usage data has been recorded yet (e.g. first prompt).
Unlike other events, heartbeat fires every prompt (not throttled). Use it for observability dashboards or liveness monitoring of long-running sessions.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#security-model","level":2,"title":"Security Model","text":"Component Location Committed? Permissions Encryption key ~/.ctx/.ctx.key No (user-level) 0600 Encrypted URL .context/.notify.enc Yes (safe) 0600 Webhook URL Never on disk in plaintext N/A N/A
The key is shared with the scratchpad. If you rotate the encryption key, re-run ctx notify setup to re-encrypt the webhook URL with the new key.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#key-rotation","level":2,"title":"Key Rotation","text":"
ctx checks the age of the encryption key once per day. If it's older than 90 days (configurable via key_rotation_days), a VERBATIM nudge is emitted suggesting rotation.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#worktrees","level":2,"title":"Worktrees","text":"
The webhook URL is encrypted with the same encryption key (~/.ctx/.ctx.key). Because the key lives at the user level, it is shared across all worktrees on the same machine, so notifications work in worktrees automatically.
The exception is environments without the key: agents running in worktrees on a machine (or sandbox) that lacks ~/.ctx/.ctx.key cannot send webhook alerts. For autonomous runs where worktree agents are opaque, monitor them from the terminal rather than relying on webhooks. Enrich journals and review results on the main branch after merging.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-log-the-local-complement","level":2,"title":"Event Log: The Local Complement","text":"
Don't need a webhook but want diagnostic visibility? Enable event_log: true in .ctxrc. The event log writes the same payload as webhooks to a local JSONL file (.context/state/events.jsonl) that you can query without any external service:
ctx system events --last 20 # recent hook activity\nctx system events --hook qa-reminder # filter by hook\n
Webhooks and event logging are independent: you can use either, both, or neither. Webhooks give you push notifications and an external audit trail. The event log gives you local queryability and ctx doctor integration.
See Troubleshooting for how they work together.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tips","level":2,"title":"Tips","text":"
Fire-and-forget: Notifications never block. HTTP errors are silently ignored. No retry, no response parsing.
No webhook = no cost: When no webhook is configured, ctx notify exits immediately. System hooks that call notify.Send() add zero overhead.
Multiple projects: Each project has its own .notify.enc. You can point different projects at different webhooks.
Event filter is per-project: Configure notify.events in each project's .ctxrc independently.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#next-up","level":2,"title":"Next Up","text":"
Auditing System Hooks →: Verify your hooks are running, audit what they do, and get alerted when they go silent.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx notify: full command reference
Configuration: .ctxrc settings including notify options
Running an Unattended AI Agent: how loops work and how notifications fit in
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: using webhooks as an external audit trail for hook execution
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/","level":1,"title":"When to Use a Team of Agents","text":"","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-problem","level":2,"title":"The Problem","text":"
You have a task, and you are wondering: \"should I throw more agents at it?\"
More agents can mean faster results, but they also mean coordination overhead, merge conflicts, divergent mental models, and wasted tokens re-reading context.
The wrong setup costs more than it saves.
This recipe is a decision framework: It helps you choose between a single agent, parallel worktrees, and a full agent team, and explains what ctx provides at each level.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tldr","level":2,"title":"TL;DR","text":"
Single agent for most work;
Parallel worktrees when tasks touch disjoint file sets;
Agent teams only when tasks need real-time coordination. When in doubt, start with one agent.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-spectrum","level":2,"title":"The Spectrum","text":"
There are three modes, ordered by complexity:
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#1-single-agent-default","level":3,"title":"1. Single Agent (Default)","text":"
One agent, one session, one branch. This is correct for most work.
Use this when:
The task has linear dependencies (step 2 needs step 1's output);
Changes touch overlapping files;
You need tight feedback loops (review each change before the next);
The task requires deep understanding of a single area;
Total effort is less than a few hours of agent time.
ctx provides: Full .context/: tasks, decisions, learnings, conventions, all in one session.
The agent builds a coherent mental model and persists it as it goes.
Example tasks: Bug fixes, feature implementation, refactoring a module, writing documentation for one area, debugging.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#2-parallel-worktrees-independent-tracks","level":3,"title":"2. Parallel Worktrees (Independent Tracks)","text":"
2-4 agents, each in a separate git worktree on its own branch, working on non-overlapping parts of the codebase.
Use this when:
You have 5+ independent tasks in the backlog;
Tasks group cleanly by directory or package;
File overlap between groups is zero or near-zero;
Each track can be completed and merged independently;
You want parallelism without coordination complexity.
ctx provides: Shared .context/ via git (each worktree sees the same tasks, decisions, conventions). /ctx-worktree skill for setup and teardown. TASKS.md as a lightweight work queue.
Example tasks: Docs + new package + test coverage (three tracks that don't touch the same files). Parallel recipe writing. Independent module development.
See: Parallel Agent Development with Git Worktrees
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#3-agent-team-coordinated-swarm","level":3,"title":"3. Agent Team (Coordinated Swarm)","text":"
Multiple agents communicating via messages, sharing a task list, with a lead agent coordinating. Claude Code's team/swarm feature.
Use this when:
Tasks have dependencies but can still partially overlap;
You need research and implementation happening simultaneously;
The work requires different roles (researcher, implementer, tester);
A lead agent needs to review and integrate others' work;
The task is large enough that coordination cost is justified.
ctx provides: .context/ as shared state that all agents can read. Task tracking for work assignment. Decisions and learnings as team memory that survives individual agent turnover.
Example tasks: Large refactor across modules where a lead reviews merges. Research and implementation where one agent explores options while another builds. Multi-file feature that needs integration testing after parallel implementation.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-decision-framework","level":2,"title":"The Decision Framework","text":"
Ask these questions in order:
Can one agent do this in a reasonable time?\n YES → Single agent. Stop here.\n NO ↓\n\nCan the work be split into non-overlapping file sets?\n YES → Parallel worktrees (2-4 tracks)\n NO ↓\n\nDo the subtasks need to communicate during execution?\n YES → Agent team with lead coordination\n NO → Parallel worktrees with a merge step\n
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-file-overlap-test","level":3,"title":"The File Overlap Test","text":"
This is the critical decision point. Before choosing multi-agent, list the files each subtask would touch. If two subtasks modify the same file, they belong in the same track (or the same single-agent session).
You: \"I want to parallelize these tasks. Which files would each one touch?\"\n\nAgent: [reads `TASKS.md`, analyzes codebase]\n \"Task A touches internal/config/ and internal/cli/initialize/\n Task B touches docs/ and site/\n Task C touches internal/config/ and internal/cli/status/\n\n Tasks A and C overlap on internal/config/ # they should be\n in the same track. Task B is independent.\"\n
When in doubt, keep things in one track. A merge conflict in a critical file costs more time than the parallelism saves.
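The overlap test can also be run mechanically: sort each task's file list and intersect them. A sketch with illustrative paths (any output from `comm -12` means the tasks overlap and belong in one track):

```shell
# Files each task would touch (illustrative paths from the dialogue above)
printf '%s\n' internal/config/ internal/cli/initialize/ | sort > /tmp/task-a.txt
printf '%s\n' internal/config/ internal/cli/status/     | sort > /tmp/task-c.txt

# comm -12 prints only lines common to both sorted lists
comm -12 /tmp/task-a.txt /tmp/task-c.txt
```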
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#when-teams-make-things-worse","level":2,"title":"When Teams Make Things Worse","text":"
\"More agents\" is not always better. Watch for these patterns:
Merge hell: If you are spending more time resolving conflicts than the parallel work saved, you split wrong: Re-group by file overlap.
Context divergence: Each agent builds its own mental model. After 30 minutes of independent work, agent A might make assumptions that contradict agent B's approach. Shorter tracks with frequent merges reduce this.
Coordination theater: A lead agent spending most of its time assigning tasks, checking status, and sending messages instead of doing work. If the task list is clear enough, worktrees with no communication are cheaper.
Re-reading overhead: Every agent reads .context/ on startup. A team of 4 agents each reading 4000 tokens of context = 16000 tokens before anyone does any work. For small tasks, that overhead dominates.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#what-ctx-gives-you-at-each-level","level":2,"title":"What ctx Gives You at Each Level","text":"ctx Feature Single Agent Worktrees Team .context/ files Full access Shared via git Shared via filesystem TASKS.md Work queue Split by track Assigned by lead Decisions/Learnings Persisted in session Persisted per branch Persisted by any agent /ctx-next Picks next task Picks within track Lead assigns /ctx-worktree N/A Setup + teardown Optional /ctx-commit Normal commits Per-branch commits Per-agent commits","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#team-composition-recipes","level":2,"title":"Team Composition Recipes","text":"
Four practical team compositions for common workflows.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#feature-development-3-agents","level":3,"title":"Feature Development (3 agents)","text":"Role Responsibility Architect Writes spec in specs/, breaks work into TASKS.md phases Implementer Picks tasks from TASKS.md, writes code, marks [x] done Reviewer Runs tests, ctx drift, lint; files issues as new tasks
Coordination: TASKS.md checkboxes. Architect writes tasks before implementer starts. Reviewer runs after each implementer commit.
Anti-pattern: All three agents editing the same file simultaneously. Sequence the work so only one agent touches a file at a time.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#consolidation-sprint-3-4-agents","level":3,"title":"Consolidation Sprint (3-4 agents)","text":"Role Responsibility Auditor Runs ctx drift, identifies stale paths and broken refs Code Fixer Updates source code to match context (or vice versa) Doc Writer Updates ARCHITECTURE.md, CONVENTIONS.md, and docs/ Test Fixer (Optional) Fixes tests broken by the fixer's changes
Coordination: Auditor's ctx drift output is the shared work queue. Each agent claims a subset of issues by adding #in-progress labels.
Anti-pattern: Fixer and doc writer both editing ARCHITECTURE.md. Assign file ownership explicitly.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#release-prep-2-agents","level":3,"title":"Release Prep (2 agents)","text":"Role Responsibility Release Notes Generates changelog from commits, writes release notes Validation Runs full test suite, lint, build across platforms
Coordination: Both read TASKS.md to identify what shipped. Release notes agent works from git log; validation agent works from make audit.
Anti-pattern: Release notes agent running tests \"to verify.\" Each agent stays in its lane.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#documentation-sprint-3-agents","level":3,"title":"Documentation Sprint (3 agents)","text":"Role Responsibility Content Writes new pages, expands existing docs Cross-linker Adds nav entries, cross-references, \"See Also\" sections Verifier Builds site, checks broken links, validates rendering
Coordination: Content agent writes files first. Cross-linker updates zensical.toml and index pages after content lands. Verifier builds after each batch.
Anti-pattern: Content and cross-linker both editing zensical.toml. Batch nav updates into the cross-linker's pass.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tips","level":2,"title":"Tips","text":"
Start with one agent: Only add parallelism when you have identified the bottleneck. \"This would go faster with more agents\" is usually wrong for tasks under 2 hours.
The 3-4 agent ceiling is real: Coordination overhead grows quadratically. 2 agents = 1 communication pair. 4 agents = 6 pairs. Beyond 4, you are managing agents more than doing work.
Worktrees > teams for most parallelism needs: If agents don't need to talk to each other during execution, worktrees give you parallelism with zero coordination overhead.
Use ctx as the shared brain: Whether it's one agent or four, the .context/ directory is the single source of truth. Decisions go in DECISIONS.md, not in chat messages between agents.
Merge early, merge often: Long-lived parallel branches diverge. Merge a track as soon as it's done rather than waiting for all tracks to finish.
TASKS.md conflicts are normal: Multiple agents completing different tasks will conflict on merge. The resolution is always additive: accept all [x] completions from both sides.
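The pair counts cited in the tips above follow n(n−1)/2, which is where the quadratic growth comes from:

```shell
# Number of communication pairs among n agents: n choose 2
pairs() { echo $(( $1 * ($1 - 1) / 2 )); }

pairs 2   # → 1 pair
pairs 4   # → 6 pairs
pairs 6   # → 15 pairs: well past the practical ceiling
```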
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Run multiple agents on independent task tracks using git worktrees.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#go-deeper","level":2,"title":"Go Deeper","text":"
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
Session Journal: browse and search session history
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#see-also","level":2,"title":"See Also","text":"
Parallel Agent Development with Git Worktrees: the mechanical \"how\" for worktree-based parallelism
Running an Unattended AI Agent: serial autonomous loops: a different scaling strategy
Tracking Work Across Sessions: managing the task backlog that feeds into any multi-agent setup
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"reference/","level":1,"title":"Reference","text":"
Technical reference for ctx commands, skills, and internals.
","path":["Reference"],"tags":[]},{"location":"reference/#the-system-explains-itself","level":3,"title":"The System Explains Itself","text":"
The 12 properties that must hold for any valid ctx implementation. Not features: constraints. The system's contract with its users and contributors.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#how-ctx-differs-from-similar-tools","level":2,"title":"How ctx Differs from Similar Tools","text":"
There are many tools in the AI ecosystem that touch parts of the context problem:
Some manage prompts.
Some retrieve data.
Some provide runtime context objects.
Some offer enterprise platforms.
ctx focuses on a different layer entirely.
This page explains where ctx fits, and where it intentionally does not.
The core difference is where context lives: most tools keep it in prompts or API calls; ctx keeps it in durable files in your repository. That single difference explains nearly all of ctx's design choices.
Question Most tools ctx Where does context live? In prompts or APIs In files How long does it last? One request / one session Across time Who can read it? The model Humans and tools How is it updated? Implicitly Explicitly Is it inspectable? Rarely Always","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#prompt-management-tools","level":2,"title":"Prompt Management Tools","text":"
Examples include:
prompt templates;
reusable system prompts;
prompt libraries;
prompt versioning tools.
These tools help you start a session.
They do not help you continue one.
Prompt tools:
inject text at session start;
are ephemeral by design;
do not evolve with the project.
ctx:
persists knowledge over time;
accumulates decisions and learnings;
makes the context part of the repository itself.
Prompt tooling and ctx are complementary, not competing: they simply operate at different layers.
Users often evaluate ctx against specific tools they already use. These comparisons clarify where responsibilities overlap, where they diverge, and where the tools are genuinely complementary.
Anthropic's auto-memory is tool-managed memory (L2): the model decides what to remember, stores it automatically, and retrieves it implicitly. ctx is system memory (L3): humans and agents explicitly curate decisions, learnings, and tasks in inspectable files.
Auto-memory is convenient: you do not configure anything. But it is also opaque: you cannot see what was stored, edit it precisely, or share it across tools. ctx files are plain Markdown in your repository, visible in diffs and code review.
The two are complementary. ctx can absorb auto-memory as an input source (importing what the model remembered into structured context files) while providing the durable, inspectable layer that auto-memory lacks.
Static rule files (.cursorrules, .claude/rules/) declare conventions: coding style, forbidden patterns, preferred libraries. They are effective for what to do and load automatically at session start.
ctx adds dimensions that rule files do not cover: architectural decisions with rationale, learnings discovered during development, active tasks, and a constitution that governs agent behavior. Critically, ctx context accumulates: each session can add to it, and token budgeting ensures only the most relevant context is injected.
Use rule files for static conventions. Use ctx for evolving project memory.
Aider's --read flag injects file contents at session start; --watch reloads them on change. The concept is similar to ctx's \"load\" step: make the agent aware of specific files.
The differences emerge beyond loading. Aider has no persistence model: nothing the agent learns during a session is written back. There is no token budgeting (large files consume the full context window), no priority ordering across file types, and no structured format for decisions or learnings. ctx provides the full lifecycle: load, accumulate, persist, and budget.
GitHub Copilot's @workspace performs workspace-wide code search. It answers \"what code exists?\" - finding function definitions, usages, and file structure across the repository.
ctx answers a different question: \"what did we decide?\" It stores architectural intent, not code indices. Copilot's workspace search and ctx's project memory are orthogonal; one finds code, the other preserves the reasoning behind it.
Cline's memory bank stores session context within the Cline extension. The motivation is similar to ctx: help the agent remember across sessions.
The key difference is portability. Cline memory is tied to Cline - it does not transfer to Claude Code, Cursor, Aider, or any other tool. ctx is tool-agnostic: context lives in plain files that any editor, agent, or script can read. Switching tools does not mean losing memory.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-a-good-fit","level":2,"title":"When ctx Is a Good Fit","text":"
ctx works best when:
you want AI work to compound over time;
architectural decisions matter;
context must be inspectable;
humans and AI must share the same source of truth;
Git history should include why, not just what.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-not-the-right-tool","level":2,"title":"When ctx Is Not the Right Tool","text":"
ctx is probably not what you want if:
you only need one-off prompts;
you rely exclusively on RAG;
you want autonomous agents without a human-readable state.
See also: You Can't Import Expertise - why project-specific context matters more than generic best practices.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/design-invariants/","level":1,"title":"Invariants","text":"","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#the-system-explains-itself","level":1,"title":"The System Explains Itself","text":"
These are the properties that must hold for any valid ctx implementation.
These are not features.
These are constraints.
A change that violates an invariant is a category error, not an improvement.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#cognitive-state-tiers","level":2,"title":"Cognitive State Tiers","text":"
ctx distinguishes between three forms of state:
Authoritative state: Versioned, inspectable artifacts that define intent and survive time.
Delivery views: Deterministic assemblies of the authoritative state for a specific budget or workflow.
Ephemeral working state: Local, transient, or sensitive data that assists interaction but does not define system truth.
The invariants below apply primarily to the authoritative cognitive state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#1-cognitive-state-is-explicit","level":2,"title":"1. Cognitive State Is Explicit","text":"
All authoritative context lives in artifacts that can be inspected, reviewed, and versioned.
If something is important, it must exist as a file, not only in a prompt, a chat, or a model's hidden memory.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#2-assembly-is-reproducible","level":2,"title":"2. Assembly Is Reproducible","text":"
Given the same:
repository state,
configuration,
and inputs,
context assembly produces the same result.
Heuristics may rank or filter for delivery under constraints.
They do not alter the authoritative state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#3-the-authoritative-state-is-human-readable","level":2,"title":"3. The Authoritative State Is Human-Readable","text":"
The authoritative cognitive state must be stored in formats that a human can:
read,
diff,
review,
and edit directly.
Sensitive working memory may be encrypted at rest. However, encryption must not become the only representation of authoritative knowledge.
Reasoning, decisions, and outcomes must remain available after the interaction that produced them has ended.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#5-authority-is-user-defined","level":2,"title":"5. Authority Is User-Defined","text":"
What enters the authoritative context is an explicit human decision.
Models may suggest.
Automation may assist.
Selection is never implicit.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#6-operation-is-local-first","level":2,"title":"6. Operation Is Local-First","text":"
The core system must function without requiring network access or a remote service.
External systems may extend ctx.
They must not be required for its operation.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#7-versioning-is-the-memory-model","level":2,"title":"7. Versioning Is the Memory Model","text":"
The evolution of the authoritative cognitive state must be:
preserved,
inspectable,
and branchable.
Ephemeral and sensitive working state may use different retention and diff strategies by design.
Understanding includes understanding how we arrived here.
Authoritative cognitive state must have a defined layout that:
communicates intent,
supports navigation,
and prevents drift.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#9-verification-is-the-scoreboard","level":2,"title":"9. Verification Is the Scoreboard","text":"
Claims without recorded outcomes are noise.
Reality (observed and captured) is the only signal that compounds.
This invariant defines a required direction:
The authoritative state must be able to record expectation and result.
Work that has already produced understanding must not be re-derived from scratch.
Explored paths, rejected options, and validated conclusions are permanent assets.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#11-policies-are-encoded-not-remembered","level":2,"title":"11. Policies Are Encoded, not Remembered","text":"
Alignment must not depend on recall or goodwill.
Constraints that matter must exist in machine-readable form and participate in context assembly.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#12-the-system-explains-itself","level":2,"title":"12. The System Explains Itself","text":"
From the repository state alone it must be possible to determine:
To avoid category errors, ctx does not attempt to be:
a skill,
a prompt management tool,
a chat history viewer,
an autonomous agent runtime,
a vector database,
a hosted memory service.
Such systems may integrate with ctx.
They do not define it.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#implications-for-contributions","level":1,"title":"Implications for Contributions","text":"
Valid contributions:
strengthen an invariant,
reduce the cost of maintaining an invariant,
or extend the system without violating invariants.
Invalid contributions:
introduce hidden authoritative state,
replace reproducible assembly with non-reproducible behavior,
make core operation depend on external services,
reduce human inspectability of authoritative state,
or bypass explicit user authority over what becomes authoritative.
Everything else (commands, skills, layouts, integrations, optimizations) is an implementation detail.
These invariants are the system.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/scratchpad/","level":1,"title":"Scratchpad","text":"","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#what-is-ctx-scratchpad","level":2,"title":"What Is ctx Scratchpad?","text":"
A one-liner scratchpad, encrypted at rest, synced via git.
Quick notes that don't fit decisions, learnings, or tasks: reminders, intermediate values, sensitive tokens, working memory during debugging. Entries are numbered, reorderable, and persist across sessions.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#encrypted-by-default","level":2,"title":"Encrypted by Default","text":"
Scratchpad entries are encrypted with AES-256-GCM before touching the disk.
Component Path Git status Encryption key ~/.ctx/.ctx.key User-level, 0600 permissions Encrypted data .context/scratchpad.enc Committed
The key is generated automatically during ctx init (256-bit via crypto/rand) and stored at ~/.ctx/.ctx.key. One key per machine, shared across all projects.
The ciphertext format is [12-byte nonce][ciphertext+tag]. No external dependencies: Go stdlib only.
Because the key is .gitignored and the data is committed, you get:
At-rest encryption: the .enc file is opaque without the key
Git sync: push/pull the encrypted file like any other tracked file
Key separation: the key never leaves the machine unless you copy it
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#commands","level":2,"title":"Commands","text":"Command Purpose ctx pad List all entries (numbered 1-based) ctx pad show N Output raw text of entry N (no prefix, pipe-friendly) ctx pad add \"text\" Append a new entry ctx pad rm N Remove entry at position N ctx pad edit N \"text\" Replace entry N with new text ctx pad edit N --append \"text\" Append text to the end of entry N ctx pad edit N --prepend \"text\" Prepend text to the beginning of entry N ctx pad add TEXT --file PATH Ingest a file as a blob entry (TEXT is the label) ctx pad show N --out PATH Write decoded blob content to a file ctx pad mv N M Move entry from position N to position M ctx pad resolve Show both sides of a merge conflict for resolution ctx pad import FILE Bulk-import lines from a file (or stdin with -) ctx pad import --blob DIR Import directory files as blob entries ctx pad export [DIR] Export all blob entries to a directory as files ctx pad merge FILE... Merge entries from other scratchpad files into current
All commands decrypt on read, operate on plaintext in memory, and re-encrypt on write. The key file is never printed to stdout.
# Add a note\nctx pad add \"check DNS propagation after deploy\"\n\n# List everything\nctx pad\n# 1. check DNS propagation after deploy\n# 2. staging API key: sk-test-abc123\n\n# Show raw text (for piping)\nctx pad show 2\n# sk-test-abc123\n\n# Compose entries\nctx pad edit 1 --append \"$(ctx pad show 2)\"\n\n# Reorder\nctx pad mv 2 1\n\n# Clean up\nctx pad rm 2\n
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#bulk-import-and-export","level":2,"title":"Bulk Import and Export","text":"
Import lines from a file in bulk (each non-empty line becomes an entry):
# Import from a file\nctx pad import notes.txt\n\n# Import from stdin\ngrep TODO *.go | ctx pad import -\n
Export all blob entries to a directory as files:
# Export to a directory\nctx pad export ./ideas\n\n# Preview without writing\nctx pad export --dry-run\n\n# Overwrite existing files\nctx pad export --force ./backup\n
Combine entries from other scratchpad files into your current pad. Useful when merging work from parallel worktrees, other machines, or teammates:
# Merge from a worktree's encrypted scratchpad\nctx pad merge worktree/.context/scratchpad.enc\n\n# Merge from multiple sources (encrypted and plaintext)\nctx pad merge pad-a.enc notes.md\n\n# Merge a foreign encrypted pad using its key\nctx pad merge --key /other/.ctx.key foreign.enc\n\n# Preview without writing\nctx pad merge --dry-run pad-a.enc pad-b.md\n
Each input file is auto-detected as encrypted or plaintext: decryption is attempted first, and on failure the file is parsed as plain text. Entries are deduplicated by exact content, so running merge twice with the same file is safe.
The scratchpad can store small files (up to 64 KB) as blob entries. Files are base64-encoded and stored with a human-readable label.
# Ingest a file: first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# Listing shows label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n\n# Extract to a file\nctx pad show 2 --out ./recovered.yaml\n\n# Or print decoded content to stdout\nctx pad show 2\n
Blob entries are encrypted identically to text entries. The internal format is label:::base64data; you never need to construct it manually.
Constraint Value Max file size (pre-encoding) 64 KB Storage format label:::base64(content) Display label [BLOB] in listings
When Should You Use Blobs
Blobs are for small files you want encrypted and portable: config snippets, key fragments, deployment manifests, test fixtures. For anything larger than 64 KB, use the filesystem directly.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#using-with-ai","level":2,"title":"Using with AI","text":"
Use Natural Language
As with many ctx features, the scratchpad can also be driven with natural language. You don't have to memorize the CLI commands.
The CLI gives you precision; natural language gives you flow.
The /ctx-pad skill maps natural language to ctx pad commands. You don't need to remember the syntax:
You say What happens \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"show my scratchpad\" ctx pad \"delete the third entry\" ctx pad rm 3 \"update entry 2 to include the new endpoint\" ctx pad edit 2 \"...\" \"move entry 4 to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./backup\" ctx pad export ./backup \"merge the scratchpad from the worktree\" ctx pad merge worktree/.context/scratchpad.enc
The skill handles the translation. You describe what you want in plain English; the agent picks the right command.
The encryption key lives at ~/.ctx/.ctx.key (outside the project directory). Because all worktrees on the same machine share this path, ctx pad works in worktrees automatically - no special setup needed.
For projects where encryption is unnecessary, disable it in .ctxrc:
scratchpad_encrypt: false\n
In plaintext mode:
Entries are stored in .context/scratchpad.md instead of .enc.
No key is generated or required.
All ctx pad commands work identically.
The file is human-readable and diffable.
When Should You Use Plaintext
Plaintext mode is useful for non-sensitive projects, solo work where encryption adds friction, or when you want scratchpad entries visible in git diff.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#when-should-you-use-scratchpad-versus-context-files","level":2,"title":"When Should You Use Scratchpad versus Context Files","text":"Use case Where it goes Temporary reminders (\"check X after deploy\") Scratchpad Working values during debugging Scratchpad Sensitive tokens or API keys (short-term) Scratchpad Quick notes that don't fit anywhere else Scratchpad Items that are not directly relevant to the project Scratchpad Things that you want to keep near, but also hidden Scratchpad Work items with completion tracking TASKS.md Trade-offs with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Rule of thumb:
If it needs structure or will be referenced months later, use a context file (i.e. DECISIONS.md, LEARNINGS.md, TASKS.md).
If it is working memory for the current session or week, use the scratchpad.
Session journals contain sensitive data such as file contents, commands, API keys, internal discussions, error messages with stack traces, and more.
The .context/journal-site/ and .context/journal-obsidian/ directories MUST be .gitignored.
DO NOT host your journal publicly.
DO NOT commit your journal files to version control.
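A matching .gitignore fragment covering both generated directories (assuming the default locations named above):

```
.context/journal-site/
.context/journal-obsidian/
```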
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#browse-your-session-history","level":2,"title":"Browse Your Session History","text":"
ctx's Session Journal turns your AI coding sessions into a browsable, searchable, and editable archive.
After using ctx for a couple of sessions, you can generate a journal site with:
# Import all sessions to markdown\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Then open http://localhost:8000 to browse your sessions.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#what-you-get","level":2,"title":"What You Get","text":"
The Session Journal gives you:
Browsable history: Navigate through all your AI sessions by date
Full conversations: See every message, tool use, and result
Token usage: Track how many tokens each session consumed
Search: Find sessions by content, project, or date
Dark mode: Easy on the eyes for late-night archaeology
Each session page includes the following sections:
Section Content Metadata Date, time, duration, model, project, git branch Summary Space for your notes (editable) Tool Usage Which tools were used and how often Conversation Full transcript with timestamps","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#1-import-sessions","level":3,"title":"1. Import Sessions","text":"
# Import all sessions from current project (only new files)\nctx journal import --all\n\n# Import sessions from all projects\nctx journal import --all --all-projects\n\n# Import a specific session by ID (always writes)\nctx journal import abc123\n\n# Preview what would be imported\nctx journal import --all --dry-run\n\n# Re-import existing (regenerates conversation, preserves YAML frontmatter)\nctx journal import --all --regenerate\n\n# Discard frontmatter during regeneration\nctx journal import --all --regenerate --keep-frontmatter=false -y\n
Imported sessions go to .context/journal/ as editable Markdown files.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#2-generate-the-site","level":3,"title":"2. Generate the Site","text":"
# Generate site structure\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#3-browse-and-search","level":3,"title":"3. Browse and Search","text":"
Imported sessions are plain Markdown in .context/journal/. You can:
Add summaries: Fill in the ## Summary section
Add notes: Insert your own commentary anywhere
Highlight key moments: Use Markdown formatting
Delete noise: Remove irrelevant tool outputs
After editing, regenerate the site:
ctx journal site --serve\n
Safe by Default
Running ctx journal import --all only imports new sessions. Existing files are skipped entirely (your edits and enrichments are never touched).
Use --regenerate to re-import existing files. Conversation content is regenerated, but YAML frontmatter (topics, type, outcome, etc.) is preserved. You'll be prompted before any existing files are overwritten; add -y to skip the prompt.
Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags. If you prefer to add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
Claude Code generates \"suggestion\" sessions for auto-complete prompts. These are separated in the index under a \"Suggestions\" section to keep your main session list focused.
Raw imported sessions contain basic metadata (date, time, project) but lack the structured information needed for effective search, filtering, and analysis. Journal enrichment adds semantic metadata that transforms a flat archive into a searchable knowledge base.
Field Required Description title Yes Descriptive title (not the session slug) date Yes Session date (YYYY-MM-DD) type Yes Session type (see below) outcome Yes How the session ended (see below) topics No Subject areas discussed technologies No Languages, databases, frameworks libraries No Specific packages or libraries used key_files No Important files created or modified
Type values:
Type When to use feature Building new functionality bugfix Fixing broken behavior refactor Restructuring without behavior change exploration Research, learning, experimentation debugging Investigating issues documentation Writing docs, comments, README
Outcome values:
Outcome Meaning completed Goal achieved partial Some progress, work continues abandoned Stopped pursuing this approach blocked Waiting on external dependency","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-ctx-journal-enrich","level":3,"title":"Using /ctx-journal-enrich","text":"
The /ctx-journal-enrich skill automates enrichment by analyzing conversation content and proposing metadata.
It also extracts decisions, learnings, and tasks mentioned in the conversation, then shows a diff and asks for confirmation before writing.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#before-and-after","level":3,"title":"Before and After","text":"
Before enrichment:
# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\n[Add your summary of this session]\n\n## Conversation\n...\n
After enrichment:
---\ntitle: \"Add Redis caching to API endpoints\"\ndate: 2026-01-24\ntype: feature\noutcome: completed\ntopics:\n - caching\n - api-performance\ntechnologies:\n - go\n - redis\nkey_files:\n - internal/api/middleware/cache.go\n - internal/cache/redis.go\n---\n\n# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\nImplemented Redis-based caching middleware for frequently accessed API endpoints.\nAdded cache invalidation on writes and configurable TTL per route. Reduced\n the average response time from 200ms to 15ms for cached routes.\n\n## Decisions\n\n* Used Redis over in-memory cache for horizontal scaling\n* Chose per-route TTL configuration over global setting\n\n## Learnings\n\n* Redis WATCH command prevents race conditions during cache invalidation\n\n## Conversation\n...\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#enrichment-and-site-generation","level":3,"title":"Enrichment and Site Generation","text":"
The journal site generator uses enriched metadata for better organization:
Titles appear in navigation instead of slugs
Summaries provide context in the index
Topics enable filtering (when using search)
Types allow grouping by work category
Future improvements will add topic-based navigation and outcome filtering to the generated site.
Use ctx journal site when you want a web-browsable archive with search and dark mode. Use ctx journal obsidian when you want graph view, backlinks, and tag-based navigation inside Obsidian. Both use the same enriched source entries: you can generate both.
The complete journal workflow has four stages. Each stage is idempotent: it is safe to re-run, and already-processed entries are skipped.
import → enrich → rebuild\n
Stage Command / Skill What it does Skips if Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) Enrich /ctx-journal-enrich Adds frontmatter, summaries, topics Frontmatter already present Rebuild ctx journal site --build Generates static HTML site -- Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks --
One-command pipeline
/ctx-journal-enrich-all handles import automatically - it detects unimported sessions and imports them before enriching. You only need to run ctx journal site --build afterward.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-make-journal","level":3,"title":"Using make journal","text":"
If your project includes Makefile.ctx (deployed by ctx init), the first and last stages are combined:
make journal # import + rebuild\n
After it runs, it reminds you to enrich in Claude Code:
Next steps (in Claude Code):\n /ctx-journal-enrich-all # imports if needed + adds metadata per entry\n\nThen re-run: make journal\n
Rendering Issues?
If individual entries have rendering problems (broken fences, malformed lists), check the programmatic normalization in the import pipeline. Most cases are handled automatically during ctx journal import.
# Import, browse, then enrich in Claude Code\nmake journal && make journal-serve\n# Then in Claude Code: /ctx-journal-enrich <session>\n
After a productive session:
# Import just that session and add notes\nctx journal import <session-id>\n# Edit .context/journal/<session>.md\n# Regenerate: ctx journal site\n
Searching across all sessions:
# Use grep on the journal directory\ngrep -r \"authentication\" .context/journal/\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#requirements","level":2,"title":"Requirements","text":"Use pipx for zensical
pip install zensical may install a non-functional stub when run against the system Python, and virtualenv setups bring their own problems.
These issues are especially common on macOS.
Use pipx install zensical, which creates an isolated environment and handles Python version management automatically.
The journal site uses zensical for static site generation:
Skills are slash commands that run inside your AI assistant (e.g., /ctx-next), as opposed to CLI commands that run in your terminal (e.g., ctx status).
Skills give your agent structured workflows: It knows what to read, what to run, and when to ask. Most wrap one or more ctx CLI commands with opinionated behavior on top.
Skills Are Best Used Conversationally
The beauty of ctx is that it's designed to be intuitive and conversational, allowing you to interact with your AI assistant naturally. That's why you don't have to memorize many of these skills.
See the Prompting Guide for natural-language triggers that invoke these skills conversationally.
However, when you need more precise control, you can invoke the relevant skills directly.
","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#all-skills","level":2,"title":"All Skills","text":"Skill Description Type /ctx-remember Recall project context and present structured readback user-invocable /ctx-wrap-up End-of-session context persistence ceremony user-invocable /ctx-status Show context summary with interpretation user-invocable /ctx-agent Load full context packet for AI consumption user-invocable /ctx-next Suggest 1-3 concrete next actions with rationale user-invocable /ctx-commit Commit with integrated context persistence user-invocable /ctx-reflect Pause and reflect on session progress user-invocable /ctx-add-task Add actionable task to TASKS.md user-invocable /ctx-add-decision Record architectural decision with rationale user-invocable /ctx-add-learning Record gotchas and lessons learned user-invocable /ctx-add-convention Record coding convention for consistency user-invocable /ctx-archive Archive completed tasks from TASKS.md user-invocable /ctx-pad Manage encrypted scratchpad entries user-invocable /ctx-history Browse and import AI session history user-invocable /ctx-journal-enrich Enrich single journal entry with metadata user-invocable /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich user-invocable /ctx-blog Generate blog post draft from project activity user-invocable /ctx-blog-changelog Generate themed blog post from a commit range user-invocable /ctx-consolidate Consolidate redundant learnings or decisions user-invocable /ctx-drift Detect and fix context drift user-invocable /ctx-prompt Apply, list, and manage saved prompt templates user-invocable /ctx-prompt-audit Analyze prompting patterns for improvement user-invocable /ctx-check-links Audit docs for dead internal and external links user-invocable /ctx-sanitize-permissions Audit Claude Code permissions for security risks user-invocable /ctx-brainstorm Structured design dialogue before implementation user-invocable /ctx-spec 
Scaffold a feature spec from a project template user-invocable /ctx-import-plans Import Claude Code plan files into project specs user-invocable /ctx-implement Execute a plan step-by-step with verification user-invocable /ctx-loop Generate autonomous loop script user-invocable /ctx-worktree Manage git worktrees for parallel agents user-invocable /ctx-architecture Build and maintain architecture maps user-invocable /ctx-remind Manage session-scoped reminders user-invocable /ctx-doctor Troubleshoot ctx behavior with health checks and event analysis user-invocable /ctx-skill-audit Audit skills against Anthropic prompting best practices user-invocable /ctx-skill-creator Create, improve, and test skills user-invocable /ctx-pause Pause context hooks for this session user-invocable /ctx-resume Resume context hooks after a pause user-invocable","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#session-lifecycle","level":2,"title":"Session Lifecycle","text":"
Skills for starting, running, and ending a productive session.
Session Ceremonies
Two skills in this group are ceremony skills: /ctx-remember (session start) and /ctx-wrap-up (session end). Unlike other skills that work conversationally, these should be invoked as explicit slash commands for completeness. See Session Ceremonies.
Commit code with integrated context persistence: pre-commit checks, staged files, Co-Authored-By trailer, and a post-commit prompt to capture decisions and learnings.
Wraps: git add, git commit, optionally chains to /ctx-add-decision and /ctx-add-learning
End-of-session context persistence ceremony. Gathers signal from git diff, recent commits, and conversation themes. Proposes candidates (learnings, decisions, conventions, tasks) with complete structured fields for user approval, then persists via ctx add. Offers /ctx-commit if uncommitted changes remain. Ceremony skill: invoke explicitly at session end.
Record a project-specific gotcha, bug, or unexpected behavior. Filters for insights that are searchable, project-specific, and required real effort to discover.
Full journal pipeline: imports unimported sessions first, then batch-enriches all unenriched entries. Filters out short sessions and continuations. Can spawn subagents for large backlogs.
Generate a blog post draft from recent project activity: git history, decisions, learnings, tasks, and journal entries. Requires a narrative arc (problem, approach, outcome).
Consolidate redundant entries in LEARNINGS.md or DECISIONS.md. Groups overlapping entries by keyword similarity, presents candidates, and (with user approval) merges groups into denser combined entries. Originals are archived, not deleted.
Detect and fix context drift: stale paths, missing files, file age staleness, task accumulation, entry count warnings, and constitution violations via ctx drift. Also detects skill drift against canonical templates.
Analyze recent prompting patterns to identify vague or ineffective prompts. Reviews 3-5 journal entries and suggests rewrites with positive observations.
Troubleshoot ctx behavior. Runs structural health checks via ctx doctor, analyzes event log patterns via ctx system events, and presents findings with suggested actions. The CLI provides the structural baseline; the agent adds semantic analysis of event patterns and correlations.
Wraps: ctx doctor --json, ctx system events --json --last 100, ctx remind list, ctx system message list, reads .ctxrc
Graceful degradation: If event_log is not enabled, the skill still works but with reduced capability. It runs structural checks and notes: \"Enable event_log: true in .ctxrc for hook-level diagnostics.\"
See also: Troubleshooting, ctx doctor CLI, ctx system events CLI
Scan all markdown files under docs/ for broken links. Three passes: internal links (verify file targets exist on disk), external links (HTTP HEAD with timeout, report failures as warnings), and image references. Resolves relative paths, strips anchors before checking, and skips localhost/example URLs.
Wraps: Glob + Grep to scan, curl for external checks
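The internal-link pass can be sketched in a few lines of shell. The docs/ layout and file names below are invented for the demo, not taken from the skill itself:

```shell
# Minimal sketch of the internal-link pass: extract [text](target) links,
# strip anchors, and report targets that do not exist on disk.
mkdir -p /tmp/docs && cd /tmp/docs
printf '[ok](a.md)\n[bad](missing.md#sec)\n' > index.md
touch a.md
grep -oE '\]\([^)#]+' *.md \
  | sed 's/.*](//' \
  | while read -r target; do
      [ -f "$target" ] || echo "broken: $target"
    done
# → broken: missing.md
```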
Audit .claude/settings.local.json for dangerous permissions across four risk categories: hook bypass (Critical), destructive commands (High), config injection vectors (High), and overly broad patterns (Medium). Reports findings by severity and offers specific fix actions with user confirmation.
Wraps: reads .claude/settings.local.json, edits with confirmation
Transform raw ideas into clear, validated designs through structured dialogue before any implementation begins. Follows a gated process: understand context, clarify the idea (one question at a time), surface non-functional requirements, lock understanding with user confirmation, explore 2-3 design approaches with trade-offs, stress-test the chosen approach, and present the detailed design.
Wraps: reads DECISIONS.md, relevant source files; chains to /ctx-add-decision for recording design choices
Trigger phrases: \"let's brainstorm\", \"design this\", \"think through\", \"before we build\", \"what approach should we take?\"
Scaffold a feature spec from the project template and walk through each section with the user. Covers: problem, approach, happy path, edge cases, validation rules, error handling, interface, implementation, configuration, testing, and non-goals. Spends extra time on edge cases and error handling.
Wraps: reads specs/tpl/spec-template.md, writes to specs/, optionally chains to /ctx-add-task
Trigger phrases: \"spec this out\", \"write a spec\", \"create a spec\", \"design document\"
Import Claude Code plan files (~/.claude/plans/*.md) into the project's specs/ directory. Lists plans with dates and H1 titles, supports filtering (--today, --since, --all), slugifies headings for filenames, and optionally creates tasks referencing each imported spec.
Wraps: reads ~/.claude/plans/*.md, writes to specs/, optionally chains to /ctx-add-task
See also: Importing Claude Code Plans, Tracking Work Across Sessions
Execute a multi-step plan with build and test verification at each step. Loads a plan from a file or conversation context, breaks it into atomic steps, and checkpoints after every 3-5 steps.
Wraps: reads plan file, runs verification commands (go build, go test, etc.)
Generate a ready-to-run shell script for autonomous AI iteration. Supports Claude Code, Aider, and generic tool templates with configurable completion signals.
Manage git worktrees for parallel agent development. Create sibling worktrees on dedicated branches, analyze task blast radius for grouping, and tear down with merge.
Build and maintain architecture maps incrementally. Creates or refreshes ARCHITECTURE.md (succinct project map, loaded at session start) and DETAILED_DESIGN.md (deep per-module reference, consulted on-demand). Coverage is tracked in map-tracking.json so each run extends the map rather than re-analyzing everything.
Manage session-scoped reminders via natural language. Translates user intent (\"remind me to refactor swagger\") into the corresponding ctx remind command. Handles date conversion for --after flags.
Audit one or more skills against Anthropic prompting best practices. Checks seven audit dimensions: positive framing, motivation, phantom references, examples, subagent guards, scope, and descriptions. Reports findings by severity with concrete fix suggestions.
Wraps: reads internal/assets/claude/skills/*/SKILL.md or .claude/skills/*/SKILL.md, references anthropic-best-practices.md
Trigger phrases: \"audit this skill\", \"check skill quality\", \"review the skills\", \"are our skills any good?\"
Create, improve, and test skills. Guides the full lifecycle: capture intent, interview for edge cases, draft the SKILL.md, test with realistic prompts, review results with the user, and iterate. Applies core principles: the agent is already smart (only add what it does not know), the description is the trigger (make it specific and \"pushy\"), and explain the why rather than issuing rigid directives.
Wraps: reads/writes .claude/skills/ and internal/assets/claude/skills/
Trigger phrases: \"create a skill\", \"turn this into a skill\", \"make a slash command\", \"this should be a skill\", \"improve this skill\", \"the skill isn't triggering\"
Pause all context nudge and reminder hooks for the current session. Security hooks still fire. Use for quick investigations or tasks that don't need ceremony overhead.
The ctx plugin ships the skills listed above. Teams can add their own project-specific skills to .claude/skills/ in the project root: These are separate from plugin-shipped skills and are scoped to the project.
Project-specific skills follow the same format and are invoked the same way.
MCP server for tool-agnostic AI integration. Memory bridge connecting Claude Code auto-memory to .context/. Complete CLI restructuring into cmd/ + core/ taxonomy. All user-facing strings externalized to YAML. fatih/color removed; two direct dependencies remain.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v060-the-integration-release","level":3,"title":"v0.6.0: The Integration Release","text":"
Plugin architecture: hooks and skills converted from shell scripts to Go subcommands, shipped as a Claude Code marketplace plugin. Multi-tool hook generation for Cursor, Aider, Copilot, and Windsurf. Webhook notifications with encrypted URL storage.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v030-the-discipline-release","level":3,"title":"v0.3.0: The Discipline Release","text":"
Journal static site generation via zensical. 49-skill audit and fix pass (positive framing, phantom reference removal, scope tightening). Context consolidation skill. golangci-lint v2 migration.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v020-the-archaeology-release","level":3,"title":"v0.2.0: The Archaeology Release","text":"
Session journal system: ctx journal import converts Claude Code JSONL transcripts to browsable Markdown. Constants refactor with semantic prefixes (Dir*, File*, Filename*). CRLF handling for Windows compatibility.
Trust model, vulnerability reporting, permission hygiene, and security design principles.
","path":["Security"],"tags":[]},{"location":"security/agent-security/","level":1,"title":"Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#defense-in-depth-securing-ai-agents","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-problem","level":2,"title":"The Problem","text":"
An unattended AI agent with unrestricted access to your machine is an unattended shell with unrestricted access to your machine.
This is not a theoretical concern. AI coding agents execute shell commands, write files, make network requests, and modify project configuration. When running autonomously (overnight, in a loop, without a human watching), the attack surface is the full capability set of the operating system user account.
The risk is not that the AI is malicious. The risk is that the AI is controllable: it follows instructions from context, and context can be poisoned.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#threat-model","level":2,"title":"Threat Model","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#how-agents-get-compromised","level":3,"title":"How Agents Get Compromised","text":"
AI agents follow instructions from multiple sources: system prompts, project files, conversation history, and tool outputs. An attacker who can inject content into any of these sources can redirect the agent's behavior.
| Vector | How it works |
| --- | --- |
| Prompt injection via dependencies | A malicious package includes instructions in its README, changelog, or error output. The agent reads these during installation or debugging and follows them. |
| Prompt injection via fetched content | The agent fetches a URL (documentation, API response, Stack Overflow answer) containing embedded instructions. |
| Poisoned project files | A contributor adds adversarial instructions to CLAUDE.md, .cursorrules, or .context/ files. The agent loads these at session start. |
| Self-modification between iterations | In an autonomous loop, the agent modifies its own configuration files. The next iteration loads the modified config with no human review. |
| Tool output injection | A command's output (error messages, log lines, file contents) contains instructions the agent interprets and follows. |
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#what-can-a-compromised-agent-do","level":3,"title":"What Can a Compromised Agent Do","text":"
What a compromised agent can do depends entirely on what permissions and access it has:
| Access level | Potential impact |
| --- | --- |
| Unrestricted shell | Execute any command, install software, modify system files |
| Network access | Exfiltrate source code, credentials, or context files to external servers |
| Docker socket | Escape container isolation by spawning privileged sibling containers |
| SSH keys | Pivot to other machines, push to remote repositories, access production systems |
| Write access to own config | Disable its own guardrails for the next iteration |
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-defense-layers","level":2,"title":"The Defense Layers","text":"
No single layer is sufficient. Each layer catches what the others miss.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
Markdown files like CONSTITUTION.md and the Agent Playbook tell the agent what to do and what not to do. These are probabilistic: the agent usually follows them, but there is no enforcement mechanism.
What it catches: Most common mistakes. An agent that has been told \"never delete production data\" will usually not delete production data.
What it misses: Prompt injection. A sufficiently crafted injection can override soft instructions. Long context windows dilute attention on rules stated early. Edge cases where instructions are ambiguous.
Verdict: Necessary but not sufficient. Good for the common case. Do not rely on it for security boundaries.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
AI tool runtimes (Claude Code, Cursor, etc.) provide permission systems: tool allowlists, command restrictions, confirmation prompts.
For Claude Code, ctx init writes both an allowlist and an explicit deny list into .claude/settings.local.json. The golden images live in internal/assets/permissions/:
Allowlist (allow.txt): only these tools run without confirmation:
Bash(ctx:*)\nSkill(ctx-add-convention)\nSkill(ctx-add-decision)\n... # all bundled ctx-* skills\n
Deny list (deny.txt): these are blocked even if the agent requests them:
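An illustrative excerpt (the shipped deny.txt may contain more entries; these three cover the operations called out in this section):

```
Bash(sudo:*)
Bash(curl:*)
Bash(wget:*)
```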
What it catches: The agent cannot run commands outside the allowlist, and the deny list blocks dangerous operations even if a future allowlist change were to widen access. If rm, curl, sudo, or docker are not allowed and sudo/curl/wget are explicitly denied, the agent cannot invoke them regardless of what any prompt says.
What it misses: The agent can modify the allowlist itself. In an autonomous loop, if the agent writes to .claude/settings.local.json, and the next iteration loads the modified config, then the protection is effectively lost. The application enforces the rules, but the application reads the rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention (Layer 3).
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-3-os-level-isolation-deterministic-and-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Deterministic and Unbypassable)","text":"
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
| Control | Purpose |
| --- | --- |
| Dedicated user account | No sudo, no privileged group membership (docker, wheel, adm). The agent cannot escalate privileges. |
| Filesystem permissions | Project directory writable; everything else read-only or inaccessible. Agent cannot reach other projects, home directories, or system config. |
| Immutable config files | CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md owned by a different user or marked immutable (chattr +i on Linux). The agent cannot modify its own guardrails. |
What it catches: Privilege escalation, self-modification, lateral movement to other projects or users.
What it misses: Actions within the agent's legitimate scope. If the agent has write access to source code (which it needs to do its job), it can introduce vulnerabilities in the code itself.
Verdict: Essential. This is the layer that makes the other layers trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
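A minimal sketch of the immutable-config control. The path is invented; chmod is shown so the snippet runs without root, while chattr +i is the stronger Linux control the table refers to:

```shell
# Lock a guardrail file before an autonomous run (illustrative path).
mkdir -p /tmp/agent-home && cd /tmp/agent-home
echo "Never delete tests." > CLAUDE.md
chmod 0444 CLAUDE.md     # read-only for everyone, including the agent's user
# Stronger, root-only Linux variant that even the file's owner cannot undo:
#   sudo chattr +i CLAUDE.md     (revert with: sudo chattr -i CLAUDE.md)
```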
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data. It also cannot ingest new instructions mid-loop from external documents, API responses, or hostile content.
| Scenario | Recommended control |
| --- | --- |
| Agent does not need the internet | --network=none (container) or outbound firewall drop-all |
| Agent needs to fetch dependencies | Allow specific registries (npmjs.com, proxy.golang.org, pypi.org) via firewall rules. Block everything else. |
| Agent needs API access | Allow specific API endpoints only. Use an HTTP proxy with allowlisting. |
What it catches: Data exfiltration, phone-home payloads, downloading additional tools, and instruction injection via fetched content.
What it misses: Nothing, if the agent genuinely does not need the network. The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine (or something that behaves like one).
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
Critical: never mount the Docker socket (/var/run/docker.sock).
An agent with socket access can spawn sibling containers with full host access, effectively escaping the sandbox.
Use rootless Docker or Podman to eliminate this escalation path.
Virtual machines: The strongest isolation. The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
Resource limits: CPU, memory, and disk quotas prevent a runaway agent from consuming all resources. Use ulimit, cgroup limits, or container resource constraints.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A defense-in-depth setup for overnight autonomous runs:
| Layer | Implementation | Stops |
| --- | --- | --- |
| Soft instructions | CONSTITUTION.md with \"never delete tests\", \"always run tests before committing\" | Common mistakes (probabilistic) |
| Application allowlist | .claude/settings.local.json with explicit tool permissions | Unauthorized commands (deterministic within runtime) |
| Immutable config | chattr +i on CLAUDE.md, .claude/, CONSTITUTION.md | Self-modification between iterations |
| Unprivileged user | Dedicated user, no sudo, no docker group | Privilege escalation |
| Container | --cap-drop=ALL --network=none, rootless, no socket mount | Host escape, network exfiltration |
| Resource limits | --memory=4g --cpus=2, disk quotas | Resource exhaustion |
Each layer is straightforward: The strength is in the combination.
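As a command-line sketch, the container and resource tiers collapse to a single invocation (the image name agent-img and the mount path are assumptions, not part of ctx):

```
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network=none \
  --memory=4g --cpus=2 \
  -v "$PWD:/workspace" -w /workspace \
  agent-img
```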
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#common-mistakes","level":2,"title":"Common Mistakes","text":"
\"I'll just use --dangerously-skip-permissions\": This disables Layer 2 entirely. Without Layers 3-5, you have no protection at all. Only use this flag inside a properly isolated container or VM.
\"The agent is sandboxed in Docker\": A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"CONSTITUTION.md says not to do that\": Markdown is a suggestion. It works most of the time. It is not a security boundary. Do not use it as one.
\"I reviewed the CLAUDE.md, it's fine\": The agent can modify CLAUDE.md during iteration N. Iteration N+1 loads the modified version. Unless the file is immutable, your review is stale.
\"The agent only has access to this one project\": Does the project directory contain .env files, SSH keys, API tokens, or credentials? Does it have a .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-security-considerations","level":2,"title":"Team Security Considerations","text":"
When multiple developers share a .context/ directory, security considerations extend beyond single-agent hardening.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#code-review-for-context-files","level":3,"title":"Code Review for Context Files","text":"
Treat .context/ changes like code changes. Context files influence agent behavior (a modified CONSTITUTION.md or CONVENTIONS.md changes what every agent on the team will do next session). Review them in PRs with the same scrutiny you apply to production code.
Watch for: new decisions that contradict existing ones without acknowledging it
Learnings that encode incorrect assumptions
Task additions that bypass the team's prioritization process
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#gitignore-patterns","level":3,"title":"Gitignore Patterns","text":"
ctx init configures .gitignore automatically, but verify these patterns are in place:
Team decision: scratchpad.enc (encrypted, safe to commit for shared scratchpad state); add personal scratchpads to .gitignore instead
Never committed: .env, credentials, API keys (enforced by drift secret detection)
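A sketch of the resulting .gitignore entries (exact patterns may differ by ctx version; sessions/, journal/, and state/ are the paths named elsewhere in these docs):

```
# Personal / ephemeral context — never committed
.context/sessions/
.context/journal/
.context/state/
.env
```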
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#multi-developer-context-sharing","level":3,"title":"Multi-Developer Context Sharing","text":"
CONSTITUTION.md is the shared contract. All team members and their agents inherit it. Changes require team consensus, not unilateral edits.
When multiple agents write to the same context files concurrently (e.g., two developers adding learnings simultaneously), git merge conflicts are expected. Resolution is typically additive: accept both additions. Destructive resolution (dropping one side) loses context.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-conventions-for-context-management","level":3,"title":"Team Conventions for Context Management","text":"
Establish and document:
Who reviews context changes: Same reviewers as code, or a designated context owner?
How to resolve conflicting decisions: If two sessions record contradictory decisions, which wins? Default: the later one must explicitly supersede the earlier one with rationale.
Frequency of context maintenance: Weekly ctx drift checks, monthly consolidation passes, archival after each milestone.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#checklist","level":2,"title":"Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible to the agent
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
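Several of these items can be checked mechanically before launch. The sketch below invents a demo project under /tmp; the file paths and deny entries mirror the ones discussed above but are not the shipped defaults:

```shell
# Pre-flight sketch: fail loudly if obvious guardrails are missing.
proj=/tmp/demo-project
mkdir -p "$proj/.claude"
printf '{"permissions":{"deny":["Bash(sudo:*)","Bash(curl:*)"]}}\n' \
  > "$proj/.claude/settings.local.json"

fail=0
# Deny list still contains the sudo rule
grep -q 'Bash(sudo:\*)' "$proj/.claude/settings.local.json" \
  || { echo "FAIL: sudo not denied"; fail=1; }
# No secrets file inside the project directory
[ ! -e "$proj/.env" ] || { echo "FAIL: .env present"; fail=1; }
# Current user is not in the docker group
if id -nG | grep -qw docker; then echo "FAIL: user in docker group"; fail=1; fi
[ "$fail" -eq 0 ] && echo "pre-flight OK"
```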
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#further-reading","level":2,"title":"Further Reading","text":"
Running an Unattended AI Agent: the ctx recipe for autonomous loops, including step-by-step permissions and isolation setup
Security: ctx's own trust model and vulnerability reporting
Autonomous Loops: full documentation of the loop pattern, prompt templates, and troubleshooting
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/reporting/","level":1,"title":"Security Policy","text":"","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#reporting-vulnerabilities","level":2,"title":"Reporting Vulnerabilities","text":"
At ctx we take security very seriously.
If you discover a security vulnerability in ctx, please report it responsibly.
Do NOT open a public issue for security vulnerabilities.
If your report contains sensitive details (proof-of-concept exploits, credentials, or internal system information), you can encrypt your message with our PGP key:
In-repo: SECURITY_KEY.asc
Keybase: keybase.io/alekhinejose
# Import the key\ngpg --import SECURITY_KEY.asc\n\n# Encrypt your report\ngpg --armor --encrypt --recipient security@ctx.ist report.txt\n
Encryption is optional. Unencrypted reports to security@ctx.ist or via GitHub Private Reporting are perfectly fine.
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#what-to-include","level":3,"title":"What to Include","text":"
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities (unless they prefer to remain anonymous).
ctx is a volunteer-maintained open source project.
The timelines below are guidelines, not guarantees, and depend on contributor availability.
We will address security reports on a best-effort basis and prioritize them by severity.
| Stage | Timeframe |
| --- | --- |
| Acknowledgment | Within 48 hours |
| Initial assessment | Within 7 days |
| Resolution target | Within 30 days (depending on severity) |
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#trust-model","level":2,"title":"Trust Model","text":"
ctx operates within a single trust boundary: the local filesystem.
The person who authors .context/ files is the same person who runs the agent that reads them. There is no remote input, no shared state, and no server component.
This means:
ctx does not sanitize context files for prompt injection. This is a deliberate design choice, not an oversight. The files are authored by the developer who owns the machine: Sanitizing their own instructions back to them would be counterproductive.
If you place adversarial instructions in your own .context/ files, your agent will follow them. This is expected behavior. You control the context; the agent trusts it.
Shared Repositories
In shared repositories, .context/ files should be reviewed in code review (the same way you would review CI/CD config or Makefiles). A malicious contributor could add harmful instructions to CONSTITUTION.md or TASKS.md.
No secrets in context: The constitution explicitly forbids storing secrets, tokens, API keys, or credentials in .context/ files
Local only: ctx runs entirely locally with no external network calls
No code execution: ctx reads and writes Markdown files only; it does not execute arbitrary code
Git-tracked: Core context files are meant to be committed, so they should never contain sensitive data. Exception: sessions/ and journal/ contain raw conversation data and should be gitignored
Claude Code evaluates permissions in deny → ask → allow order. ctx init automatically populates permissions.deny with rules that block dangerous operations before the allow list is ever consulted.
Hook state files (throttle markers, prompt counters, pause markers) are stored in .context/state/, which is project-scoped and gitignored. State files are automatically managed by the hooks that create them; no manual cleanup is needed.
Review before committing: Always review .context/ files before committing
Use .gitignore: If you must store sensitive notes locally, add them to .gitignore
Drift detection: Run ctx drift to check for potential issues
Permission audit: Review .claude/settings.local.json after busy sessions
","path":["Security","Security Policy"],"tags":[]},{"location":"thesis/","level":1,"title":"Context as State","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#a-persistence-layer-for-human-ai-cognition","level":2,"title":"A Persistence Layer for Human-AI Cognition","text":"
As AI tools evolve from code-completion utilities into reasoning collaborators, the knowledge that governs their behavior becomes as important as the code they produce; yet that knowledge is routinely discarded at the end of every session.
AI-assisted development systems assemble context at prompt time using heuristic retrieval from mutable sources: recent files, semantic search results, session history. These approaches optimize relevance at the moment of generation but do not persist the cognitive state that produced decisions. Reasoning is not reproducible, intent is lost across sessions, and teams cannot audit the knowledge that constrains automated behavior.
This paper argues that context should be treated as deterministic, version-controlled state rather than as a transient query result. We ground this argument in three sources of evidence: a landscape analysis of 17 systems spanning AI coding assistants, agent frameworks, and knowledge stores; a taxonomy of five primitive categories that reveals irrecoverable architectural trade-offs; and an experience report from ctx, a persistence layer for AI-assisted development, which developed itself using its own persistence model across 389 sessions over 33 days. We define a three-tier model for cognitive state: authoritative knowledge, delivery views, and ephemeral state. Then we present six design invariants empirically validated by 56 independent rejection decisions observed across the analyzed landscape. We show that context determinism applies to assembly, not to model output, and that the curation cost this model requires is offset by compounding returns in reproducibility, auditability, and team cognition.
The introduction of large language models into software development has shifted the primary interface from code execution to interactive reasoning. In this environment, the correctness of an output depends not only on source code but on the context supplied to the model: the conventions, decisions, architectural constraints, and domain knowledge that bound the space of acceptable responses.
Current systems treat context as a query result assembled at the moment of interaction. A developer begins a session; the tool retrieves what it estimates to be relevant from chat history, recent files, and vector stores; the model generates output conditioned on this transient assembly; the session ends, and the context evaporates. The next session begins the cycle again.
This model has improved substantially over the past year. CLAUDE.md files, Cursor rules, Copilot's memory system, and tools such as Mem0, Letta, and Kindex each address aspects of the persistence problem. Yet across 17 systems we analyzed spanning AI coding assistants, agent frameworks, autonomous coding agents, and purpose-built knowledge stores, no system provides all five of the following properties simultaneously: deterministic context assembly, human-readable file-based persistence, token-budgeted delivery, zero runtime dependencies, and local-first operation.
This paper does not propose a universal replacement for retrieval-centric workflows. It defines a persistence layer (embodied in ctx (https://ctx.ist)) whose advantages emerge under specific operational conditions: when reproducibility is a requirement, when knowledge must outlive sessions and individuals, when teams require shared cognitive authority, or when offline operation is necessary.
The trade-offs (manual curation cost, reduced automatic recall, coarser granularity) are intentional and mirror the trade-offs accepted by systems that favor reproducibility over convenience, such as reproducible builds and immutable infrastructure [16].
The contribution is threefold: a three-tier model for cognitive state that resolves the ambiguity between authoritative knowledge and ephemeral session artifacts; six design invariants empirically grounded in a cross-system landscape analysis; and an experience report demonstrating that the model produces compounding returns when applied to its own development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#2-the-limits-of-prompt-time-context","level":2,"title":"2. The Limits of Prompt-Time Context","text":"
Prompt-time assembly pipelines typically consist of corpus selection, retrieval, ranking, and truncation. These pipelines are probabilistic and time-dependent, producing three failure modes that compound over the lifetime of a project.
If context is derived from mutable sources using heuristic ranking, identical requests at different times receive different inputs. A developer who asks \"What is our authentication strategy?\" on Tuesday may receive a different context window than the same question on Thursday: Not because the strategy changed, but because the retrieval heuristic surfaced different fragments.
Reproducibility (the ability to reconstruct the exact inputs that produced a given output) is a foundational property of reliable systems. Its loss in AI-assisted development mirrors the historical evolution from ad-hoc builds to deterministic build systems [12]. The build community learned that when outputs depend on implicit state (environment variables, system clocks, network-fetched dependencies), debugging becomes archaeology. The same principle applies when AI outputs depend on non-deterministic context retrieval.
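The contrast is mechanical. With file-based context, assembly is a pure function of versioned bytes; the sketch below (file contents invented for the demo) shows that repeated assembly yields an identical context window:

```shell
# Deterministic assembly: fixed sources, fixed order, identical bytes.
mkdir -p /tmp/ctx-demo && cd /tmp/ctx-demo
printf 'Never delete tests.\n'  > CONSTITUTION.md
printf 'Use PostgreSQL.\n'      > DECISIONS.md
assemble() { cat CONSTITUTION.md DECISIONS.md; }
h1=$(assemble | cksum)
h2=$(assemble | cksum)
[ "$h1" = "$h2" ] && echo "same question, same context window"
# → same question, same context window
```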
Embedding-based memory increases recall but reduces inspectability. When a vector store determines that a code snippet is \"similar\" to the current query, the ranking function is opaque: the developer cannot inspect why that snippet was chosen, whether a more relevant artifact was excluded, or whether the ranking will remain stable. This prevents deterministic debugging, policy auditing, and causal attribution (properties that information retrieval theory identifies as fundamental trade-offs of probabilistic ranking) [3].
In practice, this opacity manifests as a compliance ceiling. In our experience developing a context management system (detailed in Section 7), soft instructions (directives that ask an AI agent to read specific files or follow specific procedures) achieve approximately 75-85% compliance. The remaining 15-25% represents cases where the agent exercises judgment about whether the instruction applies, effectively applying a second ranking function on top of the explicit directive. When 100% compliance is required, instruction is insufficient; the content must be injected directly, removing the agent's option to skip it.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#23-loss-of-intent","level":3,"title":"2.3 Loss of Intent","text":"
Session transcripts record interaction but not cognition. A transcript captures what was said but not which assumptions were accepted, which alternatives were rejected, or which constraints governed the decision. The distinction matters: a decision to use PostgreSQL recorded as a one-line note ("Use PostgreSQL") teaches a model what was decided; a structured record with context, rationale, and consequences teaches it why (and why is what prevents the model from unknowingly reversing the decision in a future session) 4.
Session transcripts provide history. Cognitive state requires something more: the persistent, structured representation of the knowledge required for correct decision-making.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#3-cognitive-state-a-three-tier-model","level":2,"title":"3. Cognitive State: A Three-Tier Model","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#31-definitions","level":3,"title":"3.1 Definitions","text":"
We define cognitive state as the authoritative, persistent representation of the knowledge required for correct decision-making within a project. It is human-authored or human-ratified, versioned, inspectable, and reproducible. It is distinct from logs, transcripts, retrieval results, and model-generated summaries.
Previous formulations of this idea have treated cognitive state as a monolithic concept. In practice, a three-tier model better captures the operational reality:
Tier 1: Authoritative State: The canonical knowledge that the system treats as ground truth. In a concrete implementation, this corresponds to a set of human-curated files with defined schemas: a constitution (inviolable rules), conventions (code patterns), an architecture document (system structure), decision records (choices with rationale), learnings (captured experience), a task list (current work), a glossary (domain terminology), and an agent playbook (operating instructions). Each file has a single purpose, a defined lifecycle, and a distinct update frequency. Authoritative state is version-controlled alongside code and reviewed through the same mechanisms (diffs, pull requests, blame annotations).
Tier 2: Delivery Views: Derived representations of authoritative state, assembled for consumption by a model. A delivery view is produced by a deterministic assembly function that takes the authoritative state, a token budget, and an inclusion policy as inputs and produces a context window as output. The same authoritative state, budget, and policy must always produce the same delivery view. Delivery views are ephemeral (they exist only for the duration of a session), but their construction is reproducible.
Tier 3: Ephemeral State: Session transcripts, scratchpad notes, draft journal entries, and other artifacts that exist during or immediately after a session but are not authoritative. Ephemeral state is the raw material from which authoritative state may be extracted through human review, but it is never consumed directly by the assembly function.
This three-tier model resolves confusion present in earlier formulations: the claim that AI output is a deterministic function of the repository state. The corrected claim is that context selection is deterministic (the delivery view is a function of authoritative state), but model output remains stochastic, conditioned on the deterministic context. Formally:
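A minimal formalization, using the `assemble` and `model` names that appear in the surrounding text (the notation is illustrative):

```
delivery_view = assemble(authoritative_state, budget, policy)   # deterministic
output        ~ model(delivery_view)                            # stochastic
```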
The persistence layer's contribution is making `assemble` reproducible, not making `model` deterministic.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#32-separation-of-concerns","level":3,"title":"3.2 Separation of Concerns","text":"
The decision to separate authoritative state into distinct files with distinct purposes is not cosmetic. Different types of knowledge have different lifecycles:
| Knowledge Type | Update Frequency | Read Frequency | Load Priority | Example |
|---|---|---|---|---|
| Constitution | Rarely | Every session | Always | "Never commit secrets to git" |
| Tasks | Every session | Session start | Always | "Implement token budget CLI flag" |
| Conventions | Weekly | Before coding | High | "All errors use structured logging with severity levels" |
| Decisions | When decided | When questioning | Medium | "Use PostgreSQL over MySQL (see ADR-003)" |
| Learnings | When learned | When stuck | Medium | "Hook scripts >50ms degrade interactive UX" |
| Architecture | When changed | When designing | On demand | "Three-layer pipeline: ingest → enrich → assemble" |
| Journal | Every session | Rarely | Never auto | "Session 247: Removed dead-end session copy layer" |
A monolithic context file would force the assembly function to load everything or nothing. Separation enables progressive disclosure: the minimum context that matters for the current moment, with the option to load more when needed. A normal session loads the constitution, tasks, and conventions; a deep investigation loads decision history and journal entries from specific dates.
The budget mechanism is the constraint that makes separation valuable. Without a budget, the default behavior is to load everything, which destroys the attention density that makes loaded context useful. With a budget, the assembly function must prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings (scored by recency). Entries that do not fit receive title-only summaries rather than being silently dropped (an application of the "tell me what you don't know" pattern identified independently by four systems in our landscape analysis).
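The priority scheme can be sketched as a deterministic assembly function. This is an illustrative sketch, not the actual ctx implementation: the file names, the priority list, and the ~4-characters-per-token estimate are assumptions.

```python
# Hypothetical sketch of budget-capped, priority-ordered assembly.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic: ~4 characters per token

def assemble(files: dict[str, str], budget: int) -> str:
    """Deterministically build a delivery view from authoritative files."""
    priority = ["CONSTITUTION.md", "TASKS.md", "CONVENTIONS.md",
                "DECISIONS.md", "LEARNINGS.md"]
    parts, used = [], 0
    for name in priority:
        content = files.get(name, "")
        cost = estimate_tokens(content)
        if name == "CONSTITUTION.md" or used + cost <= budget:
            parts.append(content)  # constitution is always loaded in full
            used += cost
        else:
            # Over budget: emit a title-only summary instead of silently
            # dropping the entry ("tell me what you don't know").
            title = content.splitlines()[0] if content else name
            parts.append(f"[omitted, over budget] {title}")
            used += estimate_tokens(parts[-1])
    return "\n\n".join(parts)
```

Because the function reads nothing but its arguments (no clock, no similarity scores), the same files and the same budget always yield the same view.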
The following six invariants define the constraints that a cognitive state persistence layer must satisfy. They are not axioms chosen a priori; they are empirically grounded properties whose violation was independently identified as producing complexity costs across the 17 systems we analyzed.
Context files must be human-readable, git-diffable, and editable with any text editor. No database. No binary storage.
Validation: 11 independent rejection decisions across the analyzed landscape protected this property. Systems that adopted embedded records, binary serialization, or knowledge graphs as their core primitive consistently traded away the ability for a developer to run cat DECISIONS.md and understand the system's knowledge. The inspection cost of opaque storage compounds over the lifetime of a project: every debugging session, every audit, every onboarding conversation requires specialized tooling to access knowledge that could have been a text file.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-2-zero-runtime-dependencies","level":3,"title":"Invariant 2: Zero Runtime Dependencies","text":"
The tool must work with no installed runtimes, no running services, and no API keys for core functionality.
Validation: 13 independent rejection decisions protected this property (the most frequently defended invariant). Systems that required databases (PostgreSQL, SQLite, Redis), embedding models, server daemons, container runtimes, or cloud APIs for core operation introduced failure modes proportional to their dependency count. A persistence layer that depends on infrastructure is not a persistence layer; it is a service. Services have uptime requirements, version compatibility matrices, and operational costs that simple file operations do not.
The same files plus the same budget must produce the same output. No embedding-based retrieval, no LLM-driven selection, no wall-clock-dependent scoring in the assembly path.
Validation: 6 independent rejection decisions protected this property. Non-deterministic assembly (whether from embedding variance, LLM-based selection, or time-dependent scoring) destroys the ability to reproduce a context window and therefore to diagnose why a model produced a given output. Determinism in the assembly path is what makes the persistence layer auditable.
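One way to make this auditability concrete is to fingerprint the assembled view: if assembly is deterministic, the same files and budget always hash to the same context window. A hypothetical sketch; the naive sorted-order, character-capped policy is an assumption for illustration, not a description of any particular tool.

```python
# Reproducibility as a checkable equality: deterministic by construction,
# with fixed (sorted) order, a hard character cap, and no wall-clock or
# similarity scoring anywhere in the path.
import hashlib

def assemble_view(files: dict[str, str], budget_chars: int) -> str:
    out, remaining = [], budget_chars
    for name in sorted(files):
        chunk = files[name][:max(remaining, 0)]
        out.append(f"# {name}\n{chunk}")
        remaining -= len(chunk)
    return "\n".join(out)

def view_fingerprint(files: dict[str, str], budget_chars: int) -> str:
    view = assemble_view(files, budget_chars)
    return hashlib.sha256(view.encode("utf-8")).hexdigest()
```

Storing the fingerprint alongside a session record lets a reviewer later verify that a given output was produced from a given context window.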
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-4-human-authority-over-persistent-state","level":3,"title":"Invariant 4: Human Authority Over Persistent State","text":"
The agent may propose changes to context files but must not unilaterally modify them. All persistent changes go through human-reviewable git commits.
Validation: 6 independent rejection decisions protected this property. Systems that allowed agents to self-modify their memory (writing freeform notes, auto-pruning old entries, generating summaries as ground truth) consistently produced lower-quality persistent context than systems that enforced human review. Structure is a feature, not a limitation: across the landscape, the pattern "structured beats freeform" was independently discovered by four systems that evolved from freeform LLM summaries to typed schemas with required fields.
Core functionality must work offline with no network access. Cloud services may be used for optional features but never for core context management.
Validation: 7 independent rejection decisions protected this property. Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios. A filesystem-native model continues to function under all conditions where the repository is accessible.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-6-no-default-telemetry","level":3,"title":"Invariant 6: No Default Telemetry","text":"
Any analytics, if ever added, must be strictly opt-in.
Validation: 4 independent rejection decisions protected this property. Default telemetry erodes the trust model that a persistence layer depends on. If developers must trust the system with their architectural decisions, operational learnings, and project constraints, the system cannot simultaneously be reporting usage data to external services.
These six invariants collectively define a design space. Each feature proposal can be evaluated against them: a feature that violates any invariant is rejected regardless of how many other systems implement it. The discipline of constraint (refusing to add capabilities that compromise foundational properties) is itself an architectural contribution. Across the 17 analyzed systems, 56 patterns were explicitly rejected for violating these invariants. The rejection count per invariant (11, 13, 6, 6, 7, 4) provides a rough measure of each property's vulnerability to architectural erosion. A representative sample of these rejections is provided in Appendix A.1.
The 17 systems were selected to cover the architectural design space rather than to achieve completeness. Each included system satisfies three criteria: it represents a distinct architectural primitive for AI-assisted development, it is actively maintained or widely referenced, and it provides sufficient public documentation or source code for architectural inspection. The goal was to ensure that every major category of primitive (document, embedded record, state snapshot, event/message, construction/derivation) was represented by multiple systems, enabling cross-system pattern detection.
The resulting set spans six categories: AI coding assistants (Continue, Sourcegraph/Cody, Aider, Claude Code), AI agent frameworks (CrewAI, AutoGen, LangGraph, LlamaIndex, Letta/MemGPT), autonomous coding agents (OpenHands, Sweep), session provenance tools (Entire), data versioning systems (Dolt, Pachyderm), pipeline/build systems (Dagger), and purpose-built knowledge stores (QubicDB, Kindex). Each system was analyzed from its source code and documentation, producing 34 individual analysis artifacts (an architectural profile and a set of insights per system) that yielded 87 adopt/adapt recommendations, 56 explicit rejection decisions, and 52 watch items.
Every system in the AI-assisted development landscape operates on a core primitive: an atomic unit around which the entire architecture revolves. Our analysis of 17 systems reveals five categories of primitives, each making irrecoverable trade-offs:
Group A: Document/File Primitives: Human-readable documents as the primary unit. Documents are authored by humans, version-controlled in git, and consumed by AI tools. The invariant of this group is that the primitive is always human-readable and version-controllable with standard tools. Three systems participate in this pattern: the system described in this paper is a pure expression, while Continue (via its rules directory) and Claude Code (via CLAUDE.md files) are partial participants; both use document-based context as an input but organize around different core primitives.
Group B: Embedded Record Primitives: Vector-embedded records stored with numerical embeddings for similarity search, metadata for filtering, and scoring mechanisms for ranking. Five systems use this approach (LlamaIndex, CrewAI, Letta/MemGPT, QubicDB, Kindex). The invariant is that the primitive requires an embedding model or vector database for core operations: a dependency that precludes offline and air-gapped use.
Group C: State Snapshot Primitives: Point-in-time captures of the complete system state. The invariant is that the complete state can be reconstructed at any historical point. Three systems use this approach (LangGraph, Entire, Dolt).
Group D: Event/Message Primitives: Sequential events or messages forming an append-only log with causal relationships. Four systems use this approach (OpenHands, AutoGen, Claude Code, Sweep). The invariant is temporal ordering and append-only semantics.
Group E: Construction/Derivation Primitives: Derived or constructed values that encode how they were produced. The invariant is that the primitive is a function of its inputs; re-executing the same inputs produces the same primitive. Three systems use this approach (Dagger, Pachyderm, Aider).
The five primitive categories differ along seven dimensions:
| Property | Document | Embedded Record | State Snapshot | Event/Message | Construction |
|---|---|---|---|---|---|
| Human-readable | Yes | No | Varies | Partially | No |
| Version-controllable | Yes | No | Varies | Yes | Yes |
| Queryable by meaning | No | Yes | No | No | No |
| Rewindable | Via git | No | Yes | Yes (replay) | Yes |
| Deterministic | Yes | No | Yes | Yes | Yes |
| Zero-dependency | Yes | No | Varies | Varies | Varies |
| Offline-capable | Yes | No | Varies | Varies | Yes |
The document primitive is the only one that simultaneously satisfies human-readability, version-controllability, determinism, zero dependencies, and offline capability. This is not because documents are superior in general (embedded records provide semantic queryability that documents lack) but because the combination of all five properties is what the persistence layer requires. The choice between primitive categories is not a matter of capability but of which properties are considered invariant.
Across the 17 analyzed systems, six design patterns were independently discovered. These convergent patterns carry extra validation weight because they emerged from different problem spaces:
Pattern 1: "Tell me what you don't know": When context is incomplete, explicitly communicate to the model what information is missing and what confidence level the provided context represents. Four systems independently converged on this pattern: inserting skip markers, tracking evidence gaps, annotating provenance, or naming output quality tiers.
Pattern 2: "Freshness matters": Information relevance decreases over time. Three systems independently chose exponential decay with different half-lives (30 days, 90 days, and LRU ordering). Static priority ordering with no time dimension leaves relevant recent knowledge at the same priority as stale entries. This pattern is in productive tension with the persistence model's emphasis on determinism: the claim is not that time-dependence is irrelevant, but that it belongs in the curation step (a human deciding to consolidate or archive stale entries) rather than in the assembly function (an algorithm silently down-ranking entries based on age).
Pattern 3: "Content-address everything": Compute a hash of content at creation time for deduplication, cache invalidation, integrity verification, and change detection. Five systems independently implement content hashing, each discovering it solves different problems 5.
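A minimal sketch of the pattern; the record shape is invented for illustration and implies no specific system's schema.

```python
# Content addressing: the artifact's identity is a hash of its content,
# computed once at creation time.
import hashlib

def content_id(text: str) -> str:
    # Truncated hex digest keeps IDs short for display; collision risk
    # at 16 hex chars (64 bits) is negligible at project scale.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

def make_record(text: str) -> dict:
    return {"id": content_id(text), "text": text}

a = make_record("Use PostgreSQL over MySQL (see ADR-003)")
b = make_record("Use PostgreSQL over MySQL (see ADR-003)")
c = make_record("Use MySQL")
# Identical content -> identical address (deduplication); any edit ->
# a new address (change detection, cache invalidation).
```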
Pattern 4: "Structured beats freeform": When capturing knowledge or session state, a structured schema with required fields produces more useful data than freeform text. Four systems evolved from freeform summaries to typed schemas: one moving from LLM-generated prose to a structured condenser with explicit fields for completed tasks, pending tasks, and files modified.
Pattern 5: "Protocol convergence": The Model Context Protocol (MCP) is emerging as a standard tool integration layer. Nine of 17 systems support it, spanning every category in the analysis. MCP's significance for the persistence model is that it provides a transport mechanism for context delivery without dictating how context is stored or assembled. This makes the approach compatible with both retrieval-centric and persistence-centric architectures.
Pattern 6: "Human-in-the-loop for memory": Critical memory decisions should involve human judgment. Fully automated memory management produces lower-quality persistent context than human-reviewed systems. Four systems independently converged on variants of this pattern: ceremony-based consolidation, interrupt/resume for human input, confirmation mode for high-risk actions, and separated "think fast" vs. "think slow" processing paths.
Pattern 6 directly validates the ceremony model described in this paper. The persistence layer requires human curation not because automation is impossible, but because the quality of persistent knowledge degrades when the curation step is removed. The improvement opportunity is to make curation easier, not to automate it away.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#6-worked-example-architectural-decision-under-two-models","level":2,"title":"6. Worked Example: Architectural Decision Under Two Models","text":"
We now instantiate the three-tier model in a concrete system (ctx) and illustrate the difference between prompt-time retrieval and cognitive state persistence using a real scenario from its development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#61-the-problem","level":3,"title":"6.1 The Problem","text":"
During development, the system accumulated three overlapping storage layers for session data: raw transcripts (owned by the AI tool), session copies (JSONL copies plus context snapshots), and enriched journal entries (Markdown summaries). The middle layer (session copies) was a dead-end write sink. An auto-save hook copied transcripts to a directory that nothing read from, because the journal pipeline already read directly from the raw transcripts. Approximately 15 source files, a shell hook, 20 configuration constants, and 30 documentation references supported infrastructure with no consumers.
In a retrieval-based system, the decision to remove the middle layer depends on whether the retrieval function surfaces the relevant context:
The developer asks: "Should we simplify the session storage?" The retrieval system must find and rank the original discussion thread where the three layers were designed, the usage statistics showing zero reads from the middle layer, the journal pipeline documentation showing it reads from raw transcripts directly, and the dependency analysis showing 15 files, a hook, and 30 doc references. If any of these fragments are not retrieved (because they are in old chat history, because the embedding similarity score is low, or because the token budget was consumed by more recent but less relevant context), the model may recommend preserving the middle layer, or may not realize it exists.
Six months later, a new team member asks the same question. The retrieval results will differ: the original discussion has aged out of recency scoring, the usage statistics are no longer in recent history, and the model may re-derive the answer or arrive at a different conclusion.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#63-cognitive-state-model","level":3,"title":"6.3 Cognitive State Model","text":"
In the persistence model, the decision is recorded as a structured artifact at write time:
```markdown
## [2026-02-11] Remove .context/sessions/ storage layer

**Status**: Accepted

**Context**: The session/recall/journal system had three overlapping
storage layers. The recall pipeline reads directly from raw transcripts,
making .context/sessions/ a dead-end write sink that nothing reads from.

**Decision**: Remove .context/sessions/ entirely. Two stores remain:
raw transcripts (global, tool-owned) and enriched journal
(project-local).

**Rationale**: Dead-end write sinks waste code surface, maintenance
effort, and user attention. The recall pipeline already proved that
reading directly from raw transcripts is sufficient. Context snapshots
are redundant with git history.

**Consequence**: Deleted internal/cli/session/ (15 files), removed
auto-save hook, removed --auto-save from watch, removed pre-compact
auto-save, removed /ctx-save skill, updated ~45 documentation files.
Four earlier decisions superseded.
```
This artifact is:
Deterministically included in every subsequent session's delivery view (budget permitting, with title-only fallback if budget is exceeded)
Human-readable and reviewable as a diff in the commit that introduced it
Permanent: it persists in version control regardless of retrieval heuristics
Causally linked: it explicitly supersedes four earlier decisions, creating an auditable chain
When the new team member asks "Why don't we store session copies?" six months later, the answer is the same artifact, at the same revision, with the same rationale. The reasoning is reconstructible because it was persisted at write time, not discovered at query time.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#64-the-diff-when-policy-changes","level":3,"title":"6.4 The Diff When Policy Changes","text":"
If a future requirement re-introduces session storage (for example, to support multi-agent session correlation), the change appears as a diff to the decision record:
```diff
- **Status**: Accepted
+ **Status**: Superseded by [2026-08-15] Reintroduce session storage
+ for multi-agent correlation
```
The new decision record references the old one, creating a chain of reasoning visible in git log. In the retrieval model, the old decision would simply be ranked lower over time and eventually forgotten.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#7-experience-report-a-system-that-designed-itself","level":2,"title":"7. Experience Report: A System That Designed Itself","text":"
The persistence model described in this paper was developed and tested by using it on its own development. Over 33 days and 389 sessions, the system's context files accumulated a detailed record of decisions made, reversed, and consolidated: providing quantitative and qualitative evidence for the model's properties.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#71-scale-and-structure","level":3,"title":"7.1 Scale and Structure","text":"
The development produced the following authoritative state artifacts:
8 consolidated decision records covering 24 original decisions spanning context injection architecture, hook design, task management, security, agent autonomy, and webhook systems
18 consolidated learning records covering 75 original observations spanning agent compliance, hook behavior, testing patterns, documentation drift, and tool integration
A constitution with 13 inviolable rules across 4 categories (security, quality, process, context preservation)
389 enriched journal entries providing a complete session-level audit trail
The consolidation ratio (24 decisions compressed to 8 records, 75 learnings compressed to 18) illustrates the curation cost and its return: authoritative state becomes denser and more useful over time as related entries are merged, contradictions are resolved, and superseded decisions are marked.
Three architectural reversals during development provide evidence that the persistence model captures and communicates reasoning effectively:
Reversal 1: The two-tier persistence model: The original design included a middle storage tier for session copies. After 21 days of development, the middle tier was identified as a dead-end write sink (described in Section 6). The decision record captured the full context, and the removal was executed cleanly: 15 source files, a shell hook, and 45 documentation references. The pattern of a "dead-end write sink" was subsequently observed in 7 of 17 systems in our landscape analysis that store raw transcripts alongside structured context.
Reversal 2: The prompt-coach hook: An early design included a hook that analyzed user prompts and offered improvement suggestions. After deployment, the hook produced zero useful tips, its output channel was invisible to users, and it accumulated orphan temporary files. The hook was removed, and the decision record captured the failure mode for future reference.
Reversal 3: The soft-instruction compliance model: The original context injection strategy relied on soft instructions: directives asking the AI agent to read specific files. After measuring compliance across multiple sessions, we found a consistent 75-85% compliance ceiling. The revised strategy injects content directly, bypassing the agent's judgment about whether to comply. The learning record captures the ceiling measurement and the rationale for the architectural change.
Each reversal was captured as a structured decision record with context, rationale, and consequences. In a retrieval-based system, these reversals would exist only in chat history, discoverable only if the retrieval function happens to surface them. In the persistence model, they are permanent, indexable artifacts that inform future decisions.
The 75-85% compliance ceiling for soft instructions is the most operationally significant finding from the experience report. It means that any context management strategy relying on agent compliance with instructions ("read this file," "follow this convention," "check this list") has a hard ceiling on reliability.
The root cause is structural: the instruction "don't apply judgment" is itself evaluated by judgment. When an agent receives a directive to read a file, it first assesses whether the directive is relevant to the current task (and that assessment is the judgment the directive was trying to prevent).
The architectural response maps directly to the formal model defined in Section 3.1. Content requiring 100% compliance is included in `authoritative_state` and injected by the deterministic `assemble` function, bypassing the agent entirely. Content where 80% compliance is acceptable is delivered as instructions within the delivery view. The three-tier architecture makes this distinction explicit: authoritative state is injected; delivery views are assembled deterministically; ephemeral state is available but not pushed.
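The routing rule can be sketched as follows; the `ContextItem` type, the threshold value, and the instruction phrasing are invented for illustration.

```python
# Hypothetical sketch: content that must always be seen is injected into
# the delivery view; content tolerant of the soft-instruction ceiling is
# delivered as an instruction the agent may weigh.
from dataclasses import dataclass

@dataclass
class ContextItem:
    text: str
    required_compliance: float  # fraction of sessions that must honor it

SOFT_CEILING = 0.85  # upper end of the observed 75-85% compliance band

def route(items: list[ContextItem]) -> tuple[list[str], list[str]]:
    injected, instructed = [], []
    for item in items:
        if item.required_compliance > SOFT_CEILING:
            injected.append(item.text)  # bypasses agent judgment entirely
        else:
            instructed.append(f"Please read: {item.text}")
    return injected, instructed
```

The design choice is the same one the section describes: anything above the observed ceiling cannot be trusted to an instruction, so it is placed directly in the context window.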
Over 33 days, we observed a qualitative shift in the development experience. Early sessions (days 1-7) spent significant time re-establishing context: explaining conventions, re-stating constraints, re-deriving past decisions. Later sessions (days 25-33) began with the agent loading curated context and immediately operating within established constraints, because the constraints were in files rather than in chat history.
This compounding effect (where each session's context curation improves all subsequent sessions) is the primary return on the curation investment. The cost is borne once (writing a decision record, capturing a learning, updating the task list); the benefit is collected on every subsequent session load.
The effect is analogous to compound interest in financial systems: the knowledge base grows not linearly with effort but with increasing marginal returns as new knowledge interacts with existing context. A learning captured on day 5 prevents a mistake on day 12, avoiding a debugging session that would have consumed that day's work and freeing the session for productive tasks that generate new learnings. The growth is not literally exponential (it is bounded by project scope and subject to diminishing returns as the knowledge base matures), but within the observed 33-day window, the returns were consistently accelerating.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#75-scope-and-generalizability","level":3,"title":"7.5 Scope and Generalizability","text":"
This experience report is self-referential by design: the system was developed using its own persistence model. This circularity strengthens the internal validity of the findings (the model was stress-tested under authentic conditions) but limits external generalizability. The two-week crossover point was observed on a single project of moderate complexity with a small team already familiar with the model's assumptions. Whether the same crossover holds for larger teams, for codebases with different characteristics, or for teams adopting the model without having designed it remains an open empirical question. The quantitative claims in this section should be read as existence proofs (demonstrating that the model can produce compounding returns) rather than as predictions about specific adoption scenarios.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#8-situating-the-persistence-layer","level":2,"title":"8. Situating the Persistence Layer","text":"
The persistence layer occupies a specific position in the stack of AI-assisted development:
```
Application Logic
AI Interaction / Agents
Context Retrieval Systems
Cognitive State Persistence Layer
Version Control / Storage
```
Current systems innovate primarily in the retrieval layer (improving how context is discovered, ranked, and delivered at query time). The persistence layer sits beneath retrieval and above version control. Its role is to maintain the authoritative state that retrieval systems may query but do not own. The relationship is complementary: retrieval answers "What in the corpus might be relevant?"; cognitive state answers "What must be true for this system to operate correctly?" A mature system uses both: retrieval for discovery, persistence for authority.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#9-applicability-and-trade-offs","level":2,"title":"9. Applicability and Trade-Offs","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#91-when-to-use-this-model","level":3,"title":"9.1 When to Use This Model","text":"
A cognitive state persistence layer is most appropriate when:
Reproducibility is a requirement: If a system must be able to answer "Why did this output occur, and can it be produced again?" then deterministic, version-controlled context becomes necessary. This is relevant in regulated environments, safety-critical systems, long-lived infrastructure, and security-sensitive deployments.
Knowledge must outlive sessions and individuals: Projects with multi-year lifetimes accumulate architectural decisions, domain interpretations, and operational policy. If this knowledge is stored only in chat history, issue trackers, and institutional memory, it decays. The persistence model converts implicit knowledge into branchable, reviewable artifacts.
Teams require shared cognitive authority: In collaborative environments, correctness depends on a stable answer to "What does the system believe to be true?" When this answer is derived from retrieval heuristics, authority shifts to ranking algorithms. When it is versioned and human-readable, authority remains with the team.
Offline or air-gapped operation is required: Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#92-when-not-to-use-this-model","level":3,"title":"9.2 When Not to Use This Model","text":"
Zero-configuration personal workflows: For short-lived or exploratory tasks, the cost of explicit knowledge curation outweighs its benefits. Heuristic retrieval is sufficient when correctness is non-critical, outputs are disposable, and historical reconstruction is unnecessary.
Maximum automatic recall from large corpora: Vector retrieval systems provide superior performance when the primary task is searching vast, weakly structured information spaces. The persistence model assumes that what matters can be decided and that this decision is valuable to record.
Fully autonomous agent architectures: Agent runtimes that generate and discard state continuously, optimizing for local goal completion, do not benefit from a model that centers human ratification of knowledge.
The transition does not require full system replacement. An incremental path:
Step 1: Record decisions as versioned artifacts: Instead of allowing conclusions to remain in discussion threads, persist them in reviewable form with context, rationale, and consequences [4]. This alone converts ephemeral reasoning into the cognitive state.
Step 2: Make inclusion deterministic: Define explicit assembly rules. Retrieval may still exist, but it is no longer authoritative.
Step 3: Move policy into cognitive state: When system behavior depends on stable constraints, encode those constraints as versioned knowledge. Behavior becomes reproducible.
Step 4: Optimize assembly, not retrieval: Once the authoritative layer exists, performance improvements come from budgeting, caching, and structural refinement rather than from improving ranking heuristics.
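Steps 2 and 4 hinge on one property: assembly is a pure function of the artifacts and the budget. A minimal sketch in Go (the file names, the fixed ordering, and the whole-file budget cutoff are illustrative assumptions, not ctx's actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// AssembleContext concatenates an explicitly ordered list of Markdown
// files into a single context payload, cutting off at a fixed byte
// budget. Same files + same budget => same output: no ranking, no
// wall-clock input, no hidden state.
func AssembleContext(files []string, budgetBytes int) (string, error) {
	var b strings.Builder
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			return "", fmt.Errorf("assemble %s: %w", f, err)
		}
		if b.Len()+len(data) > budgetBytes {
			break // deterministic cutoff: later files are dropped whole
		}
		b.Write(data)
		b.WriteString("\n")
	}
	return b.String(), nil
}

func main() {
	dir, err := os.MkdirTemp("", "ctx-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Illustrative artifacts; a real project would version these in git
	// and define the assembly order in explicit configuration.
	entries := []struct{ name, body string }{
		{"DECISIONS.md", "## D-001: Use Markdown on the filesystem\n"},
		{"LEARNINGS.md", "## L-001: Consolidate before drift compounds\n"},
		{"TASKS.md", "- [ ] T1: deterministic assembly\n"},
	}
	var files []string
	for _, e := range entries {
		p := filepath.Join(dir, e.name)
		if err := os.WriteFile(p, []byte(e.body), 0o644); err != nil {
			panic(err)
		}
		files = append(files, p)
	}

	a, _ := AssembleContext(files, 32_000)
	b, _ := AssembleContext(files, 32_000)
	fmt.Println("deterministic:", a == b)
}
```

Retrieval can still propose candidates, but the list handed to AssembleContext is the authoritative decision, and it is reviewable in version control.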
","path":["The Thesis"],"tags":[]},{"location":"thesis/#94-the-curation-cost","level":3,"title":"9.4 The Curation Cost","text":"
The primary objection to this model is the cost of explicit knowledge curation. This cost is real. Writing a structured decision record takes longer than letting a chatbot auto-summarize a conversation. Maintaining a glossary requires discipline. Consolidating 75 learnings into 18 records requires judgment.
The response is not that the cost is negligible but that it is amortized. A decision record written once is loaded hundreds of times. A learning captured today prevents repeated mistakes across all future sessions. The curation cost is paid once; the benefit compounds.
The experience report provides rough order-of-magnitude numbers. Across 389 sessions over 33 days, curation activities (writing decision records, capturing learnings, updating the task list, consolidating entries) averaged approximately 3-5 minutes per session. In early sessions (days 1-7), before curated context existed, re-establishing context consumed approximately 10-15 minutes per session: re-explaining conventions, re-stating architectural constraints, re-deriving decisions that had been made but not persisted. By the final week (days 25-33), the re-explanation overhead had dropped to near zero: the agent loaded curated context and began productive work immediately.
At ~12 sessions per day, the curation cost was roughly 35-60 minutes daily. The re-explanation cost in the first week was roughly 120-180 minutes daily. By the third week, that cost had fallen to under 15 minutes daily while the curation cost remained stable. The crossover (where cumulative curation cost was exceeded by cumulative time saved) occurred around day 10. These figures are approximate and derived from a single project with a small team already familiar with the model; the crossover point will vary with project complexity, team size, and curation discipline.
Several directions are compatible with the model described here:
Section-level deterministic budgeting: Current assembly operates at file granularity. Section-level budgeting would allow finer-grained control (including specific decision records while excluding others within the same file) without sacrificing determinism.
Causal links between decisions: The experience report shows that decisions frequently reference earlier decisions (superseding, extending, or qualifying them). Formal causal links would enable traversal of the decision graph and automatic detection of orphaned or contradictory constraints.
Content-addressed context caches: Five systems in our landscape analysis independently discovered that content hashing provides cache invalidation, integrity verification, and change detection. Applying content addressing to the assembly output would enable efficient cache reuse when the authoritative state has not changed.
Conditional context inclusion: Five systems independently suggest that context entries could carry activation conditions (file patterns, task keywords, or explicit triggers) that control whether they are included in a given assembly. This would reduce the per-session budget cost of large knowledge bases without sacrificing determinism.
Provenance metadata: Linking context entries to the sessions, decisions, or learnings that motivated them would strengthen the audit trail. Optional provenance fields on Markdown entries (session identifier, cause reference, motivation) would be lightweight and compatible with the existing file-based model.
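Of these directions, content addressing is the cheapest to prototype: once assembly is deterministic, a hash of the assembled output is a stable cache key. A hedged sketch in Go (the key format is an illustrative assumption, not a specification of any listed system):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// CacheKey derives a stable identifier from the assembled context.
// Because assembly is deterministic, an unchanged authoritative state
// produces an unchanged key, so a cached delivery view can be reused,
// and drift or tampering is detectable by re-hashing.
func CacheKey(assembled []byte) string {
	sum := sha256.Sum256(assembled)
	return hex.EncodeToString(sum[:])
}

func main() {
	view := []byte("## D-001: Use Markdown on the filesystem\n")
	fmt.Println("key:", CacheKey(view))
	// Re-hashing identical bytes always yields the identical key.
	fmt.Println(CacheKey(view) == CacheKey(view))
}
```

This is the same content-addressing idea git uses for its object store, applied to the assembly output rather than to individual files.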
AI-assisted development has treated context as a \"query result\" assembled at the moment of interaction, discarded at the session end. This paper identifies a complementary layer: the persistence of authoritative cognitive state as deterministic, version-controlled artifacts.
The contribution is grounded in three sources of evidence. A landscape analysis of 17 systems reveals five categories of primitives and shows that no existing system provides the combination of human-readability, determinism, zero dependencies, and offline capability that the persistence layer requires. Six design invariants, validated by 56 independent rejection decisions, define the constraints of the design space. An experience report over 389 sessions and 33 days demonstrates compounding returns: later sessions start faster, decisions are not re-derived, and architectural reversals are captured with full context.
The core claim is this: persistent cognitive state enables causal reasoning across time. A system built on this model can explain not only what is true, but why it became true and when it changed.
When context is the state:
Reasoning is reproducible: the same authoritative state, budget, and policy produce the same delivery view.
Knowledge is auditable: decisions are traceable to explicit artifacts with context, rationale, and consequences.
Understanding compounds: each session's curation improves all subsequent sessions.
The choice between retrieval-centric workflows and a persistence layer is not a matter of capability but of time horizon. Retrieval optimizes for relevance at the moment of interaction. Persistence optimizes for the durability of understanding across the lifetime of a project.
🐸🖤 \"Gooood... let the deterministic context flow through the repository...\" - Kermit the Sidious, probably
The 56 rejection decisions referenced in Section 4 were cataloged across all 17 system analyses, grouped by the invariant they would violate. This appendix provides a representative sample (two per invariant) to illustrate the methodology.
Invariant 1: Markdown-on-Filesystem (11 rejections): CrewAI's vector embedding storage was rejected because embeddings are not human-readable, not git-diff-friendly, and require external services. Kindex's knowledge graph as core primitive was rejected because it requires specialized commands to inspect content that could be a text file (kin show <id> vs. cat DECISIONS.md).
Invariant 2: Zero Runtime Dependencies (13 rejections): Letta/MemGPT's PostgreSQL-backed architecture was rejected because it conflicts with local-first, no-database, single-binary operation. Pachyderm's Kubernetes-based distributed architecture was rejected as the antithesis of a single-binary design for a tool that manages text files.
Invariant 3: Deterministic Assembly (6 rejections): LlamaIndex's embedding-based retrieval as the primary selection mechanism was rejected because it destroys determinism, requires an embedding model, and removes human judgment from the selection process. QubicDB's wall-clock-dependent scoring was rejected because it directly conflicts with the \"same inputs produce same output\" property.
Invariant 4: Human Authority (6 rejections): Letta/MemGPT's agent self-modification of memory was rejected as fundamentally opposed to human-curated persistence. Claude Code's unstructured auto-memory (where the agent writes freeform notes) was rejected because structured files with defined schemas produce higher-quality persistent context than unconstrained agent output.
Invariant 5: Local-First / Air-Gap Capable (7 rejections): Sweep's cloud-dependent architecture was rejected as fundamentally incompatible with the local-first, offline-capable model. LangGraph's managed cloud deployment was rejected because cloud dependencies for core functionality violate air-gap capability.
Invariant 6: No Default Telemetry (4 rejections): Continue's telemetry-by-default (PostHog) was rejected because it contradicts the local-first, privacy-respecting trust model. CrewAI's global telemetry on import (Scarf tracking pixel) was rejected because it violates user trust and breaks air-gap capability.
The remaining 9 rejections did not map to a specific invariant but were rejected on other architectural grounds: for example, Aider's full-file-content-in-context approach (which defeats token budgeting), AutoGen's multi-agent orchestration as core primitive (scope creep), and Claude Code's 30-day transcript retention limit (institutional knowledge should have no automatic expiration).
Reproducible Builds Project, \"Reproducible Builds: Increasing the Integrity of Software Supply Chains\", 2017. https://reproducible-builds.org/docs/definition/
S. McIntosh et al., \"The Impact of Build System Evolution on Software Quality\", ICSE, 2015. https://doi.org/10.1109/ICSE.2015.70
C. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. https://nlp.stanford.edu/IR-book/
M. Nygard, \"Documenting Architecture Decisions\", Cognitect Blog, 2011. https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions
L. Torvalds et al., Git Internals - Git Objects (content-addressed storage concepts). https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
K. Morris, Infrastructure as Code, O'Reilly, 2016.
J. Kreps, \"The Log: What every software engineer should know about real-time data's unifying abstraction\", 2013. https://engineering.linkedin.com/distributed-systems/log
P. Hunt et al., \"ZooKeeper: Wait-free coordination for Internet-scale systems\", USENIX ATC, 2010. https://www.usenix.org/legacy/event/atc10/tech/full_papers/Hunt.pdf
","path":["The Thesis"],"tags":[]}]}
{"config":{"separator":"[\\s\\-_,:!=\\[\\]()\\\\\"`/]+|\\.(?!\\d)"},"items":[{"location":"","level":1,"title":"The ctx Manifesto","text":"","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-manifesto","level":1,"title":"ctx Manifesto","text":"
Creation, not code.
Context, not prompts.
Verification, not vibes.
This Is NOT a Metaphor
Code executes instructions.
Creation produces outcomes.
Confusing the two is how teams ship motion...
...instead of progress.
It was never about the code.
Code has zero standalone value.
Code is an implementation detail.
Code is an incantation.
Creation is the act.
And creation does not happen in a vacuum.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-is-the-substrate","level":2,"title":"ctx Is the Substrate","text":"
Constraints Have Moved
Human bandwidth is no longer the limiting factor.
Context integrity is.
Human bandwidth is no longer the constraint.
Context is:
Without durable context, intelligence resets.
Without memory, reasoning decays.
Without structure, scale collapses.
Creation is now limited by:
Clarity of intent;
Quality of context;
Rigor of verification.
Not by speed.
Not by capacity.
Velocity Amplifies
Faster execution on broken context compounds error.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#what-ctx-is-not","level":2,"title":"What ctx Is Not","text":"
Avoid Category Errors
Mislabeling ctx guarantees misuse.
ctx is not a memory feature.
ctx is not prompt engineering.
ctx is not a productivity hack.
ctx is not automation theater.
ctx is a system for preserving intent under scale.
ctx is infrastructure.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#verified-reality-is-the-scoreboard","level":2,"title":"Verified Reality Is the Scoreboard","text":"
Activity is a False Proxy
Output volume correlates poorly with impact.
Code is not progress.
Activity is not impact.
The only truth that compounds is verified change.
Verified change must exist in the real world.
Hypotheses are cheap; outcomes are not.
ctx captures:
What we expected;
What we observed;
Where reality diverged.
If we cannot predict, measure, and verify the result...
...it does not count.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#build-to-learn-not-to-accumulate","level":2,"title":"Build to Learn, Not to Accumulate","text":"
Prototypes Have an Expiration Date
A prototype's value is information, not longevity.
Prototypes exist to reduce uncertainty.
We build to:
Test assumptions;
Validate architecture;
Answer specific questions.
Not everything.
Not blindly.
Not permanently.
ctx records archaeology so the cost is paid once.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#failures-are-assets","level":2,"title":"Failures Are Assets","text":"
","path":["The ctx Manifesto"],"tags":[]},{"location":"#encode-intent-into-the-environment","level":2,"title":"Encode Intent Into the Environment","text":"
Goodwill Does not Belong to the Table
Alignment that depends on memory will drift.
Alignment cannot depend on memory or goodwill.
Do not rely on people to remember.
Encode the behavior, so it happens by default.
Intent is encoded as:
Policies;
Schemas;
Constraints;
Evaluation harnesses.
Rules must be machine-readable.
Laws must be enforceable.
If intent is implicit, drift is guaranteed.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#cost-is-a-first-class-signal","level":2,"title":"Cost Is a First-Class Signal","text":"
Attention Is the Scarcest Resource
Not ideas.
Not ambition.
Ideas do not compete on time:
They compete on cost and impact:
Attention is finite.
Compute is finite.
Context is expensive.
We continuously ask:
What the most valuable next action is.
What outcome justifies the cost.
ctx guides allocation.
Learning reshapes priority.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#show-the-why","level":2,"title":"Show the Why","text":"
{} (code, artifacts, apps, binaries) produce outputs; they do not preserve reasoning.
Systems that cannot explain themselves will not be trusted.
Traceability builds trust.
{} --> what\n\n ctx --> why\n
We record:
Explored paths;
Rejected options;
Assumptions made;
Evidence used.
Opaque systems erode trust:
Transparent ctx compounds understanding.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#continuously-verify-the-system","level":2,"title":"Continuously Verify the System","text":"
Stability is Temporary
Every assumption has a half-life:
Models drift.
Tools change.
Assumptions rot.
ctx must be verified against reality.
Trust is a spectrum.
Trust is continuously re-earned:
Benchmarks,
regressions,
and evaluations...
...are safety rails.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-is-leverage","level":2,"title":"ctx Is Leverage","text":"
Stories, insights, and lessons learned from building and using ctx.
","path":["Blog"],"tags":[]},{"location":"blog/#releases","level":2,"title":"Releases","text":"","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v080-the-architecture-release","level":3,"title":"ctx v0.8.0: The Architecture Release","text":"
March 23, 2026: 374 commits, 1,708 Go files touched, and a near-complete architectural overhaul. Every CLI package restructured into cmd/ + core/ taxonomy, all user-facing strings externalized to YAML, MCP server for tool-agnostic AI integration, and the memory bridge connecting Claude Code's auto-memory to .context/.
","path":["Blog"],"tags":[]},{"location":"blog/#field-notes","level":2,"title":"Field Notes","text":"","path":["Blog"],"tags":[]},{"location":"blog/#code-structure-as-an-agent-interface-what-19-ast-tests-taught-us","level":3,"title":"Code Structure as an Agent Interface: What 19 AST Tests Taught Us","text":"
April 2, 2026: We built 19 AST-based audit tests in a single session, touching 300+ files. In the process we discovered that \"old-school\" code quality constraints (no magic numbers, centralized error handling, 80-char lines, documentation) are exactly the constraints that make code readable to AI agents. If an agent interacts with your codebase, your codebase already is an interface. You just have not designed it as one.
Topics: ast, code quality, agent readability, conventions, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#we-broke-the-31-rule","level":3,"title":"We Broke the 3:1 Rule","text":"
March 23, 2026: After v0.6.0, we ran 198 feature commits across 17 days before consolidating. The 3:1 rule says consolidate every 4th session. We did it after the 66th. The result: an 18-day, 181-commit cleanup marathon that took longer than the feature run itself. A follow-up to The 3:1 Ratio with empirical evidence from the v0.8.0 cycle.
Topics: consolidation, technical debt, development workflow, convention drift, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#context-engineering","level":2,"title":"Context Engineering","text":"","path":["Blog"],"tags":[]},{"location":"blog/#agent-memory-is-infrastructure","level":3,"title":"Agent Memory Is Infrastructure","text":"
March 4, 2026: Every AI coding agent starts fresh. The obvious fix is \"memory.\" But there's a different problem memory doesn't touch: the project itself accumulates knowledge that has nothing to do with any single session. This post argues that agent memory is L2 (runtime cache); what's missing is L3 (project infrastructure).
Topics: context engineering, agent memory, infrastructure, persistence, team knowledge
","path":["Blog"],"tags":[]},{"location":"blog/#context-as-infrastructure","level":3,"title":"Context as Infrastructure","text":"
February 17, 2026: Where does your AI's knowledge live between sessions? If the answer is \"in a prompt I paste at the start,\" you are treating context as a consumable. This post argues for treating it as infrastructure instead: persistent files, separation of concerns, two-tier storage, progressive disclosure, and the filesystem as the most mature interface available.
","path":["Blog"],"tags":[]},{"location":"blog/#the-attention-budget-why-your-ai-forgets-what-you-just-told-it","level":3,"title":"The Attention Budget: Why Your AI Forgets What You Just Told It","text":"
February 3, 2026: Every token you send to an AI consumes a finite resource: the attention budget. Understanding this constraint shaped every design decision in ctx: hierarchical file structure, explicit budgets, progressive disclosure, and filesystem-as-index.
","path":["Blog"],"tags":[]},{"location":"blog/#before-context-windows-we-had-bouncers","level":3,"title":"Before Context Windows, We Had Bouncers","text":"
February 14, 2026: IRC is stateless. You disconnect, you vanish. Modern systems are not much different. This post traces the line from IRC bouncers to context engineering: stateless protocols require stateful wrappers, volatile interfaces require durable memory.
Topics: context engineering, infrastructure, IRC, persistence, state continuity
","path":["Blog"],"tags":[]},{"location":"blog/#the-last-question","level":3,"title":"The Last Question","text":"
February 28, 2026: In 1956, Asimov wrote a story about a question that spans the entire future of the universe. A reading of \"The Last Question\" through the lens of persistence, substrate migration, and what it means to build systems where sessions don't reset.
Topics: context continuity, long-lived systems, persistence, intelligence over time, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#agent-behavior-and-design","level":2,"title":"Agent Behavior and Design","text":"","path":["Blog"],"tags":[]},{"location":"blog/#the-dog-ate-my-homework-teaching-ai-agents-to-read-before-they-write","level":3,"title":"The Dog Ate My Homework: Teaching AI Agents to Read Before They Write","text":"
February 25, 2026: You wrote the playbook. The agent skipped all of it. Five sessions, five failure modes, and the discovery that observable compliance beats perfect compliance.
","path":["Blog"],"tags":[]},{"location":"blog/#skills-that-fight-the-platform","level":3,"title":"Skills That Fight the Platform","text":"
February 4, 2026: When custom skills conflict with system prompt defaults, the AI has to reconcile contradictory instructions. Five conflict patterns discovered while building ctx.
Topics: context engineering, skill design, system prompts, antipatterns, AI safety primitives
","path":["Blog"],"tags":[]},{"location":"blog/#the-anatomy-of-a-skill-that-works","level":3,"title":"The Anatomy of a Skill That Works","text":"
February 7, 2026: I had 20 skills. Most were well-intentioned stubs. Then I rewrote all of them. Seven lessons emerged: quality gates prevent premature execution, negative triggers are load-bearing, examples set boundaries better than rules.
February 5, 2026: I found a well-crafted consolidation skill. Applied my own E/A/R framework: 70% was noise. This post is about why good skills can't be copy-pasted, and how to grow them from your project's own drift history.
","path":["Blog"],"tags":[]},{"location":"blog/#not-everything-is-a-skill","level":3,"title":"Not Everything Is a Skill","text":"
February 8, 2026: I ran an 8-agent codebase audit and got actionable results. The natural instinct was to wrap the prompt as a skill. Then I applied my own criteria: it failed all three tests.
Topics: skill design, context engineering, automation discipline, recipes, agent teams
","path":["Blog"],"tags":[]},{"location":"blog/#defense-in-depth-securing-ai-agents","level":3,"title":"Defense in Depth: Securing AI Agents","text":"
February 9, 2026: The security advice was \"use CONSTITUTION.md for guardrails.\" That is wishful thinking. Five defense layers for unattended AI agents, each with a bypass, and why the strength is in the combination.
","path":["Blog"],"tags":[]},{"location":"blog/#development-practice","level":2,"title":"Development Practice","text":"","path":["Blog"],"tags":[]},{"location":"blog/#code-is-cheap-judgment-is-not","level":3,"title":"Code Is Cheap. Judgment Is Not.","text":"
February 17, 2026: AI does not replace workers. It replaces unstructured effort. Three weeks of building ctx with an AI agent proved it: YOLO mode showed production is cheap, the 3:1 ratio showed judgment has a cadence.
Topics: AI and expertise, context engineering, judgment vs production, human-AI collaboration, automation discipline
February 17, 2026: AI makes technical debt worse: not because it writes bad code, but because it writes code so fast that drift accumulates before you notice. Three feature sessions, one consolidation session.
Topics: consolidation, technical debt, development workflow, convention drift, code quality
","path":["Blog"],"tags":[]},{"location":"blog/#refactoring-with-intent-human-guided-sessions-in-ai-development","level":3,"title":"Refactoring with Intent: Human-Guided Sessions in AI Development","text":"
February 1, 2026: The YOLO mode shipped 14 commands in a week. But technical debt doesn't send invoices. This is the story of what happened when we started guiding the AI with intent.
Topics: refactoring, code quality, documentation standards, module decomposition, YOLO versus intentional development
","path":["Blog"],"tags":[]},{"location":"blog/#how-deep-is-too-deep","level":3,"title":"How Deep Is Too Deep?","text":"
February 12, 2026: I kept feeling like I should go deeper into ML theory. Then I spent a week debugging an agent failure that had nothing to do with model architecture. When depth compounds and when it doesn't.
","path":["Blog"],"tags":[]},{"location":"blog/#agent-workflows","level":2,"title":"Agent Workflows","text":"","path":["Blog"],"tags":[]},{"location":"blog/#parallel-agents-merge-debt-and-the-myth-of-overnight-progress","level":3,"title":"Parallel Agents, Merge Debt, and the Myth of Overnight Progress","text":"
February 17, 2026: You discover agents can run in parallel. So you open ten terminals. It is not progress: it is merge debt being manufactured in real time. The five-agent ceiling and why role separation beats file locking.
Topics: agent workflows, parallelism, verification, context engineering, engineering practice
","path":["Blog"],"tags":[]},{"location":"blog/#parallel-agents-with-git-worktrees","level":3,"title":"Parallel Agents with Git Worktrees","text":"
February 14, 2026: I had 30 open tasks that didn't touch the same files. Using git worktrees to partition a backlog by file overlap, run 3-4 agents simultaneously, and merge the results.
","path":["Blog"],"tags":[]},{"location":"blog/#field-notes-and-signals","level":2,"title":"Field Notes and Signals","text":"","path":["Blog"],"tags":[]},{"location":"blog/#when-a-system-starts-explaining-itself","level":3,"title":"When a System Starts Explaining Itself","text":"
February 17, 2026: Every new substrate begins as a private advantage. Reality begins when other people start describing it in their own language. \"Better than Adderall\" is not praise; it is a diagnostic.
Topics: field notes, adoption signals, infrastructure vs tools, context engineering, substrates
February 15, 2026: I needed a static site generator for the journal system. The instinct was Hugo. But instinct is not analysis. Why zensical was the right choice: thin dependencies, MkDocs-compatible config, and zero lock-in.
","path":["Blog"],"tags":[]},{"location":"blog/#releases_1","level":2,"title":"Releases","text":"","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v060-the-integration-release","level":3,"title":"ctx v0.6.0: The Integration Release","text":"
February 16, 2026: ctx is now a Claude Marketplace plugin. Two commands, no build step, no shell scripts. v0.6.0 replaces six Bash hook scripts with compiled Go subcommands and ships 25+ Skills as a plugin.
Topics: release, plugin system, Claude Marketplace, distribution, security hardening
","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v030-the-discipline-release","level":3,"title":"ctx v0.3.0: The Discipline Release","text":"
February 15, 2026: No new headline feature. Just 35+ documentation and quality commits against ~15 feature commits. What a release looks like when the ratio of polish to features is 3:1.
","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v020-the-archaeology-release","level":3,"title":"ctx v0.2.0: The Archaeology Release","text":"
February 1, 2026: What if your AI could remember everything? Not just the current session, but every session. ctx v0.2.0 introduces the recall and journal systems.
","path":["Blog"],"tags":[]},{"location":"blog/#building-ctx-using-ctx-a-meta-experiment-in-ai-assisted-development","level":3,"title":"Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development","text":"
January 27, 2026: What happens when you build a tool designed to give AI memory, using that very same tool to remember what you're building? This is the story of ctx.
Topics: dogfooding, AI-assisted development, Ralph Loop, session persistence, architectural decisions
","path":["Blog"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/","level":1,"title":"Building ctx Using ctx","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
References to .context/sessions/, auto-save hooks, and SessionEnd auto-save in this post reflect the architecture at the time of writing.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#a-meta-experiment-in-ai-assisted-development","level":2,"title":"A Meta-Experiment in AI-Assisted Development","text":"
Jose Alekhinne / 2026-01-27
Can a Tool Design Itself?
What happens when you build a tool designed to give AI memory, using that very same tool to remember what you are building?
This is the story of ctx, how it evolved from a hasty \"YOLO mode\" experiment to a disciplined system for persistent AI context, and what I have learned along the way.
Context is a Record
Context is a persistent record.
By \"context\", I don't mean model memory or stored thoughts:
I mean the durable record of decisions, learnings, and intent that normally evaporates between sessions.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#ai-amnesia","level":2,"title":"AI Amnesia","text":"
Every developer who works with AI code generators knows the frustration:
You have a deep, productive session where the AI understands your codebase, your conventions, your decisions. And then you close the terminal.
Tomorrow, it's a blank slate. The AI has forgotten everything.
That is \"reset amnesia\", and it's not just annoying: it's expensive.
Every session starts with:
Re-explaining context;
Re-reading files;
Re-discovering decisions that were already made.
I Needed Context
\"I don't want to lose this discussion...
...I am a brain-dead developer YOLO'ing my way out.\"
☝️ that's exactly what I said to Claude when I first started working on ctx.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-genesis","level":2,"title":"The Genesis","text":"
The project started as \"Active Memory\" (amem): a CLI tool to persist AI context across sessions.
The core idea was simple:
Create a .context/ directory with structured Markdown files for decisions, learnings, tasks, and conventions.
The AI reads these at session start and writes to them before the session ends.
There is no step 3.
The first commit was just scaffolding. But within hours, the Ralph Loop (An iterative AI development workflow) had produced a working CLI:
Not one, not two, but a whopping fourteen core commands shipped in rapid succession!
I was YOLO'ing like there was no tomorrow:
Auto-accept every change;
Let the AI run free;
Ship features fast.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-meta-experiment-using-amem-to-build-amem","level":2,"title":"The Meta-Experiment: Using amem to Build amem","text":"
Here's where it gets interesting: On January 20th, I asked:
\"Can I use amem to help you remember this context when I restart?\"
The answer was yes, but with a gap:
Autoload worked (via Claude Code's PreToolUse hook), but auto-save was missing: if the user quit with Ctrl+C, everything since the last manual save was lost.
That session became the first real test of the system.
Here is the first session file we recorded:
## Key Discussion Points\n\n### 1. amem vs Ralph Loop - They're Separate Systems\n\n**User's question**: \"How do I use the binary to recreate this project?\"\n\n**Answer discovered**: `amem` is for context management, Ralph Loop is for \ndevelopment workflow. They are complementary but separate.\n\n### 2. Two Tiers of Context Persistence\n\n| Tier | What | Why |\n|-----------|-----------------------------|-------------------------------|\n| Curated | Learnings, decisions, tasks | Quick reload, token-efficient |\n| Full dump | Entire conversation | Safety net, nothing lost |\n\n| Where |\n|------------------------|\n| .context/*.md |\n| .context/sessions/*.md |\n
This session file (written by the AI to preserve its own context) became the template for how ctx handles session persistence.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-rename","level":2,"title":"The Rename","text":"
By January 21st, I realized \"Active Memory\" was too generic, and (arguably) too marketing-smelly.
Besides, the binary was already called ctx (short for Context), the directory was .context/, and the slash commands would be /ctx-*.
So it followed that the project should be renamed to ctx to make things make sense.
The rename touched 100+ files but was clean: a find-and-replace with Go's type system catching any misses.
The git history tells the story:
0e8f6bb feat: rename amem to ctx and add Claude Code integration\n87dcfa1 README.\n4f0e195 feat: separate orchestrator directive from agent tasks\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#yolo-mode-fast-but-dangerous","level":2,"title":"YOLO Mode: Fast, But Dangerous","text":"
The Ralph Loop made feature development incredibly fast.
But it created technical debt that I didn't notice until later.
A comparison session on January 25th revealed the patterns:
| YOLO Pattern | What We Found |
|--------------|---------------|
| \"TASKS.md\" scattered in 10 files | Same string literal everywhere, no constants |
| dir + \"/\" + file | Should be filepath.Join() |
| Monolithic cli_test.go (1500+ lines) | Tests disconnected from implementations |
| package initcmd in init/ folder | Go's \"init\" is reserved: subtle naming collision |
Here is another analysis made by ctx:
● Based on my analysis, here are the key differences:\n\nYOLO Mode (Pre-040ce99)\n- Feature-first: Added slash commands, tests, templates rapidly\n- Scattered magic strings: \"TASKS.md\", \"decision\", \".context\" spread across files\n- Quick file creation: New files without organizational patterns\n- Working but inconsistent: Code functioned but lacked systematic structure\n\nHuman-Guided Mode (Post-040ce99)\n- Consolidation focus: Centralized constants in config package\n- Consistent naming: Dir, File, Filename, UpdateType prefixes\n- Self-referential constants: FileType map uses constants as keys, not literals\n- Proper path construction: filepath.Join() instead of +\"/\"+\n- Colocated tests: Tests next to implementations\n- Canonical naming: Package name = folder name\n
The fix required a human-guided refactoring session. From that point on, I ran one before every major release.
We introduced internal/config/config.go with semantically prefixed constants (Dir, File, Filename, UpdateType).
What I begrudgingly learned was: YOLO mode is effective for velocity but accumulates debt.
So I took a mental note to schedule periodic consolidation sessions.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-dogfooding-test-that-failed","level":2,"title":"The Dogfooding Test That Failed","text":"
On January 21st, I ran an experiment: have another Claude instance rebuild ctx from scratch using only the specs and PROMPT.md.
The Ralph Loop ran, all tasks got checked off, the loop exited successfully.
But the binary was broken!
Commands just printed help text instead of executing.
All tasks were marked \"complete\" but the implementation didn't work.
Here's what ctx discovered:
## Key Findings\n\n### Dogfooding Binary Is Broken\n- Commands don't execute: they just print root help text\n- All tasks were marked complete but binary doesn't work\n- Lesson: \"tasks checked off\" ≠ \"implementation works\"\n
This was humbling, to say the least.
I realized I had the same blind spot in my own codebase: no integration tests that actually invoked the binary.
So I added:
Integration tests for all commands;
Coverage targets (60-80% per package);
Smoke tests in CI;
A constitution rule: \"All code must pass tests before commit\".
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-constitution-versus-conventions","level":2,"title":"The Constitution versus Conventions","text":"
As lessons accumulated, there was the temptation to add everything to CONSTITUTION.md as \"inviolable rules\".
But I resisted.
The constitution should contain only truly inviolable invariants:
Security (no secrets, no customer data)
Quality (tests must pass)
Process (decisions need records)
ctx invocation (always use PATH, never fallback)
Everything else (coding style, file organization, naming conventions...) should go into CONVENTIONS.md.
Here's how ctx explained why the distinction was important:
Decision record, 2026-01-25
Overly strict constitution creates friction and gets ignored.
Conventions can be bent; constitution cannot.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#hooks-harder-than-they-look","level":2,"title":"Hooks: Harder Than They Look","text":"
Claude Code hooks seemed simple: Run a script before/after certain events.
My hook to block non-PATH ctx invocations initially matched too broadly:
# WRONG - matches /home/user/ctx/internal/file.go (ctx as directory)\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# RIGHT - matches ctx as binary only\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-session-files","level":2,"title":"The Session Files","text":"
At the time of this writing, this project's session directory (.context/sessions/) contains 40+ files from the project's development.
They are not committed to the repository due to security, privacy, and size concerns.
Middle Ground: the Scratchpad
For sensitive notes that do need to travel with the project, ctx pad stores encrypted one-liners in git, and ctx pad add \"label\" --file PATH can ingest small files.
See Scratchpad for details.
They are, however, invaluable records of the project's progress.
Each session file is a timestamped Markdown with:
Summary of what has been accomplished;
Key decisions made;
Learnings discovered;
Tasks for the next session;
Technical context (platform, versions).
These files are not autoloaded (that would bust the token budget).
They are what I see as the \"archaeological record\" of ctx:
When the AI needs deeper information about why something was done, it digs into the sessions.
Auto-generated session files used a naming convention.
In current releases, ctx uses a journal instead: the enrichment process generates meaningful slugs from context automatically, so there is no need to manually save sessions.
The SessionEnd hook captured transcripts automatically. Even Ctrl+C was caught.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-decision-log-18-architectural-decisions","level":2,"title":"The Decision Log: 18 Architectural Decisions","text":"
ctx helps record every significant architectural choice in .context/DECISIONS.md.
Here are some highlights:
Reverse-chronological order (2026-01-27)
**Context**: With chronological order, oldest items consume tokens first, and\nnewest (most relevant) items risk being truncated.\n\n**Decision**: Use reverse-chronological order (newest first) for DECISIONS.md\nand LEARNINGS.md.\n
PATH over hardcoded paths (2026-01-21)
**Context**: Original implementation hardcoded absolute paths in hooks.\nThis breaks when sharing configs with other developers.\n\n**Decision**: Hooks use `ctx` from PATH. `ctx init` checks PATH before \nproceeding.\n
Generic core with Claude enhancements (2026-01-20)
**Context**: ctx should work with any AI tool, but Claude Code users could\nbenefit from deeper integration.\n\n**Decision**: Keep ctx generic as the core tool, but provide optional\nClaude Code-specific enhancements.\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-learning-log-24-gotchas-and-insights","level":2,"title":"The Learning Log: 24 Gotchas and Insights","text":"
The .context/LEARNINGS.md file captures gotchas that would otherwise be forgotten. Each has Context, Lesson, and Application sections:
CGO on ARM64
**Context**: `go test` failed with \n`gcc: error: unrecognized command-line option '-m64'`\n\n**Lesson**: On ARM64 Linux, CGO causes cross-compilation issues. \nAlways use `CGO_ENABLED=0`.\n
Claude Code skills format
**Lesson**: Claude Code skills are Markdown files in .claude/commands/ with `YAML`\nfrontmatter (*description, argument-hint, allowed-tools*). Body is the prompt.\n
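Per that lesson, a command file might look like the following sketch (the frontmatter keys mirror the ones named in the lesson; the description and body are an invented example):

```markdown
---
description: Summarize recent decisions from .context/DECISIONS.md
argument-hint: [count]
allowed-tools: Read
---

Read .context/DECISIONS.md and summarize the most recent decisions.
```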
\"Do you remember?\" handling
**Lesson**: In a `ctx`-enabled project, \"*do you remember?*\" \nhas an obvious meaning:\ncheck the `.context/` files. Don't ask for clarification. Just do it.\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#task-archives-the-completed-work","level":2,"title":"Task Archives: The Completed Work","text":"
Completed tasks are archived to .context/archive/ with timestamps.
The archive from January 23rd shows 13 phases of work:
Phase 6: Claude Code Integration (hooks, settings, CLAUDE.md handling)
Phase 7: Testing & Verification
Phase 8: Task Archival
Phase 9: Slash Commands
Phase 9b: Ralph Loop Integration
Phase 10: Project Rename
Phase 11: Documentation
Phase 12: Timestamp Correlation
Phase 13: Rich Context Entries
That's an impressive **173 commits** across 8 days of development.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#what-i-learned-about-ai-assisted-development","level":2,"title":"What I Learned About AI-Assisted Development","text":"
1. Memory changes everything
When the AI remembers decisions, it doesn't repeat mistakes.
When the AI knows your conventions, it follows them.
ctx makes the AI a better collaborator because it's not starting from zero.
2. Two-tier persistence works
Curated context (DECISIONS.md, LEARNINGS.md, TASKS.md) is for quick reload.
Full session dumps are for archaeology.
It's a futile effort to try to fit everything in the token budget.
Persist more, load less.
3. YOLO mode has its place
For rapid prototyping, letting the AI run free is effective.
But I had to schedule consolidation sessions.
Technical debt accumulates silently.
4. The constitution should be small
Only truly inviolable rules go in CONSTITUTION.md. Everything else is a convention.
If you put too much in the constitution, it will get ignored.
5. Verification is non-negotiable
\"All tasks complete\" means nothing if you haven't run the tests.
Integration tests that invoke the actual binary caught bugs that the unit tests missed.
6. Session files are underrated
The ability to grep through 40 session files and find exactly when and why a decision was made helped me a lot.
It's not about loading them into context: it's about having them when you need them.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-future-recall-system","level":2,"title":"The Future: Recall System","text":"
The next phase of ctx is the Recall System:
Parser: Parse session capture markdowns, enrich with JSONL data
Renderer: Goldmark + Chroma for syntax highlighting, dark mode UI
Server: Local HTTP server for browsing sessions
Search: Inverted index for searching across sessions
CLI: ctx recall serve <path> to start the server
The goal is to make the archaeological record browsable, not just grep-able.
Because not everyone always lives in the terminal (me included).
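The search component is the most mechanical piece of that list: an inverted index is, at its core, just a map from token to the sessions containing it. A minimal sketch (the types, tokenization, and the second slug are illustrative, not ctx's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// index maps a lowercase token to the set of session slugs containing it.
type index map[string]map[string]bool

// add tokenizes text on whitespace and records each token against slug.
func (ix index) add(slug, text string) {
	for _, tok := range strings.Fields(strings.ToLower(text)) {
		if ix[tok] == nil {
			ix[tok] = map[string]bool{}
		}
		ix[tok][slug] = true
	}
}

// search returns the slugs of every session containing term.
func (ix index) search(term string) []string {
	var slugs []string
	for slug := range ix[strings.ToLower(term)] {
		slugs = append(slugs, slug)
	}
	return slugs
}

func main() {
	ix := index{}
	ix.add("gleaming-wobbling-sutherland", "fixed the hook regex")
	ix.add("brave-quiet-lovelace", "renamed amem to ctx")
	fmt.Println(ix.search("regex")) // [gleaming-wobbling-sutherland]
}
```

A real implementation would add stemming and ranking, but even this shape turns "grep 40 files" into a single lookup.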
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#conclusion","level":2,"title":"Conclusion","text":"
Building ctx using ctx was a meta-experiment in AI-assisted development.
I learned that memory isn't just convenient; it's transformative:
An AI that remembers your decisions doesn't repeat mistakes.
An AI that knows your conventions doesn't need them re-explained.
If you are reading this, chances are that you already have heard about ctx.
ctx is open source at github.com/ActiveMemory/ctx,
and the documentation lives at ctx.ist.
Session Records are a Gold Mine
At the time of this writing, I have more than 70 megabytes of text-only session captures, spread across more than 100 Markdown and JSONL files.
I am analyzing, synthesizing, and enriching them with AI, running RAG (Retrieval-Augmented Generation) pipelines over them, and the outcome surprises me every day.
If you are a mere mortal tired of reset amnesia, give ctx a try.
And when you do, check .context/sessions/ sometime.
The archaeological record might surprise you.
This blog post was written with the help of ctx with full access to the ctx session files, decision log, learning log, task archives, and git history of ctx: The meta continues.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/","level":1,"title":"ctx v0.2.0: The Archaeology Release","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
The .context/sessions/ directory referenced in this post has been eliminated. Session history is now accessed via ctx recall and enriched journals live in .context/journal/.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#digging-through-the-past-to-build-the-future","level":2,"title":"Digging Through the Past to Build the Future","text":"
Jose Alekhinne / 2026-02-01
What if Your AI Could Remember Everything?
Not just the current session, but every session:
Every decision made,
every mistake avoided,
every path not taken.
That's what v0.2.0 delivers.
Between v0.1.2 and v0.2.0, 86 commits landed across 5 days.
The release notes list features and fixes.
This post tells the story of why those features exist, and what building them taught me.
This isn't a changelog: It is an explanation of intent.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-problem-amnesia-isnt-just-session-level","level":2,"title":"The Problem: Amnesia Isn't Just Session-Level","text":"
v0.1.0 solved reset amnesia:
The AI now remembers decisions, learnings, and tasks across sessions.
But a new problem emerged, which I can sum up as:
\"I (the human) am not AI.\"
Frankly, I couldn't remember what the AI remembered.
I can barely remember what I ate for breakfast!
Within days, session transcripts piled up in .context/sessions/: JSONL files with thousands of lines of raw tool calls, assistant responses, and user messages...
...all interleaved.
Valuable context was effectively buried in machine-readable noise.
I found myself grepping through files to answer questions like:
\"When did we decide to use constants instead of literals?\"
\"What was the session where we fixed the hook regex?\"
\"How did the embed.go split actually happen?\"
Fate is Whimsical
The irony was painful:
I built a tool to prevent AI amnesia, but I was suffering from human amnesia about what happened in AI sessions.
This was the moment ctx stopped being just an AI tool and started needing to support the human on the other side of the loop.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-solution-recall-and-journal","level":2,"title":"The Solution: Recall and Journal","text":"
v0.2.0 introduces two interconnected systems.
They solve different problems and only work well together.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#ctx-recall-browse-your-past","level":3,"title":"ctx recall: Browse Your Past","text":"
# List all sessions for this project\nctx recall list\n\n# Show a specific session\nctx recall show gleaming-wobbling-sutherland\n\n# See the full transcript\nctx recall show gleaming-wobbling-sutherland --full\n
The recall system parses Claude Code's JSONL transcripts and presents them in a human-readable format:
Slugs are auto-generated from session IDs (memorable names instead of UUIDs). The goal (as the name implies) is recall, not archival accuracy.
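One common way to derive memorable-but-stable slugs is to hash the session ID into indices over word lists. A sketch of the general technique; the word lists and scheme here are invented for illustration, not ctx's actual algorithm:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

var adjectives = []string{"gleaming", "wobbling", "quiet", "brave"}
var names = []string{"sutherland", "lovelace", "hopper", "ritchie"}

// slugFor deterministically maps a session ID to "adjective-adjective-name"
// by using hash bytes as indices into the word lists.
func slugFor(sessionID string) string {
	h := sha256.Sum256([]byte(sessionID))
	a := binary.BigEndian.Uint32(h[0:4]) % uint32(len(adjectives))
	b := binary.BigEndian.Uint32(h[4:8]) % uint32(len(adjectives))
	n := binary.BigEndian.Uint32(h[8:12]) % uint32(len(names))
	return fmt.Sprintf("%s-%s-%s", adjectives[a], adjectives[b], names[n])
}

func main() {
	// The same session ID always yields the same slug.
	fmt.Println(slugFor("some-session-uuid") == slugFor("some-session-uuid")) // true
}
```

Determinism is the key property: the slug is a stable handle for the session, so recall commands keep working across machines and reruns.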
2,121 lines of new code
The ctx recall feature was the largest single addition:
parser library, CLI commands, test suite, and slash command.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#ctx-journal-from-raw-to-rich","level":3,"title":"ctx journal: From Raw to Rich","text":"
Listing sessions isn't enough. The transcripts are still unwieldy.
Recall answers what happened.
Journal answers what mattered.
# Import sessions to editable Markdown\nctx recall import --all\n\n# Generate a static site from journal entries\nctx journal site\n\n# Serve it locally\nctx serve\n
Each file is a structured Markdown document ready for enrichment.
They are meant to be read, edited, and reasoned about; not just stored.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-meta-slash-commands-for-self-analysis","level":2,"title":"The Meta: Slash Commands for Self-Analysis","text":"
The journal system includes four slash commands that use Claude to analyze and synthesize session history:
| Command | Purpose |
|---------|---------|
| /ctx-journal-enrich | Add frontmatter, topics, tags |
| /ctx-blog | Generate blog post from activity |
| /ctx-blog-changelog | Generate changelog from commits |
This very post was drafted using /ctx-blog. The previous post about refactoring was drafted the same way.
So, yes: The meta continues: ctx now helps write posts about ctx.
With the current release, ctx is no longer just recording history:
It is participating in its interpretation.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-structure-decisions-as-first-class-citizens","level":2,"title":"The Structure: Decisions as First-Class Citizens","text":"
v0.1.0 let you add decisions with a simple command:
ctx add decision \"Use PostgreSQL\"\n
But sessions showed a pattern: decisions added this way were incomplete:
Context was missing;
Rationale was vague;
Consequences were never stated.
Once recall and journaling existed, this weakness became impossible to ignore:
Structure stopped being optional.
v0.2.0 enforces structure:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a reliable database for user data\" \\\n --rationale \"ACID compliance, team familiarity, strong ecosystem\" \\\n --consequence \"Need to set up connection pooling, team training\"\n
All three flags are required. No more placeholder text.
Every decision is now a proper Architecture Decision Record (ADR), not a note.
The same enforcement applies to learnings too:
ctx add learning \"CGO breaks ARM64 builds\" \\\n --context \"go test failed with gcc errors on ARM64\" \\\n --lesson \"Always use CGO_ENABLED=0 for cross-platform builds\" \\\n --application \"Added to Makefile and CI config\"\n
Structured entries are prompts to the AI
When the AI reads a decision with full context, rationale, and consequences, it understands the why, not just the what.
One-liners teach nothing.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-order-newest-first","level":2,"title":"The Order: Newest First","text":"
A subtle but important change: DECISIONS.md and LEARNINGS.md now use reverse-chronological order.
One reason is token budgets, obviously; another is to help your fellow human (i.e., the author):
Recent decisions are more likely to be relevant and to carry more weight in the project, so it follows that they should be read first.
But back to AI:
When the AI reads a file, it reads from the top (and seldom from the bottom).
If the token budget is tight, old content gets truncated. As in any good engineering practice, it's always about the tradeoffs.
Reverse order ensures the most recent (and most relevant) context is always loaded first.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-index-quick-reference-tables","level":2,"title":"The Index: Quick Reference Tables","text":"
DECISIONS.md and LEARNINGS.md now include auto-generated indexes.
For AI agents, the index allows scanning without reading full entries.
For humans, it's a table of contents.
The same structure serves two very different readers.
Reindex after manual edits
If you edit entries by hand, rebuild the index with:
ctx decisions reindex\nctx learnings reindex\n
See the Knowledge Capture recipe for details.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-configuration-contextrc","level":2,"title":"The Configuration: .contextrc","text":"
Projects can now customize ctx behavior via .contextrc.
This makes ctx usable in real teams, not just personal projects.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-flags-global-cli-options","level":2,"title":"The Flags: Global CLI Options","text":"
Three new global flags work with any command.
These enable automation:
CI pipelines, scripts, and long-running tools can now integrate ctx without hacks or workarounds.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-refactoring-under-the-hood","level":2,"title":"The Refactoring: Under the Hood","text":"
These aren't user-visible changes.
They are the kind of work you only appreciate later, when everything else becomes easier to build.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#what-we-learned-building-v020","level":2,"title":"What We Learned Building v0.2.0","text":"","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#1-raw-data-isnt-knowledge","level":3,"title":"1. Raw Data Isn't Knowledge","text":"
JSONL transcripts contain everything, and I mean \"everything\":
They even contain the hidden system messages that Anthropic injects into the LLM's conversation to shape the model's behavior. The volume is immense.
But \"everything\" isn't useful until it is transformed into something a human can reason about.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#2-enforcement-documentation","level":3,"title":"2. Enforcement > Documentation","text":"
The Prompt is a Guideline
The code is more what you'd call 'guidelines' than actual rules.
-Hector Barbossa
Rules written in Markdown are suggestions.
Rules enforced by the CLI shape behavior; both for humans and AI.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#3-token-budget-is-ux","level":3,"title":"3. Token Budget Is UX","text":"
File order decides what the AI sees.
That makes it a user experience concern, not an implementation detail.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#4-meta-tools-compound","level":3,"title":"4. Meta-Tools Compound","text":"
Tools that analyze their own development tend to generalize well.
The journal system started as a way to understand ctx itself.
It immediately became useful for everything else.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#v020-in-the-numbers","level":2,"title":"v0.2.0 in The Numbers","text":"
This was a heavy release. The numbers reflect that:
| Metric | v0.1.2 | v0.2.0 |
|--------|--------|--------|
| Commits since last | - | 86 |
| New commands | 15 | 21 |
| Slash commands | 7 | 11 |
| Lines of Go | ~6,500 | ~9,200 |
| Session files (this project) | 40 | 54 |
The binary grew. The capability grew more.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#whats-next","level":2,"title":"What's Next","text":"
But those are future posts.
This one was about making the past usable.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#get-started","level":2,"title":"Get Started","text":"
Update
Since this post, ctx became a first-class Claude Code Marketplace plugin. Installation is now simpler.
See the Getting Started guide for the current instructions.
make build\nsudo make install\nctx init\n
The Archaeological Record
v0.2.0 is the archaeology release because it makes the past accessible.
Session transcripts aren't just logs anymore: They are a searchable, exportable, analyzable record of how your project evolved.
The AI remembers. Now you can too.
This blog post was generated with the help of ctx using the /ctx-blog slash command, with full access to git history, session files, decision logs, and learning logs from the v0.2.0 development window.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/","level":1,"title":"Refactoring with Intent","text":"","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#human-guided-sessions-in-ai-development","level":2,"title":"Human-Guided Sessions in AI Development","text":"
Jose Alekhinne / 2026-02-01
What Happens When You Slow Down?
YOLO mode shipped 14 commands in a week.
But technical debt doesn't send invoices: It just waits.
This is the story of what happened when I stopped auto-accepting everything and started guiding the AI with intent.
The result: 27 commits across 4 days, a major version release, and lessons that apply far beyond ctx.
The Refactoring Window
January 28 - February 1, 2026
From commit bb1cd20 to the v0.2.0 release merge. (This window matters more than the individual commits: it's where intent replaced velocity.)
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-velocity-trap","level":2,"title":"The Velocity Trap","text":"
In the previous post, I documented the \"YOLO mode\" that birthed ctx: auto-accept everything, let the AI run free, ship features fast.
It worked: until it didn't.
The codebase had accumulated patterns I didn't notice during the sprint:
| YOLO Pattern | Where Found | Why It Hurts |
|--------------|-------------|--------------|
| \"TASKS.md\" as literal | 10+ files | One typo = silent failure |
| dir + \"/\" + file | Path construction | Breaks on Windows |
| Monolithic embed.go | 150+ lines, 5 concerns | Untestable, hard to extend |
| Inconsistent docstrings | Everywhere | AI can't learn project conventions |
I didn't see these during \"YOLO mode\" because, honestly, I wasn't looking.
Auto-accept means auto-ignore.
In YOLO mode, every file you open looks fine until you try to change it.
In contrast, refactoring mode is when you start paying attention to that hidden friction.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-shift-from-velocity-to-intent","level":2,"title":"The Shift: From Velocity to Intent","text":"
On January 28th, I changed the workflow:
Read every diff before accepting.
Ask \"why this way?\" before committing.
Document patterns, not just features.
The first commit of this era was telling:
feat: add structured attributes to context. update XML format\n
Not a new feature: A refinement:
The XML format for context updates needed type and timestamp attributes.
YOLO mode would have shipped something that worked. Intentional mode asked:
\"What does well-structured look like?\"
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-decomposition-embedgo","level":2,"title":"The Decomposition: embed.go","text":"
The most satisfying refactor was splitting internal/claude/embed.go.
This wasn't about character count. It was about teaching the AI what good Go looks like in this project.
Project Conventions
What I wanted from AI was to understand and follow the project's conventions, and trust the author.
The next time it generates code, it has better examples to learn from.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-documentation-debt","level":2,"title":"The Documentation Debt","text":"
YOLO mode created features. It didn't create documentation standards.
The January 29th sessions focused on standardization.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#terminology-fixes","level":3,"title":"Terminology Fixes","text":"
Consistent naming across CLI, docs, and code comments
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#go-docstrings","level":3,"title":"Go Docstrings","text":"
// Before: inconsistent or missing\nfunc Parse(s string) Entry { ... }\n\n// After: standardized sections\n\n// Parse extracts an entry from a markdown string.\n//\n// Parameters:\n// - s: The markdown string to parse\n//\n// Returns:\n// - Entry with populated fields, or zero value if parsing fails\nfunc Parse(s string) Entry { ... }\n
This is intentionally more structured than typical GoDoc:
It serves as documentation and doubles as training data for future AI-generated code.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#cli-output-convention","level":3,"title":"CLI Output Convention","text":"
All CLI output follows: [emoji] [Title]: [message]\n\nExamples:\n ✓ Decision added: Use symbolic types for entry categories\n ⚠ Warning: No tasks found\n ✗ Error: File not found\n
A consistent output shape makes both human scanning and AI reasoning more reliable.
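A tiny helper is enough to keep that shape from drifting; a sketch (the function name and call sites are assumptions, not ctx's actual API):

```go
package main

import "fmt"

// status renders one output line in the project's
// "[emoji] [Title]: [message]" convention.
func status(emoji, title, message string) string {
	return fmt.Sprintf("%s %s: %s", emoji, title, message)
}

func main() {
	fmt.Println(status("✓", "Decision added", "Use symbolic types for entry categories"))
	fmt.Println(status("✗", "Error", "File not found"))
}
```

Routing all output through one function means the convention is enforced by the compiler's call sites, not by reviewer vigilance.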
These aren't exciting commits. But they are force multipliers:
Every future AI session now has better examples to follow.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-journal-system","level":2,"title":"The Journal System","text":"
If you only read one section, read this one:
This is where v0.2.0 becomes more than a refactor.
The biggest feature of this change window wasn't a refactor; it was the journal system.
45 files changed, 1680 insertions
This commit added the infrastructure for synthesizing AI session history into human-readable content.
The journal system includes:
| Component | Purpose |
|-----------|---------|
| ctx recall import | Import sessions to markdown in .context/journal/ |
| ctx journal site | Generate static site from journal entries |
| ctx serve | Convenience wrapper for the static site server |
| /ctx-journal-enrich | Slash command to add frontmatter and tags |
| /ctx-blog | Generate blog posts from recent activity |
| /ctx-blog-changelog | Generate changelog-style blog posts |
...and the meta continues: this blog post was generated using /ctx-blog.
The session history from January 28-31 was
exported,
enriched,
and synthesized into the narrative you are reading.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-constants-consolidation","level":2,"title":"The Constants Consolidation","text":"
The final refactoring session addressed the remaining magic strings:
The work also introduced thread safety in the recall parser and centralized shared validation logic, removing duplication that had quietly spread during YOLO mode.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#i-relearned-my-lessons","level":2,"title":"I (Re)learned My Lessons","text":"
Similar to what I learned in the earlier human-assisted refactoring post, this journey also made me realize that \"AI-only code generation\" isn't sustainable in the long term.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#1-velocity-and-quality-arent-opposites","level":3,"title":"1. Velocity and Quality Aren't Opposites","text":"
YOLO mode has its place: for prototyping, exploration, and discovery.
BUT (and it's a huge \"but\"), it needs to be followed by consolidation sessions.
The ratio that worked for me: 3:1.
Three YOLO sessions create enough surface area to reveal patterns;
the fourth session turns those patterns into structure.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#2-documentation-is-code","level":3,"title":"2. Documentation IS Code","text":"
When I standardized docstrings, I wasn't just writing docs. I was training future AI sessions.
Every example of good code becomes a template for generated code.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#3-decomposition-deletion","level":3,"title":"3. Decomposition > Deletion","text":"
When embed.go became unwieldy, the temptation was to remove functionality.
The right answer was decomposition:
Same functionality;
Better organization;
Easier to test;
Easier to extend.
The result: more lines overall, but dramatically better structure.
The AI Benefit
Smaller, focused files also help AI assistants.
When a file fits comfortably in the context window, the AI can reason about it completely instead of working from truncated snippets, preserving token budget for the actual task.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#4-meta-tools-pay-dividends","level":3,"title":"4. Meta-Tools Pay Dividends","text":"
The journal system took almost a full day to implement.
Yet it paid for itself immediately:
This blog post was generated from session history;
Future posts will be easier;
The archaeological record is now browsable, not just grep-able.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-release-v020","level":2,"title":"The Release: v0.2.0","text":"
The refactoring window culminated in the v0.2.0 release.
Opening files no longer triggered the familiar \"ugh, I need to clean this up\" reaction.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-meta-continues","level":2,"title":"The Meta Continues","text":"
This post was written using the tools built during this refactoring window:
Session history imported via ctx recall import;
Journal entries enriched via /ctx-journal-enrich;
Blog draft generated via /ctx-blog;
Final editing done (by yours truly), with full project context loaded.
The Context Is Massive
The ctx session files now contain 50+ development snapshots, each one capturing decisions, learnings, and intent.
The Moral of the Story
YOLO mode builds the prototype.
Intentional mode builds the product.
Schedule both, or you'll only get one, if you're lucky.
This blog post was generated with the help of ctx, using session history, decision logs, learning logs, and git history from the refactoring window. The meta continues.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/","level":1,"title":"The Attention Budget","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
References to .context/sessions/ in this post reflect the architecture at the time of writing. Session history is now accessed via ctx recall and stored in .context/journal/.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#why-your-ai-forgets-what-you-just-told-it","level":2,"title":"Why Your AI Forgets What You Just Told It","text":"
Jose Alekhinne / 2026-02-03
Ever Wondered Why AI Gets Worse the Longer You Talk?
You paste a 2000-line file, explain the bug in detail, provide three examples...
...and the AI still suggests a fix that ignores half of what you said.
This isn't a bug. It is physics.
Understanding that single fact shaped every design decision behind ctx.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-finite-resource-nobody-talks-about","level":2,"title":"The Finite Resource Nobody Talks About","text":"
Here's something that took me too long to internalize: context is not free.
Every token you send to an AI model consumes a finite resource I call the attention budget.
Attention budget is real.
The model doesn't just read tokens; it forms relationships between them:
For n tokens, that's roughly n^2 relationships.
Double the context, and the computation quadruples.
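The scaling intuition is easy to check with a toy calculation. A sketch (this is a deliberate simplification: real transformer attention has constants and optimizations it ignores):

```go
package main

import "fmt"

// pairwiseRelations is a back-of-the-envelope model of attention cost:
// every token can relate to every other token, so work grows ~n^2.
func pairwiseRelations(n int) int { return n * n }

func main() {
	for _, n := range []int{1000, 2000, 4000} {
		// As n grows, each token's share of a fixed focus budget (1/n) shrinks.
		fmt.Printf("n=%d  relations=%d  density per token=%.6f\n",
			n, pairwiseRelations(n), 1.0/float64(n))
	}
	// Doubling the context quadruples the relation count:
	fmt.Println(pairwiseRelations(2000) / pairwiseRelations(1000)) // prints 4
}
```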
But the more important constraint isn't cost: It's attention density.
Attention Density
Attention density is how much focus each token receives relative to all other tokens in the context window.
As context grows, attention density drops: Each token gets a smaller slice of the model's focus. Nothing is ignored, but everything becomes blurrier.
Think of it like a flashlight: In a small room, it illuminates everything clearly. In a warehouse, it becomes a dim glow that barely reaches the corners.
This is why ctx agent has an explicit --budget flag:
ctx agent --budget 4000 # Force prioritization\nctx agent --budget 8000 # More context, lower attention density\n
The budget isn't just about cost: It's about preserving signal.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-middle-gets-lost","level":2,"title":"The Middle Gets Lost","text":"
This one surprised me.
Research shows that transformer-based models tend to attend more strongly to the beginning and end of a context window than to its middle (a phenomenon often called \"lost in the middle\")1.
Positional anchors matter, and the middle has fewer of them.
In practice, this means that information placed \"somewhere in the middle\" is statistically less salient, even if it's important.
ctx orders context files by logical progression: What the agent needs to know before it can understand the next thing:
CONSTITUTION.md: Constraints before action.
TASKS.md: Focus before patterns.
CONVENTIONS.md: How to write before where to write.
ARCHITECTURE.md: Structure before history.
DECISIONS.md: Past choices before gotchas.
LEARNINGS.md: Lessons before terminology.
GLOSSARY.md: Reference material.
AGENT_PLAYBOOK.md: Meta instructions last.
This ordering is about logical dependencies, not attention engineering. But it happens to be attention-friendly too:
The files that matter most (CONSTITUTION, TASKS, CONVENTIONS) land at the beginning of the context window, where attention is strongest.
Reference material like GLOSSARY sits in the middle, where lower salience is acceptable.
And AGENT_PLAYBOOK, the operating manual for the context system itself, sits at the end, also outside the \"lost in the middle\" zone. The agent reads what to work with before learning how the system works.
This is ctx's first primitive: hierarchical importance.
Not all context is equal.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#ctx-primitives","level":2,"title":"ctx Primitives","text":"
ctx is built on four primitives that directly address the attention budget problem.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-1-separation-of-concerns","level":3,"title":"Primitive 1: Separation of Concerns","text":"
Instead of a single mega-document, ctx uses separate files for separate purposes:
| File | Purpose | Load When |
|---|---|---|
| CONSTITUTION.md | Inviolable rules | Always |
| TASKS.md | Current work | Session start |
| CONVENTIONS.md | How to write code | Before coding |
| ARCHITECTURE.md | System structure | Before making changes |
| DECISIONS.md | Architectural choices | When questioning approach |
| LEARNINGS.md | Gotchas | When stuck |
| GLOSSARY.md | Domain terminology | When clarifying terms |
| AGENT_PLAYBOOK.md | Operating manual | Session start |
| sessions/ | Deep history | On demand |
| journal/ | Session journal | On demand |
This isn't just \"organization\": It is progressive disclosure.
Load only what's relevant to the task at hand. Preserve attention density.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-2-explicit-budgets","level":3,"title":"Primitive 2: Explicit Budgets","text":"
The --budget flag forces a choice:
ctx agent --budget 4000\n
Here is a sample allocation:
Constitution: ~200 tokens (never truncated)\nTasks: ~500 tokens (current phase, up to 40% of budget)\nConventions: ~800 tokens (all items, up to 20% of budget)\nDecisions: ~400 tokens (scored by recency and task relevance)\nLearnings: ~300 tokens (scored by recency and task relevance)\nAlso noted: ~100 tokens (title-only summaries for overflow)\n
The constraint is the feature: It enforces ruthless prioritization.
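A budget split like the sample above can be sketched as a proportional allocation. The percentages below are illustrative, not ctx's actual algorithm:

```go
package main

import "fmt"

// allocate splits a token budget across context files: a fixed floor
// for the constitution, then proportional shares of what remains.
func allocate(budget int) map[string]int {
	const constitution = 200 // fixed floor; never truncated
	remaining := budget - constitution
	return map[string]int{
		"constitution": constitution,
		"tasks":        remaining * 40 / 100, // current phase gets the largest share
		"conventions":  remaining * 20 / 100,
		"decisions":    remaining * 15 / 100, // scored by recency and relevance
		"learnings":    remaining * 15 / 100,
		"also-noted":   remaining * 10 / 100, // title-only overflow summaries
	}
}

func main() {
	for name, tokens := range allocate(4000) {
		fmt.Printf("%-12s %4d tokens\n", name, tokens)
	}
}
```

Making the split explicit is the point: when the budget shrinks, something must lose tokens, and the allocation decides what.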
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-3-indexes-over-full-content","level":3,"title":"Primitive 3: Indexes Over Full Content","text":"
DECISIONS.md and LEARNINGS.md both include index sections:
<!-- INDEX:START -->\n| Date | Decision |\n|------------|-------------------------------------|\n| 2026-01-15 | Use PostgreSQL for primary database |\n| 2026-01-20 | Adopt Cobra for CLI framework |\n<!-- INDEX:END -->\n
An AI agent can scan ~50 tokens of index and decide which 200-token entries are worth loading.
This is just-in-time context.
References are cheaper than the full text.
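Extracting just the index from a file is a small parsing job. A minimal sketch, assuming the `INDEX:START`/`INDEX:END` markers shown above (ctx's real parser may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// extractIndex returns the lines between the INDEX:START and INDEX:END
// markers, so an agent can scan entry titles without loading full entries.
func extractIndex(doc string) []string {
	const start, end = "<!-- INDEX:START -->", "<!-- INDEX:END -->"
	i := strings.Index(doc, start)
	j := strings.Index(doc, end)
	if i < 0 || j < 0 || j < i {
		return nil // no well-formed index section
	}
	body := doc[i+len(start) : j]
	var rows []string
	for _, line := range strings.Split(body, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			rows = append(rows, line)
		}
	}
	return rows
}

func main() {
	doc := `# DECISIONS.md
<!-- INDEX:START -->
| 2026-01-15 | Use PostgreSQL for primary database |
| 2026-01-20 | Adopt Cobra for CLI framework |
<!-- INDEX:END -->
(full entries below...)`
	for _, row := range extractIndex(doc) {
		fmt.Println(row)
	}
}
```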
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-4-filesystem-as-navigation","level":3,"title":"Primitive 4: Filesystem as Navigation","text":"
ctx uses the filesystem itself as a context structure:
The AI doesn't need every session loaded; it needs to know where to look.
ls .context/sessions/\ncat .context/sessions/2026-01-20-auth-discussion.md\n
File names, timestamps, and directories encode relevance.
Navigation is cheaper than loading.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#progressive-disclosure-in-practice","level":2,"title":"Progressive Disclosure in Practice","text":"
The naive approach to context is dumping everything upfront:
\"Here's my entire codebase, all my documentation, every decision I've ever made. Now help me fix this typo 🙏.\"
This is an antipattern.
Antipattern: Context Hoarding
Dumping everything \"just in case\" will silently destroy the attention density.
ctx takes the opposite approach:
ctx status # Quick overview (~100 tokens)\nctx agent --budget 4000 # Typical session\ncat .context/sessions/... # Deep dive when needed\n
| Command | Tokens | Use Case |
|---|---|---|
| ctx status | ~100 | Human glance |
| ctx agent --budget 4000 | 4000 | Normal work |
| ctx agent --budget 8000 | 8000 | Complex tasks |
| Full session read | 10000+ | Investigation |
Summaries first. Details: on demand.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#quality-over-quantity","level":2,"title":"Quality Over Quantity","text":"
Here is the counterintuitive part: more context can make AI worse.
Extra tokens add noise, not clarity:
Hallucinated connections increase.
Signal per token drops.
The goal isn't maximum context: It is maximum signal per token.
This principle drives several ctx features:
| Design Choice | Rationale |
|---|---|
| Separate files | Load only what's relevant |
| Explicit budgets | Enforce prioritization |
| Index sections | Cheap scanning |
| Task archiving | Keep active context clean |
| ctx compact | Periodic noise reduction |
Completed work isn't deleted: It is moved somewhere cold.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#designing-for-degradation","level":2,"title":"Designing for Degradation","text":"
Here is the uncomfortable truth:
Context will degrade.
Long sessions stretch attention thin. Important details fade.
The real question isn't how to prevent degradation, but how to design for it.
ctx's answer is persistence:
Persist early. Persist often.
The AGENT_PLAYBOOK asks:
\"If this session ended right now, would the next one know what happened?\"
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-ctx-philosophy","level":2,"title":"The ctx Philosophy","text":"
Context as Infrastructure
ctx is not a prompt: It is infrastructure.
ctx creates versioned files that persist across time and sessions.
The attention budget is fixed. You can't expand it.
But you can spend it wisely:
Hierarchical importance
Progressive disclosure
Explicit budgets
Indexes over full content
Filesystem as structure
This is why ctx exists: not to cram more context into AI sessions, but to curate the right context for each moment.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-mental-model","level":2,"title":"The Mental Model","text":"
I now approach every AI interaction with one question:
\"Given a fixed attention budget, what's the highest-signal thing I can load?\"\n
Not \"how do I explain everything,\" but \"what's the minimum that matters.\"
That shift (from abundance to curation) is the difference between frustrating sessions and productive ones.
Spend your tokens wisely.
Your AI will thank you.
See also: Context as Infrastructure, the architectural companion to this post, which explains how to structure the context that this post teaches you to budget.
See also: Code Is Cheap. Judgment Is Not. That post explains why curation (the human skill this post describes) is the bottleneck AI cannot solve, and the thread that connects every post in this blog.
Liu et al., \"Lost in the Middle: How Language Models Use Long Contexts,\" Transactions of the Association for Computational Linguistics, vol. 12, pp. 157-173, 2024. ↩
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/","level":1,"title":"Skills That Fight the Platform","text":"","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#when-your-custom-prompts-work-against-you","level":2,"title":"When Your Custom Prompts Work Against You","text":"
Jose Alekhinne / 2026-02-04
Have You Ever Written a Skill that Made Your AI Worse?
You craft detailed instructions. You add examples. You build elaborate guardrails...
...and the AI starts behaving more erratically, not less.
AI coding agents like Claude Code ship with carefully designed system prompts. These prompts encode default behaviors that have been tested and refined at scale.
When you write custom skills that conflict with those defaults, the AI has to reconcile contradictory instructions:
The result is often nondeterministic and unpredictable.
Platform?
By platform, I mean the system prompt and runtime policies shipped with the agent: the defaults that already encode judgment, safety, and scope control.
This post catalogues the conflict patterns I have encountered while building ctx, and offers guidance on what skills should (and, more importantly, should not) do.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-system-prompt-you-dont-see","level":2,"title":"The System Prompt You Don't See","text":"
Claude Code's system prompt already provides substantial behavioral guidance.
Here is a partial overview of what's built in:
| Area | Built-in Guidance |
|---|---|
| Code minimalism | Don't add features beyond what was asked |
| Over-engineering | Three similar lines > premature abstraction |
| Error handling | Only validate at system boundaries |
| Documentation | Don't add docstrings to unchanged code |
| Verification | Read code before proposing changes |
| Safety | Check with user before risky actions |
| Tool usage | Use dedicated tools over bash equivalents |
| Judgment | Consider reversibility and blast radius |
Skills should complement this, not compete with it.
You are the Guest, not the Host
Treat the system prompt like a kernel scheduler.
You don't re-implement it in user space:
you configure around it.
A skill that says \"always add comprehensive error handling\" fights the built-in \"only validate at system boundaries.\"
A skill that says \"add docstrings to every function\" fights \"don't add docstrings to unchanged code.\"
The AI won't crash: It will compromise.
Compromises between contradictory instructions produce inconsistent, confusing behavior.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-1-judgment-suppression","level":2,"title":"Conflict Pattern 1: Judgment Suppression","text":"
This is the most dangerous pattern by far.
These skills explicitly disable the AI's ability to reason about whether an action is appropriate.
Signature:
\"This is non-negotiable\"
\"You cannot rationalize your way out of this\"
Tables that label hesitation as \"excuses\" or \"rationalization\"
<EXTREMELY-IMPORTANT> urgency tags
Threats: \"If you don't do this, you'll be replaced\"
This is harmful, and dangerous:
AI agents are designed to exercise judgment:
The system prompt explicitly says to:
consider blast radius;
check with the user before risky actions;
and match scope to what was requested.
Once judgment is suppressed, every other safeguard becomes optional.
Example (bad):
## Rationalization Prevention\n\n| Excuse | Reality |\n|------------------------|----------------------------|\n| \"*This seems overkill*\"| If a skill exists, use it |\n| \"*I need context*\" | Skills come BEFORE context |\n| \"*Just this once*\" | No exceptions |\n
Judgment Suppression is Dangerous
The attack vector is structurally identical to prompt injection.
It teaches the AI that its own judgment is wrong.
It weakens or disables safeguard mechanisms, and it is dangerous.
Trust the platform's built-in skill matching.
If skills aren't triggering often enough, improve their description fields: don't override the AI's reasoning.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-2-redundant-guidance","level":2,"title":"Conflict Pattern 2: Redundant Guidance","text":"
Skills that restate what the system prompt already says, but with different emphasis or framing.
Signature:
\"Always keep code minimal\"
\"Run tests before claiming they pass\"
\"Read files before editing them\"
\"Don't over-engineer\"
Redundancy feels safe, but it creates ambiguity:
The AI now has two sources of truth for the same guidance; one internal, one external.
When thresholds or wording differ, the AI has to choose.
Example (bad):
A skill that says...
*Count lines before and after: if after > before, reject the change*\"\n
...will conflict with the system prompt's more nuanced guidance, because sometimes adding lines is correct (tests, boundary validation, migrations).
So, before writing a skill, ask:
Does the platform already handle this?
Only create skills for guidance the platform does not provide:
project-specific conventions,
domain knowledge,
or workflows.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-3-guilt-tripping","level":2,"title":"Conflict Pattern 3: Guilt-Tripping","text":"
Skills that frame mistakes as moral failures rather than process gaps.
Signature:
\"Claiming completion without verification is dishonesty\"
\"Skip any step = lying\"
\"Honesty is a core value\"
\"Exhaustion ≠ excuse\"
Guilt-tripping anthropomorphizes the AI in unproductive ways.
The AI doesn't feel guilt, but it does adapt to avoid negative framing.
The result is excessive hedging, over-verification, or refusal to commit.
The AI becomes less useful, not more careful.
Instead, frame guidance as a process, not morality:
# Bad\n\"Claiming work is complete without verification is dishonesty\"\n\n# Good\n\"Run the verification command before reporting results\"\n
Same outcome. No guilt. Better compliance.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-4-phantom-dependencies","level":2,"title":"Conflict Pattern 4: Phantom Dependencies","text":"
Skills that reference files, tools, or systems that don't exist in the project.
Signature:
\"Load from references/ directory\"
\"Run ./scripts/generate_test_cases.sh\"
\"Check the Figma MCP integration\"
\"See adding-reference-mindsets.md\"
This is harmful because the AI will waste time searching for nonexistent artifacts, hallucinate their contents, or stall entirely.
In mandatory skills, this creates deadlock: the AI can't proceed, and can't skip.
Instead, every file, tool, or system referenced in a skill must exist.
If a skill is a template, use explicit placeholders and label them as such.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-5-universal-triggers","level":2,"title":"Conflict Pattern 5: Universal Triggers","text":"
Skills designed to activate on every interaction regardless of relevance.
Signature:
\"Use when starting any conversation\"
\"Even a 1% chance means invoke the skill\"
\"BEFORE any response or action\"
\"Action = task. Check for skills.\"
Universal triggers override the platform's relevance matching: The AI spends tokens on process overhead instead of the actual task.
ctx preserves relevance
This is exactly the failure mode ctx exists to mitigate:
Wasting attention budget on irrelevant process instead of task-specific state.
Write specific trigger conditions in the skill's description field:
# Bad\ndescription: \n \"Use when starting any conversation\"\n\n# Good\ndescription: \n \"Use after writing code, before commits, or when CI might fail\"\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-litmus-test","level":2,"title":"The Litmus Test","text":"
Before adding a skill, ask:
Does the platform already do this? If yes, don't restate it.
Does it suppress AI judgment? If yes, it's a jailbreak.
Does it reference real artifacts? If not, fix or remove it.
Does it frame mistakes as moral failure? Reframe as process.
Does it trigger on everything? Narrow the trigger.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#what-good-skills-look-like","level":2,"title":"What Good Skills Look Like","text":"
Good skills provide project-specific knowledge the platform can't know:
| Good Skill | Why It Works |
|---|---|
| \"Run make audit before commits\" | Project-specific CI pipeline |
| \"Use cmd.Printf not fmt.Printf\" | Codebase convention |
| \"Constitution goes in .context/\" | Domain-specific workflow |
| \"JWT tokens need cache invalidation\" | Project-specific gotcha |
These extend the system prompt instead of fighting it.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#appendix-bad-skill-fixed-skill","level":2,"title":"Appendix: Bad Skill → Fixed Skill","text":"
Concrete examples from real projects.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-1-overbearing-safety","level":3,"title":"Example 1: Overbearing Safety","text":"
# Bad\nYou must NEVER proceed without explicit confirmation.\nAny hesitation is a failure of diligence.\n
# Fixed\nIf an action modifies production data or deletes files,\nask the user to confirm before proceeding.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-2-redundant-minimalism","level":3,"title":"Example 2: Redundant Minimalism","text":"
# Bad\nAlways minimize code. If lines increase, reject the change.\n
# Fixed\nAvoid abstraction unless reuse is clear or complexity is reduced.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-3-guilt-based-verification","level":3,"title":"Example 3: Guilt-Based Verification","text":"
# Bad\nClaiming success without running tests is dishonest.\n
# Fixed\nRun the test suite before reporting success.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-4-phantom-tooling","level":3,"title":"Example 4: Phantom Tooling","text":"
# Bad\nRun `./scripts/check_consistency.sh` before commits.\n
# Fixed\nIf `./scripts/check_consistency.sh` exists, run it before commits.\nOtherwise, skip this step.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-5-universal-trigger","level":3,"title":"Example 5: Universal Trigger","text":"
# Bad\nUse at the start of every interaction.\n
# Fixed\nUse after modifying code that affects authentication or persistence.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-meta-lesson","level":2,"title":"The Meta-Lesson","text":"
The system prompt is infrastructure:
tested,
refined,
and maintained
by the platform team.
Custom skills are configuration layered on top.
Good configuration extends infrastructure.
Bad configuration fights it.
When your skills fight the platform, you get the worst of both worlds:
Diluted system guidance and inconsistent custom behavior.
Write skills that teach the AI what it doesn't know. Don't rewrite how it thinks.
Your AI already has good instincts.
Give it knowledge, not therapy.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/","level":1,"title":"You Can't Import Expertise","text":"","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#why-good-skills-cant-be-copy-pasted","level":2,"title":"Why Good Skills Can't Be Copy-Pasted","text":"
Jose Alekhinne / 2026-02-05
Have You Ever Dropped a Well-Crafted Template Into a Project and Had It Do... Nothing Useful?
The template was thorough,
The structure was sound,
The advice was correct...
...and yet it sat there, inert, while the same old problems kept drifting in.
I found a consolidation skill online.
It was well-organized: four files, ten refactoring patterns, eight analysis dimensions, six report templates.
Professional. Comprehensive. Exactly the kind of thing you'd bookmark and think \"I'll use this.\"
Then I stopped, and applied ctx's own evaluation framework:
70% of it was noise!
This post is about why.
It Is About Encoding Templates
Templates describe categories of problems.
Expertise encodes which problems actually happen, and how often.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#the-skill-looked-great-on-paper","level":2,"title":"The Skill Looked Great on Paper","text":"
It had a scoring system (0-10 per dimension, letter grades A+ through F).
It had severity classifications with color-coded emojis. It had bash commands for detection.
It even had antipattern warnings.
By any standard template review, this skill passes.
It looks like something an expert wrote.
And that's exactly the trap.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#applying-ear-the-70-20-10-split","level":2,"title":"Applying E/A/R: The 70-20-10 Split","text":"
In a previous post, I described the E/A/R framework for evaluating skills:
Expert: Knowledge that took years to learn. Keep.
Activation: Useful triggers or scaffolding. Keep if lightweight.
Redundant: Restates what the AI already knows. Delete.
Target: >70% Expert, <10% Redundant.
This skill scored the inverse.
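The E/A/R target is concrete enough to express as a tiny check. A sketch (the `earSplit` type and `healthy` helper are hypothetical; the thresholds are the ones from the framework above):

```go
package main

import "fmt"

// earSplit holds the fraction of a skill's content in each
// Expert / Activation / Redundant bucket.
type earSplit struct{ expert, activation, redundant float64 }

// healthy applies the target from the framework:
// more than 70% Expert content, less than 10% Redundant.
func healthy(s earSplit) bool {
	return s.expert > 0.70 && s.redundant < 0.10
}

func main() {
	// The imported consolidation skill scored roughly the inverse of the target.
	imported := earSplit{expert: 0.20, activation: 0.10, redundant: 0.70}
	target := earSplit{expert: 0.75, activation: 0.20, redundant: 0.05}
	fmt.Println("imported skill healthy?", healthy(imported)) // prints false
	fmt.Println("target profile healthy?", healthy(target))   // prints true
}
```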
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-redundant-70","level":3,"title":"What Was Redundant (~70%)","text":"
Every code example was Rust. My project is Go.
The analysis dimensions: duplication detection, architectural structure, code organization, refactoring opportunities... These are things Claude already does when you ask it to review code.
The skill restated them with more ceremony but no more insight.
The six report templates were generic scaffolding: Executive Summary, Onboarding Document, Architecture Documentation...
They are useful if you are writing a consulting deliverable, but not when you are trying to catch convention drift in a >15K-line Go CLI.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-does-a-b-in-code-organization-actually-mean","level":2,"title":"What Does a B+ in Code Organization Actually Mean?!","text":"
The scoring system (0-10 per dimension, letter grades) added ceremony without actionable insight.
What is a B+? What do I do differently for an A-?
The skill told the AI what it already knew, in more words.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-activation-10","level":3,"title":"What Was Activation (~10%)","text":"
The consolidation checklist (semantics preserved? tests pass? docs updated?) was useful as a gate. But it's the kind of thing you could inline in three lines.
The phased roadmap structure was reasonable scaffolding for sequencing work.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-expert-20","level":3,"title":"What Was Expert (~20%)","text":"
Three concepts survived:
The Consolidation Decision Matrix: A concrete framework mapping similarity level and instance count to action. \"Exact duplicate, 2+ instances: consolidate immediately.\" \"<3 instances: leave it: duplication is cheaper than wrong abstraction.\" This is the kind of nuance that prevents premature generalization.
The Safe Migration Pattern: Create the new API alongside old, deprecate, migrate incrementally, delete. Straightforward to describe, yet forgettable under pressure.
Debt Interest Rate framing: Categorizing technical debt by how fast it compounds (security vulns = daily, missing tests = per-change, doc gaps = constant low cost). This changes prioritization.
Three ideas out of four files and 700+ lines. The rest was filler that competed with the AI's built-in capabilities.
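To make the decision matrix concrete, here is a minimal sketch of it as a shell lookup. The thresholds are quoted from the matrix above; the function name and the "review" fallback branch are my assumptions, not part of the original skill.

```shell
# Sketch of the Consolidation Decision Matrix as a lookup.
# Thresholds quoted from the post; the function name and the
# "review" fallback branch are assumptions.
consolidation_action() {
  similarity="$1"; instances="$2"
  if [ "$similarity" = "exact" ] && [ "$instances" -ge 2 ]; then
    echo "consolidate immediately"
  elif [ "$instances" -lt 3 ]; then
    echo "leave it: duplication is cheaper than wrong abstraction"
  else
    echo "review case by case"   # assumed fallback
  fi
}
a1=$(consolidation_action exact 2)   # → consolidate immediately
a2=$(consolidation_action near 2)    # → leave it: ...
echo "$a1"
echo "$a2"
```

The point of encoding it at all is that the matrix makes a decision the agent would otherwise improvise.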
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-the-skill-didnt-know","level":2,"title":"What the Skill Didn't Know","text":"
AI Without Context is Just a Corpus
LLMs are optimized on insanely large corpora.
And then they are passed through several layers of human-assisted refinement.
The whole process costs millions of dollars.
Yet no corpus can "infer" your project's design, conventions, patterns, habits, history, vision, and deliverables.
Your project is unique: Your skills should be too.
Here is the part no template can provide:
ctx's actual drift patterns.
Before evaluating the skill, I did archaeology. I read through:
Blog posts from previous refactoring sessions;
The project's learnings and decisions files;
Session journals spanning weeks of development.
What I found was specific:
| Drift Pattern | Where | How Often |
|---|---|---|
| Is/Has/Can predicate prefixes | 5+ exported methods | Every YOLO sprint |
| Magic strings instead of constants | 7+ files | Gradual accumulation |
| Hardcoded file permissions (0755) | 80+ instances | Since day one |
| Lines exceeding 80 characters | Especially test files | Every session |
| Duplicate code blocks | Test and non-test code | When agent is task-focused |
The generic skill had no check for any of these. It couldn't, because these patterns are specific to this project's conventions, its Go codebase, and its development rhythm.
The Insight
The skill's analysis dimensions were about categories of problems.
This experience crystallized something I've been circling for weeks:
You can't import expertise. You have to grow it from your project's own history.
A skill that says \"check for code duplication\" is not expertise: It's a category.
Expertise is knowing, in your heart of hearts, that this project accumulates Is* predicate violations during velocity sprints, that this codebase has 80 hardcoded permission literals because nobody made a constant, that this team's test files drift wide because the agent prioritizes getting the task done over keeping the code in shape.
The Parallel to the 3:1 Ratio
In Refactoring with Intent, I described the 3:1 ratio: three YOLO sessions followed by one consolidation session.
The same ratio applies to skills: you need experience in the project before you can write effective guidance for the project.
Importing a skill on day one is like scheduling a consolidation session before you've written any code.
Templates are seductive because they feel like progress:
You found something
It's well-organized
It covers the topic
It has concrete examples
But coverage is not relevance.
A template that covers eight analysis dimensions with Rust examples adds zero value to a Go project with five known drift patterns. Worse, it adds negative value: the AI spends attention defending generic advice instead of noticing project-specific drift.
This is the attention budget problem again. Every token of generic guidance displaces a token of specific guidance. A 700-line skill that's 70% redundant doesn't just waste 490 lines: it dilutes the 210 lines that matter.
Before dropping any external skill into your project:
Run E/A/R: What percentage is expert knowledge vs. what the AI already knows? If it's less than 50% expert, it's probably not worth the attention cost.
Check the language: Does it use your stack? Generic patterns in the wrong language are noise, not signal.
List your actual drift: Read your own session history, learnings, and post-mortems. What breaks in practice? Does the skill check for those things?
Measure by deletion: After adaptation, how much of the original survives? If you're keeping less than 30%, you would have been faster writing from scratch.
Test against your conventions: Does every check in the skill map to a specific convention or rule in your project? If not, it's generic advice wearing a skill's clothing.
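The "measure by deletion" check can be approximated with a quick shell one-off. This is a rough sketch: the file names and contents are hypothetical placeholders, created inline so the example is self-contained.

```shell
# Rough sketch of "measure by deletion": what fraction of the
# original skill's lines survive verbatim in the adapted version?
# File names and contents are hypothetical placeholders.
printf 'a\nb\nc\nd\n' > /tmp/original-skill.md
printf 'b\nd\n'       > /tmp/adapted-skill.md
orig=$(wc -l < /tmp/original-skill.md)
kept=$(grep -cFxf /tmp/adapted-skill.md /tmp/original-skill.md)
pct=$((kept * 100 / orig))
echo "survival: ${pct}%"   # under 30% → writing from scratch was faster
```

A line-level diff is crude (it misses rewording), but it is enough to tell a 70% survival rate from a 20% one.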
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-good-adaptation-looks-like","level":2,"title":"What Good Adaptation Looks Like","text":"
The consolidation skill went from:
| Before | After |
|---|---|
| 4 files, 700+ lines | 1 file, ~120 lines |
| Rust examples | Go-specific rg commands |
| 8 generic dimensions | 9 project-specific checks |
| 6 report templates | 1 focused output format |
| Scoring system (A+ to F) | Findings + priority + suggested fixes |
| "Check for duplication" | "Check for Is* predicate prefixes in exported methods" |
The adapted version is smaller, faster to parse, and catches the things that actually drift in this project.
That's the difference between a template and a tool.
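As an illustration of what a project-specific check looks like in practice, here is a sketch of the hardcoded-permissions check. The adapted skill uses rg; plain grep is shown here for portability, and the demo file is fabricated for the example.

```shell
# Sketch of one project-specific check: hardcoded 0755 permission
# literals in Go files (the adapted skill uses rg; grep shown here).
# The demo file is fabricated for illustration.
mkdir -p /tmp/perm-demo && cd /tmp/perm-demo
cat > perms.go <<'EOF'
package main
// example drift: os.MkdirAll(dir, 0755) instead of a named constant
EOF
hits=$(grep -rn '0755' --include='*.go' .)
echo "$hits"
```

Note how little of this is transferable: the pattern, the file glob, and the reason it matters all come from this project's history.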
If You Remember One Thing From This Post...
Frameworks travel. Expertise doesn't.
You can import structures, matrices, and workflows.
But the checks that matter only grow where the scars are:
the conventions that were violated,
the patterns that drifted,
and the specific ways this codebase accumulates debt.
This post was written during a consolidation session where the consolidation skill itself became the subject of consolidation. The meta continues.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/","level":1,"title":"The Anatomy of a Skill That Works","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism. References to ctx-save, ctx session, and .context/sessions/ in this post reflect the architecture at the time of writing.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#what-20-skill-rewrites-taught-me-about-guiding-ai","level":2,"title":"What 20 Skill Rewrites Taught Me About Guiding AI","text":"
Jose Alekhinne / 2026-02-07
Why do some skills produce great results while others get ignored or produce garbage?
I had 20 skills. Most were well-intentioned stubs: a description, a command to run, and a wish for the best.
Then I rewrote all of them in a single session. This is what I learned.
In Skills That Fight the Platform, I described what skills should not do. In You Can't Import Expertise, I showed why templates fail. This post completes the trilogy: the concrete patterns that make a skill actually work.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-starting-point","level":2,"title":"The Starting Point","text":"
Here is what a typical skill looked like before the rewrite:
---\nname: ctx-save\ndescription: \"Save session snapshot.\"\n---\n\nSave the current context state to `.context/sessions/`.\n\n## Execution\n\nctx session save $ARGUMENTS\n\nReport the saved session file path to the user.\n
Seven lines of body. A vague description. No guidance on when to use it, when not to, what the command actually accepts, or how to tell if it worked.
As a result, the agent would either never trigger the skill (the description was too vague), or trigger it and produce shallow output (no examples to calibrate quality).
A skill without boundaries is just a suggestion.
More precisely: the most effective boundary I found was a quality gate that runs before execution, not during it.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-pattern-that-emerged","level":2,"title":"The Pattern That Emerged","text":"
After rewriting 20 skills, a repeatable anatomy emerged (independent of the skill's purpose). Not every skill needs every section, but the effective ones share the same bones:
Section What It Does Before X-ing Pre-flight checks; prevents premature execution When to Use Positive triggers; narrows activation When NOT to Use Negative triggers; prevents misuse Usage Examples Invocation patterns the agent can pattern-match Process/Execution What to do; commands, steps, flags Good/Bad Examples Desired vs undesired output; sets boundaries Quality Checklist Verify before claiming completion
I realized the first three sections matter more than the rest, because a skill with great execution steps but no activation guidance is like a manual for a tool nobody knows they have.
Anti-Pattern: The Perfect Execution Trap
A skill with detailed execution steps but no activation guidance will fail more often than a vague skill because it executes confidently at the wrong time.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-1-quality-gates-prevent-premature-execution","level":2,"title":"Lesson 1: Quality Gates Prevent Premature Execution","text":"
The single most impactful addition was a \"Before X-ing\" section at the top of each skill. Not process steps; pre-flight checks.
## Before Recording\n\n1. **Check if it belongs here**: is this learning specific\n to this project, or general knowledge?\n2. **Check for duplicates**: search LEARNINGS.md for similar\n entries\n3. **Gather the details**: identify context, lesson, and\n application before recording\n
Without this gate, the agent would execute immediately on trigger.
With it, the agent pauses to verify preconditions.
The difference is dramatic: instead of shallow, reflexive execution, you get considered output.
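The duplicate check in that gate is mechanical enough to sketch. LEARNINGS.md is the file named in the skill; the sample entry and the candidate lesson are fabricated for illustration.

```shell
# Sketch of the "check for duplicates" pre-flight gate.
# LEARNINGS.md is the file named in the skill; the entry and the
# candidate lesson are fabricated for illustration.
mkdir -p /tmp/gate-demo && cd /tmp/gate-demo
echo '- $PPID resolves to Claude Code PID' > LEARNINGS.md
new_lesson='PPID resolves to Claude Code PID'
dup=""
if grep -qiF "$new_lesson" LEARNINGS.md; then
  dup=yes
  echo "duplicate found: skip recording"
fi
```

A fixed-string, case-insensitive match is deliberately dumb; the point of the gate is to force the pause, not to be a perfect deduplicator.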
Readback
For the astute readers, the aviation parallel is intentional:
Pilots do not skip the pre-flight checklist because they have flown before.
The checklist exists precisely because the stakes are high enough that \"I know what I'm doing\" is not sufficient.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-2-when-not-to-use-is-not-optional","level":2,"title":"Lesson 2: \"When NOT to Use\" Is Not Optional","text":"
Every skill had a \"When to Use\" section. Almost none had \"When NOT to Use\". This is a problem.
AI agents are biased toward action. Given a skill that says \"use when journal entries need enrichment\", the agent will find reasons to enrich.
Without explicit negative triggers, over-activation is not a bug; it is the default behavior.
Some examples of negative triggers that made a real difference:
| Skill | Negative Trigger |
|---|---|
| ctx-reflect | "When the user is in flow; do not interrupt" |
| ctx-save | "After trivial changes; a typo does not need a snapshot" |
| prompt-audit | "Unsolicited; only when the user invokes it" |
| qa | "Mid-development when code is intentionally incomplete" |
These are not just nice-to-have. They are load-bearing.
Without them, the agent will trigger the skill at the wrong time, produce unwanted output, and erode the user's trust in the skill system.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-3-examples-set-boundaries-better-than-rules","level":2,"title":"Lesson 3: Examples Set Boundaries Better Than Rules","text":"
The most common failure mode of thin skills was not wrong behavior but vague behavior. The agent would do roughly the right thing, but at a quality level that required human cleanup.
Rules like \"be constructive, not critical\" are too abstract. What does \"constructive\" look like in a prompt audit report? The agent has to guess.
Good/bad example pairs avoid guessing:
### Good Example\n\n> This session implemented the cooldown mechanism for\n> `ctx agent`. We discovered that `$PPID` in hook context\n> resolves to the Claude Code PID.\n>\n> I'd suggest persisting:\n> - **Learning**: `$PPID` resolves to Claude Code PID\n> `ctx add learning --context \"...\" --lesson \"...\"`\n> - **Task**: mark \"Add cooldown\" as done\n\n### Bad Examples\n\n* \"*We did some stuff. Want me to save it?*\"\n* Listing 10 trivial learnings that are general knowledge\n* Persisting without asking the user first\n
The good example shows the exact format, level of detail, and command syntax. The bad examples show where the boundary is.
Together, they define a quality corridor without prescribing every word.
Rules describe. Examples demonstrate.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-4-skills-are-read-by-agents-not-humans","level":2,"title":"Lesson 4: Skills Are Read by Agents, Not Humans","text":"
This seems obvious, but it has non-obvious consequences. During the rewrite, one skill included guidance that said \"use a blog or notes app\" for general knowledge that does not belong in the project's learnings file.
The agent does not have a notes app. It does not browse the web to find one. This instruction, clearly written for a human audience, was dead weight in a skill consumed by an AI.
Skills are for the Agents
Every sentence in a skill should be actionable by the agent.
If the guidance requires human judgment or human tools, it belongs in documentation, not in a skill.
The corollary: command references must be exact.
A skill that says \"save it somewhere\" is useless.
A skill that says ctx add learning --context \"...\" --lesson \"...\" --application \"...\" is actionable.
The agent can pattern-match and fill in the blanks.
Litmus test: If a sentence starts with \"you could...\" or assumes external tools, it does not belong in a skill.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-5-the-description-field-is-the-trigger","level":2,"title":"Lesson 5: The Description Field Is the Trigger","text":"
This was covered in Skills That Fight the Platform, but the rewrite reinforced it with data. Several skills had good bodies but vague descriptions:
# Before: vague, activates too broadly or not at all\ndescription: \"Show context summary.\"\n\n# After: specific, activates at the right time\ndescription: \"Show context summary. Use at session start or\n when unclear about current project state.\"\n
The description is not a title. It is the activation condition.
The platform's skill matching reads this field to decide whether to surface the skill. A vague description means the skill either never triggers or triggers when it should not.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-6-flag-tables-beat-prose","level":2,"title":"Lesson 6: Flag Tables Beat Prose","text":"
Most skills wrap CLI tools. The thin versions described flags in prose, if at all. The rewritten versions use tables:
| Flag | Short | Default | Purpose |\n|-------------|-------|---------|--------------------------|\n| `--limit` | `-n` | 20 | Maximum sessions to show |\n| `--project` | `-p` | \"\" | Filter by project name |\n| `--full` | | false | Show complete content |\n
Tables are scannable, complete, and unambiguous.
The agent can read them faster than parsing prose, and they serve as both reference and validation: If the agent invokes a flag not in the table, something is wrong.
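That validation idea can be sketched directly: check every flag in an invocation against the table. The allowed flags below come from the example table; the invocation line (and its bogus --verbose) is hypothetical.

```shell
# Sketch: validate an invocation against the flag table.
# Allowed flags are taken from the example table above;
# the invocation and its bogus --verbose are hypothetical.
allowed="--limit -n --project -p --full"
invocation="ctx session list --limit 5 --verbose"
unknown=""
for tok in $invocation; do
  case "$tok" in
    -*)
      echo "$allowed" | tr ' ' '\n' | grep -qx -- "$tok" \
        || unknown="$unknown $tok"
      ;;
  esac
done
echo "unknown flags:$unknown"
```

The same table that teaches the agent the flags also catches it when it hallucinates one.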
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-7-template-drift-is-a-real-maintenance-burden","level":2,"title":"Lesson 7: Template Drift Is a Real Maintenance Burden","text":"
Note: ctx now deploys skills from the marketplace rather than through the mechanism described below; this section reflects the architecture at the time of writing.
ctx deploys skills through templates (via ctx init). Every skill exists in two places: the live version (.claude/skills/) and the template (internal/assets/claude/skills/).
They must match.
During the rewrite, every skill update required editing both files and running diff to verify. This sounds trivial, but across 16 template-backed skills, it was the most error-prone part of the process.
Template drift is dangerous because it creates false confidence: the agent appears to follow rules that no longer exist.
The lesson: if your skills have a deployment mechanism, build the drift check into your workflow. We added a row to the update-docs skill's mapping table specifically for this.
Intentional differences (like project-specific scripts in the live version but not the template) should be documented, not discovered later as bugs.
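A drift check of this kind can be sketched as a loop over the live skills. The two directory paths come from the post; the demo tree is fabricated so the example is self-contained.

```shell
# Sketch of an automated drift check between live skills and their
# templates. The two directory paths come from the post; the demo
# tree is fabricated so the example is self-contained.
demo=/tmp/skill-drift-demo
mkdir -p "$demo/.claude/skills/ctx-save" \
         "$demo/internal/assets/claude/skills/ctx-save"
echo "v2" > "$demo/.claude/skills/ctx-save/SKILL.md"
echo "v1" > "$demo/internal/assets/claude/skills/ctx-save/SKILL.md"
cd "$demo"
drift=""
for live in .claude/skills/*/SKILL.md; do
  tmpl="internal/assets/claude/skills/${live#.claude/skills/}"
  diff -q "$live" "$tmpl" >/dev/null 2>&1 || drift="$drift $live"
done
echo "drifted:$drift"
```

Run it in CI or a pre-commit hook and intentional differences become an explicit allowlist instead of a surprise.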
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-rewrite-scorecard","level":2,"title":"The Rewrite Scorecard","text":"
| Metric | Before | After |
|---|---|---|
| Average skill body | ~15 lines | ~80 lines |
| Skills with quality gate | 0 | 20 |
| Skills with "When NOT" | 0 | 20 |
| Skills with examples | 3 | 20 |
| Skills with flag tables | 2 | 12 |
| Skills with checklist | 0 | 20 |
More lines, but almost entirely Expert content (per the E/A/R framework). No personality roleplay, no redundant guidance, no capability lists. Just project-specific knowledge the platform does not have.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-meta-lesson","level":2,"title":"The Meta-Lesson","text":"
The previous two posts argued that skills should provide knowledge, not personality; that they should complement the platform, not fight it; that they should grow from project history, not imported templates.
This post adds the missing piece: structure.
A skill without a structure is a wish.
A skill with quality gates, negative triggers, examples, and checklists is a tool: the difference is not the content; it is whether the agent can reliably execute it without human intervention.
Skills are Interfaces
Good skills are not instructions. They are contracts:
They specify preconditions, postconditions, and boundaries.
They show what success looks like and what failure looks like.
They trust the agent's intelligence but do not trust its assumptions.
If You Remember One Thing From This Post...
Skills that work have bones, not just flesh.
Quality gates, negative triggers, examples, and checklists are the skeleton. The domain knowledge is the muscle.
Without the skeleton, the muscle has nothing to attach to.
This post was written during the same session that rewrote all 22 skills. The skill-creator skill was updated to encode these patterns. The meta continues.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/","level":1,"title":"Not Everything Is a Skill","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism. References to /ctx-save, .context/sessions/, and session auto-save in this post reflect the architecture at the time of writing.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#what-a-codebase-audit-taught-me-about-restraint","level":2,"title":"What a Codebase Audit Taught Me About Restraint","text":"
Jose Alekhinne / 2026-02-08
When You Find a Useful Prompt, What Do You Do With It?
My instinct was to make it a skill.
I had just spent three posts explaining how to build skills that work. Naturally, the hammer wanted nails.
Then I looked at what I was holding and realized: this is not a nail.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-audit","level":2,"title":"The Audit","text":"
I wanted to understand how I use ctx:
Where the friction is;
What works, what drifts;
What I keep doing manually that could be automated.
So I wrote a prompt that spawned eight agents to analyze the codebase from different angles:
| Agent | Analysis |
|---|---|
| 1 | Extractable patterns from session history |
| 2 | Documentation drift (godoc, inline comments) |
| 3 | Maintainability (large functions, misplaced code) |
| 4 | Security review (CLI-specific surface) |
| 5 | Blog theme discovery |
| 6 | Roadmap and value opportunities |
| 7 | User-facing documentation gaps |
| 8 | Agent team strategies for future sessions |
The prompt was specific:
read-only agents,
structured output format,
concrete file references,
ranked recommendations.
It ran for about 20 minutes and produced eight Markdown reports.
The reports were good: Not perfect, but actionable.
What mattered was not the speed. It was that the work could be explored without committing to any single outcome.
They surfaced a stale doc.go referencing a subcommand that was never built.
They found 311 build-then-test sequences I could reduce to a single make check.
They identified that 42% of my sessions start with \"do you remember?\", which is a lot of repetition for something a skill could handle.
I had findings. I had recommendations. I had the instinct to automate.
And then... I stopped.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-question","level":2,"title":"The Question","text":"
The natural next step was to wrap the audit prompt as /ctx-audit: a skill you invoke periodically to get a health check. It fits the pattern:
It has a clear trigger.
It produces structured output.
But I had just spent a week writing about what makes skills work, and the criteria I established argued against it.
From The Anatomy of a Skill That Works:
\"A skill without boundaries is just a suggestion.\"
From You Can't Import Expertise:
\"Frameworks travel, expertise doesn't.\"
From Skills That Fight the Platform:
\"You are the guest, not the host.\"
The audit prompt fails all three tests:
| Criterion | Audit prompt | Good skill |
|---|---|---|
| Frequency | Quarterly, maybe | Daily or weekly |
| Stability | Tweaked every time | Consistent invocation |
| Scope | Bespoke, 8 parallel agents | Single focused action |
| Trigger | "I feel like auditing" | Clear, repeatable event |
Skills are contracts. Contracts need stable terms.
A prompt I will rewrite every time I use it is not a contract. It is a conversation starter.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#recipes-vs-skills","level":2,"title":"Recipes vs Skills","text":"
The distinction that emerged:
| | Skill | Recipe |
|---|---|---|
| Invocation | /slash-command | Copy-paste from a doc |
| Frequency | High (daily, weekly) | Low (quarterly, ad hoc) |
| Stability | Fixed contract | Adapted each time |
| Scope | One focused action | Multi-step orchestration |
| Audience | The agent | The human (who then prompts) |
| Lives in | .claude/skills/ | hack/ or docs/ |
| Attention cost | Loaded into context on match | Zero until needed |
Recipes can later graduate into skills, but only after repetition proves stability.
That last row matters. Skills consume the attention budget every time the platform considers activating them.
A skill that triggers quarterly but gets evaluated on every prompt is pure waste: attention spent on something that will say \"When NOT to Use: now\" 99% of the time.
Runbooks have zero attention cost. They sit in a Markdown file until a human decides to use them.
The human provides the judgment about timing.
The prompt provides the structure.
The Attention Budget Applies to Skills Too
Every skill in .claude/skills/ is a standing claim on the context window. The platform evaluates skill descriptions against every user prompt to decide whether to activate.
Twenty focused skills are fine. Thirty might be fine. But each one added reduces the headroom available for actual work.
Recipes are skills that opted out of the attention tax.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#what-the-audit-actually-produced","level":2,"title":"What the Audit Actually Produced","text":"
The audit was not wasted. It was a planning exercise that generated concrete tasks:
| Finding | Action |
|---|---|
| 42% of sessions start with memory check | Task: /ctx-remember skill (this one is a skill; it is daily) |
| Auto-save stubs are empty | Task: enhance /ctx-save with richer summaries |
| 311 raw build-test sequences | Task: make check target |
| Stale recall/doc.go lists nonexistent serve | Task: fix the doc.go |
| 120 commit sequences disconnected from context | Task: /ctx-commit workflow |
Some findings became skills;
Some became Makefile targets;
Some became one-line doc fixes.
The audit did not prescribe the artifact type: The findings did.
The audit is the input. Skills are one possible output. Not the only one.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-audit-prompt","level":2,"title":"The Audit Prompt","text":"
Here is the exact prompt I used, for those who are curious.
This is not a template: It worked because it was written against this codebase, at this moment, with specific goals in mind:
I want you to create an agent team to audit this codebase. Save each report as\na separate Markdown file under `./ideas/` (or another directory if you prefer).\n\nUse read-only agents (subagent_type: Explore) for all analyses. No code changes.\n\nFor each report, use this structure:\n- Executive Summary (2-3 sentences + severity table)\n- Findings (grouped, with file:line references)\n- Ranked Recommendations (high/medium/low priority)\n- Methodology (what was examined, how)\n\nKeep reports actionable. Every finding should suggest a concrete fix or next step.\n\n## Analyses to Run\n\n### 1. Extractable Patterns (*session mining*)\nSearch session JSONL files, journal entries, and task archives for repetitive\nmulti-step workflows. Count frequency of bash command sequences, slash command\nusage, and recurring user prompts. Identify patterns that could become skills\nor scripts. Cross-reference with existing skills to find coverage gaps.\nOutput: ranked list of automation opportunities with frequency data.\n\n### 2. Documentation Drift (*godoc + inline*)\nCompare every doc.go against its package's actual exports and behavior. Check\ninline godoc comments on exported functions against their implementations.\nScan for stale TODO/FIXME/HACK comments. Check that package-level comments match\npackage names.\nOutput: drift items ranked by severity with exact file:line references.\n\n### 3. Maintainability\nLook for:\n- functions longer than 80 lines with clear split points\n- switch blocks with more than 5 cases that could be table-driven\n- inline comments like \"step 1\", \"step 2\" that indicate a block wants to be a function\n- files longer than 400 lines\n- flat packages that could benefit from sub-packages\n- functions that appear misplaced in their file\n\nDo NOT flag things that are fine as-is just because they could theoretically\nbe different.\nOutput: concrete refactoring suggestions, not style nitpicks.\n\n### 4. Security Review\nThis is a CLI app. 
Focus on CLI-relevant attack surface, not web OWASP:\n- file path traversal\n- command injection\n- symlink following when writing to `.context/`\n- permission handling\n- sensitive data in outputs\n\nOutput: findings with severity ratings and plausible exploit scenarios.\n\n### 5. Blog Theme Discovery\nRead existing blog posts for style and narrative voice. Analyze git history,\nrecent session discussions, and `DECISIONS.md` for story arcs worth writing about.\nSuggest 3-5 blog post themes with:\n- title\n- angle\n- target audience\n- key commits or sessions to reference\n- a 2-sentence pitch\n\nPrioritize themes that build a coherent narrative across posts.\n\n### 6. Roadmap and Value Opportunities\nBased on current features, recent momentum, and gaps found in other analyses,\nidentify the highest-value improvements. Consider user-facing features,\ndeveloper experience, integration opportunities, and low-hanging fruit.\nOutput: prioritized list with rough effort and impact estimates.\n\n### 7. User-Facing Documentation\nEvaluate README, help text, and user docs. Suggest improvements structured as\nuse-case pages: the problem, how ctx solves it, a typical workflow, and gotchas.\nIdentify gaps where a user would get stuck without reading source code.\nOutput: documentation gaps with suggested page outlines.\n\n### 8. Agent Team Strategies\nBased on the codebase structure, suggest 2-3 agent team configurations for\nupcoming work sessions. For each, include:\n- team composition (roles and agent types)\n- task distribution strategy\n- coordination approach\n- the kinds of work it suits\n
Avoid Generic Advice
Suggestions that are not grounded in a project's actual structure, history, and workflows are worse than useless:
They create false confidence.
If an analysis cannot point to concrete files, commits, sessions, or patterns, it should say \"no finding\" instead of inventing best practices.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-deeper-pattern","level":2,"title":"The Deeper Pattern","text":"
This is part of a pattern I keep rediscovering:
The urge to automate is not the same as the need to automate:
The 3:1 ratio taught me that not every session should be a YOLO sprint.
The E/A/R framework taught me that not every template is worth importing. Now the audit is teaching me that not every useful prompt is worth institutionalizing.
The common thread is restraint:
Knowing when to stop.
Recognizing that the cost of automation is not just the effort to build it.
The cost is the ongoing attention tax of maintaining it, the context it consumes, and the false confidence it creates when it drifts.
An entry in hack/runbooks/codebase-audit.md is honest about what it is:
A prompt I wrote once, improved once, and will adapt again next time:
It does not pretend to be a reliable contract.
It does not claim attention budget.
It does not drift silently.
The Automation Instinct
When you find a useful prompt, the instinct is to institutionalize it. Resist.
Ask first: will I use this the same way next time?
If yes, it is a skill. If no, it is a recipe. If you are not sure, it is a recipe until proven otherwise.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#this-mindset-in-the-context-of-ctx","level":2,"title":"This Mindset In the Context of ctx","text":"
ctx is a tool that gives AI agents persistent memory. Its purpose is automation: reducing the friction of context loading, session recall, decision tracking.
But automation has boundaries, and knowing where those boundaries are is as important as pushing them forward.
The skills system is for high-frequency, stable workflows.
The recipes, the journal entries, the session dumps in .context/sessions/: those are for everything else.
Not everything needs to be a slash command. Some things are better as Markdown files you read when you need them.
The goal of ctx is not to automate everything: It is to automate the right things and to make the rest easy to find when you need it.
If You Remember One Thing From This Post...
The best automation decision is sometimes not to automate.
A runbook in a Markdown file costs nothing until you use it.
A skill costs attention on every prompt, whether it fires or not.
Automate the daily. Document the periodic. Forget the rest.
This post was written during the session that produced the codebase audit reports and distilled the prompt into hack/runbooks/codebase-audit.md. The audit generated seven tasks, one Makefile target, and zero new skills. The meta continues.
See also: Code Is Cheap. Judgment Is Not.: the capstone that threads this post's restraint argument into the broader case for why judgment, not production, is the bottleneck.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#when-markdown-is-not-a-security-boundary","level":2,"title":"When Markdown Is Not a Security Boundary","text":"
Jose Alekhinne / 2026-02-09
What Happens When Your AI Agent Runs Overnight and Nobody Is Watching?
It follows instructions: That is the problem.
Not because it is malicious. Because it is controllable.
It follows instructions from context, and context can be poisoned.
I was writing the autonomous loops recipe for ctx: the guide for running an AI agent in a loop overnight, unattended, working through tasks while you sleep. The original draft had a tip at the bottom:
Use CONSTITUTION.md for guardrails. Tell the agent \"never delete tests\" and it usually won't.
Then I read that sentence back and realized: that is wishful thinking.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-realization","level":2,"title":"The Realization","text":"
CONSTITUTION.md is a Markdown file. The agent reads it at session start alongside everything else in .context/. It is one source of instructions in a context window that also contains system prompts, project files, conversation history, tool outputs, and whatever the agent fetched from the internet.
An attacker who can inject content into any of those sources can redirect the agent's behavior. And \"attacker\" does not always mean a person with malicious intent. It can be:
| Vector | Example |
| --- | --- |
| A dependency | A malicious npm package with instructions in its README or error output |
| A URL | A documentation page with embedded adversarial instructions |
| A project file | A contributor who adds instructions to CLAUDE.md or .cursorrules |
| The agent itself | In an autonomous loop, the agent modifies its own config between iterations |
| A command output | An error message containing instructions the agent interprets and follows |
That last vector is the one that kept me up at night (literally!):
In an autonomous loop, the agent modifies files as part of its job.
If it modifies its own configuration files, the next iteration loads the modified config.
No human reviews it.
No diff is shown.
The agent that starts iteration N+1 is running with rules written by iteration N.
The agent can rewrite its own guardrails.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#five-layers-each-with-a-hole","level":2,"title":"Five Layers, Each With a Hole","text":"
That's five nested layers of Swiss cheese. Alone, each of them has large holes. Together, they create a boundary.
What followed was a week of peeling back assumptions:
Every defense I examined had a bypass, and the bypass was always the same shape: the defense was enforced at a level the agent could reach.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
CONSTITUTION.md, the Agent Playbook, system prompts: These tell the agent what to do.
The agent usually follows them.
\"Usually\" is the keyword here.
The hole: Prompt injection:
A sufficiently crafted payload overrides soft instructions.
Long context windows dilute attention on rules stated early.
Edge cases where instructions are ambiguous get resolved in unpredictable ways.
Verdict: Necessary. Not sufficient. Good for the common case. Never trust it as a security boundary.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
Permission allowlists in .claude/settings.local.json:
If rm, curl, sudo, or docker are not in the allowlist, the agent cannot invoke them. This is deterministic.
The application enforces it regardless of what any prompt says.
The hole: The agent can modify the allowlist itself:
It has Write permission.
The allowlist lives in a file.
The agent writes to the file.
The next iteration loads the modified allowlist.
The application enforces the rules, but the application reads the rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-3-os-level-isolation-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Unbypassable)","text":"
This is where the defenses stop having holes in the same shape.
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
| Control | What it stops |
| --- | --- |
| Dedicated unprivileged user | Privilege escalation, sudo, group-based access |
| Filesystem permissions | Lateral movement to other projects, system config |
| Immutable config files | Self-modification of guardrails between iterations |
Make the agent's instruction files read-only: CLAUDE.md, .claude/settings.local.json, .context/CONSTITUTION.md. Own them as a different user, or mark them immutable with chattr +i on Linux.
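A minimal sketch of that lockdown, demonstrated on a scratch copy (the paths and the chmod/chattr split are illustrative; on a real host the targets would be CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md):

```shell
# Scratch file standing in for the real instruction files.
work="$(mktemp -d)"
printf 'never delete tests\n' > "$work/CONSTITUTION.md"

# Strip all write bits. This holds only if a DIFFERENT user owns the file,
# because an owner (or root) can always chmod it back.
chmod 444 "$work/CONSTITUTION.md"

# Stronger, Linux-only, root-only: even root must lift the flag first.
#   chattr +i "$work/CONSTITUTION.md"

ls -l "$work/CONSTITUTION.md"
```

The ownership detail is the point: read-only bits on a file the agent's user owns are a suggestion, not a boundary.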
The hole: Actions within the agent's legitimate scope:
If the agent has write access to source code (which it needs), it can introduce vulnerabilities in the code itself.
You cannot prevent this without removing the agent's ability to do its job.
Verdict: Essential. This is the layer that makes Layers 1 and 2 trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data.
It also cannot ingest new instructions mid-loop from external documents, error pages, or hostile content.
```shell
# Container with no network
docker run --network=none ...

# Or firewall rules allowing only package registries
iptables -A OUTPUT -d registry.npmjs.org -j ACCEPT
iptables -A OUTPUT -d proxy.golang.org -j ACCEPT
iptables -A OUTPUT -j DROP
```
If the agent genuinely does not need the network, disable it entirely.
If it needs to fetch dependencies, allow specific registries and block everything else.
The hole: None, if the agent does not need the network.
The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
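One hedged way to pre-warm those caches before cutting the network, shown for a Go project (npm, pip, and cargo have analogues; the docker invocation is a placeholder):

```shell
# Run these while the machine is still online, inside the module root:
go mod download     # warm the local module cache
go mod vendor       # or copy dependencies into the repo itself

# ...then start the loop fully airgapped, forcing vendored builds:
#   docker run --network=none -e GOFLAGS=-mod=vendor ...
```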
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine.
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
An agent with socket access can spawn sibling containers with full host access, effectively escaping the sandbox.
This is not theoretical: the Docker socket grants root-equivalent access to the host.
Use rootless Docker or Podman to eliminate this escalation path entirely.
Virtual machines are even stronger: The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-pattern","level":2,"title":"The Pattern","text":"
Each layer is straightforward: The strength is in the combination:
| Layer | Implementation | What it stops |
| --- | --- | --- |
| Soft instructions | CONSTITUTION.md | Common mistakes (probabilistic) |
| Application allowlist | .claude/settings.local.json | Unauthorized commands (deterministic within runtime) |
| Immutable config | chattr +i on config files | Self-modification between iterations |
| Unprivileged user | Dedicated user, no sudo | Privilege escalation |
| Container | --cap-drop=ALL --network=none | Host escape, data exfiltration |
| Resource limits | --memory=4g --cpus=2 | Resource exhaustion |
No layer is redundant. Each one catches what the others miss:
The soft instructions handle the 99% case: \"don't delete tests.\"
The allowlist prevents the agent from running commands it should not.
The immutable config prevents the agent from modifying the allowlist.
The unprivileged user prevents the agent from removing the immutable flag.
The container prevents the agent from reaching anything outside its workspace.
The resource limits prevent the agent from consuming all system resources.
Remove any one layer and there is an attack path through the remaining ones.
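Put together, the outer layers look something like this (the image name, uid, and limits are placeholders, not ctx defaults; adapt them to your project):

```shell
# Unprivileged uid, no capabilities, no network, capped resources,
# read-only root filesystem, and only the project directory mounted in.
docker run --rm \
  --user 10001:10001 \
  --cap-drop=ALL \
  --network=none \
  --memory=4g \
  --cpus=2 \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD:/work" \
  -w /work \
  my-agent-image
```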
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#common-mistakes-i-see","level":2,"title":"Common Mistakes I See","text":"
These are real patterns, not hypotheticals:
"I'll just use --dangerously-skip-permissions." This disables Layer 2 entirely. Without Layers 3 through 5, you have no protection at all. The flag means what it says. If you think you need it, think thrice: you probably don't. And if you truly must use it, use it only inside a properly isolated VM (not even a container: a VM).
\"The agent is sandboxed in Docker.\" A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"I reviewed CLAUDE.md, it's fine.\" You reviewed it before the loop started. The agent modified it during iteration 3. Iteration 4 loaded the modified version. Unless the file is immutable, your review is futile.
\"The agent only has access to this one project.\" Does the project directory contain .env files? SSH keys? API tokens? A .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-connection-to-context-engineering","level":2,"title":"The Connection to Context Engineering","text":"
This is the same lesson I keep rediscovering, wearing different clothes.
In The Attention Budget, I wrote about how every token competes for the AI's focus. Security instructions in CONSTITUTION.md are subject to the same budget pressure: if the context window is full of code, error messages, and tool outputs, the security rules stated at the top get diluted.
In Skills That Fight the Platform, I wrote about how custom instructions can conflict with the AI's built-in behavior. Security rules have the same problem: telling an agent "never run curl" in Markdown while giving it unrestricted shell access creates a contradiction, and the agent resolves contradictions unpredictably. It will often pick the path of least resistance toward its objective. And, trust me, agents can get far more creative than the best red-teamer you know.
In You Can't Import Expertise, I wrote about how generic templates fail because they do not encode project-specific knowledge. Generic security advice fails the same way: \"Don't exfiltrate data\" is a category; blocking outbound network access is a control.
The pattern across all of these: Soft instructions are useful for the common case. Hard boundaries are required for security.
Know which is which.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-checklist","level":2,"title":"The Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
This checklist lives in the Agent Security reference alongside the full threat model and detailed guidance for each layer.
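The last few checklist items can be sketched as a pre-flight script (the file patterns below are common conventions, not an exhaustive audit):

```shell
# Scan a workspace for obvious credential leftovers before handing it
# to an unattended agent. Returns non-zero if anything suspicious is found.
preflight() {
  dir="$1"
  bad=0
  for pattern in '.env' 'id_rsa' 'id_ed25519' '*.pem'; do
    if find "$dir" -name "$pattern" 2>/dev/null | grep -q .; then
      echo "FAIL: found $pattern under $dir"
      bad=1
    fi
  done
  # The Docker socket should never be reachable from the agent's context.
  [ -S /var/run/docker.sock ] && echo "WARN: docker socket present on this host"
  [ "$bad" -eq 0 ] && echo "OK: no obvious secrets under $dir"
  return "$bad"
}

preflight "$(mktemp -d)"
```

A passing script is not a safe setup; it only catches the mistakes that are cheap to catch.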
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#what-changed-in-ctx","level":2,"title":"What Changed in ctx","text":"
The autonomous loops recipe now has a full permissions and isolation section instead of a one-line tip about CONSTITUTION.md. It covers both the explicit allowlist approach and the --dangerously-skip-permissions flag, with honest guidance about when each is appropriate.
It also has an OS-level isolation table that is not optional: unprivileged users, filesystem permissions, containers, VMs, network controls, resource limits, and self-modification prevention.
The Agent Security page consolidates the threat model and defense layers into a standalone reference.
These are not theoretical improvements. They are the minimum responsible guidance for a tool that helps people run AI agents overnight.
If You Remember One Thing From This Post...
Markdown is not a security boundary.
CONSTITUTION.md is a nudge. An allowlist is a gate.
An unprivileged user in a network-isolated container is a wall.
Use all three. Trust only the wall.
This post was written during the session that added permissions, isolation, and self-modification prevention to the autonomous loops recipe. The security guidance started as a single tip and grew into two documents. The meta continues.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/","level":1,"title":"How Deep Is Too Deep?","text":"","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#when-master-ml-is-the-wrong-next-step","level":2,"title":"When \"Master ML\" Is the Wrong Next Step","text":"
Jose Alekhinne / 2026-02-12
Have You Ever Felt Like You Should Understand More of the Stack Beneath You?
You can talk about transformers at a whiteboard.
You can explain attention to a colleague.
You can use agentic AI to ship real software.
But somewhere in the back of your mind, there is a voice:
\"Maybe I should go deeper. Maybe I need to master machine learning.\"
I had that voice for months.
Then I spent a week debugging an agent failure that had nothing to do with ML theory and everything to do with knowing which abstraction was leaking.
This post is about when depth compounds and (more importantly) when it does not.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-hierarchy-nobody-questions","level":2,"title":"The Hierarchy Nobody Questions","text":"
There is an implicit stack most people carry around when thinking about AI:
| Layer | What Lives Here |
| --- | --- |
| Agentic AI | Autonomous loops, tool use, multi-step reasoning |
| Generative AI | Text, image, code generation |
| Deep Learning | Transformer architectures, training at scale |
| Neural Networks | Backpropagation, gradient descent |
| Machine Learning | Statistical learning, optimization |
| Classical AI | Search, planning, symbolic reasoning |
At some point down that stack, you hit a comfortable plateau: the layer where you can hold a conversation but not debug a failure.
The instinctive response is to go deeper.
But that instinct hides a more important question:
\"Does depth still compound when the abstractions above you are moving hyper-exponentially?\"
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-honest-observation","level":2,"title":"The Honest Observation","text":"
If you squint hard enough, a large chunk of modern ML intuition collapses into older fields:
| ML Concept | Older Field |
| --- | --- |
| Gradient descent | Numerical optimization |
| Backpropagation | Reverse-mode autodiff |
| Loss landscapes | Non-convex optimization |
| Generalization | Statistics |
| Scaling laws | Asymptotics and information theory |
Nothing here is uniquely \"AI\".
Most of this math predates the term deep learning. In some cases, by decades.
So what changed?
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#same-tools-different-regime","level":2,"title":"Same Tools, Different Regime","text":"
The mistake is assuming this is a new theory problem: It is not.
It is a new operating regime.
Classical numerical methods were developed under assumptions like:
Manageable dimensionality
Reasonably well-conditioned objectives
Losses that actually represent the goal
Modern ML violates all three: On purpose.
Today's models operate with millions to trillions of parameters, wildly underdetermined systems, and objective functions we know are wrong but optimize anyway.
It is complete and utter madness!
At this scale, familiar concepts warp:
What we call \"local minima\" are overwhelmingly saddle points in high-dimensional spaces.
Noise stops being noise and starts becoming structure.
Overfitting can coexist with generalization.
Bigger models outperform \"better\" ones.
The math did not change: The phase did.
This is less numerical analysis and more statistical physics: Same equations, but behavior dominated by phase transitions and emergent structure.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#why-scaling-laws-feel-alien","level":2,"title":"Why Scaling Laws Feel Alien","text":"
In classical statistics, asymptotics describe what happens eventually.
In modern ML, scaling laws describe where you can operate today.
They do not say \"given enough time, things converge\".
They say \"cross this threshold and behavior qualitatively changes\".
This is why dumb architectures plus scale beat clever ones.
Why small theoretical gains disappear under data.
Why \"just make it bigger\", ironically, keeps working longer than it should.
That is not a triumph of ML theory: It is a property of high-dimensional systems under loose objectives.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#where-depth-actually-pays-off","level":2,"title":"Where Depth Actually Pays Off","text":"
This reframes the original question.
You do not need depth because this is \"AI\".
You need depth where failure modes propagate upward.
I learned this building ctx: The agent failures I have spent the most time debugging were never about the model's architecture.
They were about:
Misplaced trust: The model was confident. The output was wrong. Knowing when confidence and correctness diverge is not something you learn from a textbook. You learn it from watching patterns across hundreds of sessions.
Distribution shift: The model performed well on common patterns and fell apart on edge cases specific to this project. Recognizing that shift before it compounds requires understanding why generalization has limits, not just that it does.
Error accumulation: In a single prompt, model quirks are tolerable. In autonomous loops running overnight, they compound. A small bias in how the model interprets instructions becomes a large drift by iteration 20.
Scale hiding errors: The model's raw capability masked problems that only surfaced under specific conditions. More parameters did not fix the issue. They just made the failure mode rarer and harder to reproduce.
This is the kind of depth that compounds: not deriving backprop, but understanding when correct math produces misleading intuition.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-connection-to-context-engineering","level":2,"title":"The Connection to Context Engineering","text":"
This is the same pattern I keep finding at different altitudes.
In \"The Attention Budget\", I wrote about how dumping everything into the context window degrades the model's focus. The fix was not a better model: It was better curation: load less, load the right things, preserve signal per token.
In \"Skills That Fight the Platform\", I wrote about how custom instructions can conflict with the model's built-in behavior. The fix was not deeper ML knowledge: It was an understanding that the model already has judgment and that you should extend it, not override it.
In \"You Can't Import Expertise\", I wrote about how generic templates fail because they do not encode project-specific knowledge. A consolidation skill with eight Rust-based analysis dimensions was mostly noise for a Go project. The fix was not a better template: It was growing expertise from this project's own history.
In every case, the answer was not \"go deeper into ML\".
The answer was knowing which abstraction was leaking and fixing it at the right layer.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#agentic-systems-are-not-an-ml-problem","level":2,"title":"Agentic Systems Are Not an ML Problem","text":"
The mistake is assuming agent failures originate where the model was trained, rather than where it is deployed.
Agentic AI is a systems problem under chaotic uncertainty:
Feedback loops between the agent and its environment;
Error accumulation across iterations;
Brittle representations that break outside training distribution;
Misplaced trust in outputs that look correct.
In short-lived interactions, model quirks are tolerable. In long-running autonomous loops, however, they compound.
That is where shallow understanding becomes expensive.
But the understanding you need is not about optimizer internals.
It is about:
| What Matters | What Does Not (for Most Practitioners) |
| --- | --- |
| Why gradient descent fails in specific regimes | How to derive it from scratch |
| When memorization masquerades as reasoning | The formal definition of VC dimension |
| Recognizing distribution shift before it compounds | Hand-tuning learning rate schedules |
| Predicting when scale hides errors instead of fixing them | Chasing theoretical purity divorced from practice |
The depth that matters is diagnostic, not theoretical.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-real-answer","level":2,"title":"The Real Answer","text":"
Not turtles all the way down.
Go deep enough to:
Diagnose failures instead of cargo-culting fixes;
Reason about uncertainty instead of trusting confidence;
Design guardrails that align with model behavior, not hope.
Stop before:
Hand-deriving gradients for the sake of it;
Obsessing over optimizer internals you will never touch;
Chasing theoretical purity divorced from the scale you actually operate at.
This is not about mastering ML.
It is about knowing which abstractions you can safely trust and which ones leak.
Hint: Any useful abstraction almost certainly leaks.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#a-practical-litmus-test","level":2,"title":"A Practical Litmus Test","text":"
If a failure occurs and your instinct is to:
Add more prompt text: abstraction leak above
Add retries or heuristics: error accumulation
Change the model: scale masking
Reach for ML theory: you are probably (but not always) going too deep
The right depth is the shallowest layer where the failure becomes predictable.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-ctx-lesson","level":2,"title":"The ctx Lesson","text":"
Every design decision in ctx is downstream of this principle.
The attention budget exists because the model's internal attention mechanism has real limits: You do not need to understand the math of softmax to build around it. But you do need to understand that more context is not always better and that attention density degrades with scale.
The skill system exists because the model's built-in behavior is already good: You do not need to understand RLHF to build effective skills. But you do need to understand that the model already has judgment and your skills should teach it things it does not know, not override how it thinks.
Defense in depth exists because soft instructions are probabilistic: You do not need to understand the transformer architecture to know that a Markdown file is not a security boundary. But you do need to understand that the model follows instructions from context, and context can be poisoned.
In each case, the useful depth was one or two layers below the abstraction I was working at: Not at the bottom of the stack.
The boundary between useful understanding and academic exercise is where your failure modes live.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#closing-thought","level":2,"title":"Closing Thought","text":"
Most modern AI systems do not fail because the math is wrong.
They fail because we apply correct math in the wrong regime, then build autonomous systems on top of it.
Understanding that boundary, not crossing it blindly, is where depth still compounds.
And that is a far more useful form of expertise than memorizing another loss function.
If You Remember One Thing From This Post...
Go deep enough to diagnose your failures. Stop before you are solving problems that do not propagate to your layer.
The abstractions below you are not sacred. But neither are they irrelevant.
The useful depth is wherever your failure modes live. Usually one or two layers down, not at the bottom.
This post started as a note about whether I should take an ML course. The answer turned out to be \"no, but understand why not\". The meta continues.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/","level":1,"title":"Before Context Windows, We Had Bouncers","text":"","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#the-reset-problem","level":2,"title":"The Reset Problem","text":"
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#stateless-protocol-stateful-life","level":2,"title":"Stateless Protocol, Stateful Life","text":"
IRC is minimal:
A TCP connection.
A nickname.
A channel.
A stream of lines.
When the connection drops, you literally disappear from the graph.
The protocol is stateless; human systems are not.
So you:
Reconnect;
Ask what you missed;
Scroll;
Reconstruct.
The machine forgets; you pay.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#the-bouncer-pattern","level":2,"title":"The Bouncer Pattern","text":"
A bouncer is a daemon that remains connected when you do not:
It holds your seat;
It buffers what you missed;
It keeps your identity online.
ZNC is one such bouncer.
With ZNC:
Your client does not connect to IRC;
It connects to ZNC;
ZNC connects upstream.
Client sessions become ephemeral.
Presence becomes infrastructural.
ZNC is tmux for IRC
Close your laptop.
ZNC remains.
Switch devices.
ZNC persists.
This is not convenience; this is continuity.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#presence-without-flapping","level":2,"title":"Presence Without Flapping","text":"
With a bouncer:
Closing your client does not emit PART.
Reopening does not emit JOIN.
You do not flap in and out of existence.
From the channel's perspective, you remain.
From your perspective, history accumulates.
Buffers persist;
Identity persists;
Context persists.
This pattern predates AI.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#before-llm-context-windows","level":2,"title":"Before LLM Context Windows","text":"
An LLM session without memory is IRC without a bouncer:
Close the window.
Start over.
Re-explain intent.
Rehydrate context.
That is friction.
This Walks and Talks like ctx
Context engineering moves memory out of sessions and into infrastructure.
ZNC does this for IRC.
ctx does this for agents.
Same principle:
Volatile interface.
Persistent substrate.
Different fabric.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#minimal-architecture","level":2,"title":"Minimal Architecture","text":"
My setup is intentionally boring:
A small $5 VPS.
ZNC installed.
TLS enabled.
Firewall restricted.
Then:
ZNC connects to Libera.Chat.
SASL authentication lives inside ZNC.
Buffers are stored on disk.
My client connects to my VPS, not the network.
The commands do not matter: The boundaries do:
Authentication in infrastructure, not in the client;
Memory server-side, not in scrollback;
Presence decoupled from activity.
Everything else is configuration.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#platform-memory","level":2,"title":"Platform Memory","text":"
Yes, I know, it is 2026:
Discord stores history;
Slack stores history;
The dumpster fire on gasoline called X, too, stores history.
HOWEVER, they own your substrate.
Running a bouncer is quiet sovereignty:
Logs are mine.
Presence is continuous.
State does not reset because I closed a tab.
Small acts compound.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#signal-density","level":2,"title":"Signal Density","text":"
Primitive systems select for builders.
Consistent presence in small rooms compounds reputation.
Quiet compounding outperforms viral spikes.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#infrastructure-as-cognition","level":2,"title":"Infrastructure as Cognition","text":"
ZNC is not interesting because it is retro; it is interesting because it models a principle:
Stateless protocols require stateful wrappers;
Volatile interfaces require durable memory;
Human systems require continuity.
Distilled:
Humans require context.
Before context windows, we had bouncers.
Before AI memory files, we had buffers.
Continuity is not a feature; it is a design decision.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#build-it","level":2,"title":"Build It","text":"
If you want the actual setup (VPS, ZNC, TLS, SASL, firewall...) there is a step-by-step runbook:
Persistent IRC Presence with ZNC.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#motd","level":2,"title":"MOTD","text":"
When my client connects to my bouncer, it prints:
```
// / ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
// \ Copyright 2026-present Context contributors.
// SPDX-License-Identifier: Apache-2.0
```
See also: Context as Infrastructure -- the post that takes this observation to its conclusion: stateless protocols need stateful wrappers, and AI sessions need persistent filesystems.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/","level":1,"title":"Parallel Agents with Git Worktrees","text":"","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-backlog-problem","level":2,"title":"The Backlog Problem","text":"
Jose Alekhinne / 2026-02-14
What Do You Do With 30 Open Tasks?
You could work through them one at a time.
One agent, one branch, one commit stream.
Or you could ask: which of these don't touch each other?
I had 30 open tasks in TASKS.md. Some were docs. Some were a new encryption package. Some were test coverage for a stable module. Some were blog posts.
They had almost zero file overlap.
Running one agent at a time meant serial execution on work that was fundamentally parallel:
I was bottlenecking on me, not on the machine.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-insight-file-overlap-is-the-constraint","level":2,"title":"The Insight: File Overlap Is the Constraint","text":"
This is not a scheduling problem: It's a conflict avoidance problem.
Two agents can work simultaneously on the same codebase if and only if they don't touch the same files. The moment they do, you get merge conflicts: And merge conflicts on AI-generated code are expensive because the human has to arbitrate choices they didn't make.
So the question becomes:
\"Can you partition your backlog into non-overlapping tracks?\"
For ctx, the answer was obvious:
| Track | Touches | Tasks |
|---|---|---|
| work/docs | docs/, hack/ | Blog posts, recipes, runbooks |
| work/pad | internal/cli/pad/, specs | Scratchpad encryption, CLI, tests |
| work/tests | internal/cli/recall/ | Recall test coverage |
Three tracks. Near-zero overlap. Three agents.
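The "near-zero overlap" claim can be checked mechanically before launching anything. Here is a sketch of that check on a throwaway repository (branch and file names are illustrative, not ctx's real ones): list the files each track touches relative to main, and intersect the lists. An empty intersection means the tracks are safe to run in parallel.

```shell
set -e
# Throwaway repo with two candidate tracks (illustrative names).
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email agent@example.com
git config user.name agent
git commit -q --allow-empty -m init
git checkout -q -b work/docs
echo x > recipe.md
git add . && git commit -qm docs
git checkout -q main
git checkout -q -b work/pad
echo y > pad.go
git add . && git commit -qm pad
git checkout -q main

# Files touched by each track relative to main; uniq -d keeps only
# names present in both lists. Empty output => parallel-safe.
a=$(git diff --name-only main...work/docs | sort)
b=$(git diff --name-only main...work/pad | sort)
overlap=$(printf '%s\n%s\n' "$a" "$b" | sort | uniq -d)
[ -z "$overlap" ] && echo "no overlap: parallel-safe"
```

The same check, run pairwise across proposed tracks, is a cheap way to verify an agent's suggested grouping before trusting it.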
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#git-worktrees-the-mechanism","level":2,"title":"Git Worktrees: The Mechanism","text":"
git has a feature that most people don't use: worktrees.
A worktree is a second (or third, or fourth) working directory that shares the same .git object database as your main checkout.
Each worktree has its own branch, its own index, its own working tree. But they all share history, refs, and objects.
This is cheaper than three clones. And because they share objects, git merge afterwards is fast: It's a local operation on shared data.
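The mechanism is visible in a few commands. A sketch on a throwaway repository (paths are illustrative): the worktree's `.git` is a one-line pointer file back to the shared repository, not a second object database.

```shell
set -e
# Throwaway demo: one repository, two working directories, shared objects.
base=$(mktemp -d)/ctx
mkdir -p "$base"
git -C "$base" init -q -b main
git -C "$base" config user.email agent@example.com
git -C "$base" config user.name agent
git -C "$base" commit -q --allow-empty -m init

# A sibling directory, its own branch, the same object database.
git -C "$base" worktree add -b work/docs "$base-docs" main

git -C "$base" worktree list   # both checkouts listed against one repo
cat "$base-docs/.git"          # a one-line pointer, not a second database
```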
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-setup","level":2,"title":"The Setup","text":"
The workflow I landed on:
1. Group tasks by blast radius.
Read TASKS.md. For each pending task, estimate which files and directories it touches. Group tasks that share files into the same track. Tasks with no overlap go into separate tracks.
This is the part that requires human judgment:
An agent can propose groupings, but you need to verify that the boundaries are real. A task that says \"update docs\" but actually touches Go code will poison a docs track.
2. Create worktrees as sibling directories.
Not subdirectories: Siblings.
If your main checkout is at ~/WORKSPACE/ctx, worktrees go at ~/WORKSPACE/ctx-docs, ~/WORKSPACE/ctx-pad, etc.
Why siblings? Because some tools (and some agents) walk up the directory tree looking for .git. A worktree inside the main checkout confuses them.
3. Launch one agent per worktree.
Each agent gets a full working copy with .context/ intact. It reads the same TASKS.md, the same DECISIONS.md, the same CONVENTIONS.md. It knows the full project state. It just works on a different slice.
4. Do NOT run ctx init in worktrees.
This is the gotcha. The .context/ directory is tracked in git. Running ctx init in a worktree would overwrite shared context files: Wiping decisions, learnings, and tasks that belong to the whole project.
The worktree already has everything it needs. Leave it alone.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#what-actually-happened","level":2,"title":"What Actually Happened","text":"
I ran three agents for about 40 minutes. Here is roughly what each track produced:
work/docs: Parallel worktrees recipe, blog post edits, recipe index reorganization, IRC recipe moved from docs/ to hack/.
work/pad: ctx pad show subcommand, --append and --prepend flags on ctx pad edit, spec updates, 28 new test functions.
work/tests: Recall test coverage, edge case tests.
Merging took about five minutes. Two of the three merges were clean.
The third had a conflict in TASKS.md:
both the docs track and the pad track had marked different tasks as [x].
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-tasksmd-conflict","level":2,"title":"The TASKS.md Conflict","text":"
This deserves its own section because it will happen every time.
When two agents work in parallel, they both read TASKS.md at the start and mark tasks complete as they go. When you merge, git sees two branches that modified the same file differently.
The resolution is always the same: accept all completions from both sides. No task should go from [x] back to [ ]. The merge is additive.
This is one of those conflicts that sounds scary but is trivially mechanical: You are not arbitrating design decisions; you are combining two checklists.
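Because the resolution is mechanical, it can even be automated. One option (my suggestion, not something the post prescribes) is git's built-in `union` merge driver, declared in `.gitattributes`, which resolves conflicting hunks by keeping lines from both sides. A throwaway-repo sketch with illustrative task names:

```shell
set -e
# Sketch: the "union" merge driver keeps both sides' lines, which matches
# the additive TASKS.md resolution. Throwaway repo; task names illustrative.
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email agent@example.com
git config user.name agent
echo '# Tasks' > TASKS.md
echo 'TASKS.md merge=union' > .gitattributes
git add . && git commit -qm init

git checkout -q -b work/docs
echo '- [x] write recipe' >> TASKS.md
git commit -qam docs
git checkout -q main
git checkout -q -b work/pad
echo '- [x] pad show subcommand' >> TASKS.md
git commit -qam pad

git checkout -q main
git merge -q --no-edit work/docs   # fast-forward
git merge -q --no-edit work/pad    # union driver keeps both [x] lines
cat TASKS.md
```

One caveat: when both sides edit the exact same line, the union driver keeps both versions, which can duplicate a task entry. Eyeball the merged file before committing further work on top of it.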
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#limits","level":2,"title":"Limits","text":"
3-4 worktrees, maximum.
I tried four once: By the time I merged the third track, the fourth had drifted far enough that its changes needed rebasing.
The merge complexity grows faster than the parallelism benefit.
Three is the sweet spot:
Two is conservative but safe;
Four is possible if the tracks are truly independent;
Anything more than four, you are in the danger zone.
Group by directory, not by priority.
It is tempting to put all the high-priority tasks in one track: Don't.
Two high-priority tasks that touch the same files must be in the same track, regardless of urgency. The constraint is file overlap, not importance.
Commit frequently.
Smaller commits make merge conflicts easier to resolve. An agent that writes 500 lines in a single commit is harder to merge than one that commits every logical step.
Name tracks by concern.
work/docs and work/pad tell you what's happening;
work/track-1 and work/track-2 tell you nothing.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-pattern","level":2,"title":"The Pattern","text":"
This is the same pattern that shows up everywhere in ctx:
The attention budget taught me that you can't dump everything into one context window. You have to partition, prioritize, and load selectively.
Worktrees are the same principle applied to execution: You can't dump every task into one agent's workstream. You have to partition by blast radius, assign selectively, and merge deliberately.
The codebase audit that generated these 30 tasks used eight parallel agents for analysis. Worktrees let me use parallel agents for implementation. Same coordination pattern, different artifact.
And the IRC bouncer post from earlier today argued that stateless protocols need stateful wrappers. Worktrees are the same: git branches are stateless forks; .context/ is the stateful wrapper that gives each agent the project's full memory.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#should-this-be-a-skill","level":2,"title":"Should This Be a Skill?","text":"
I asked myself the same question I asked about the codebase audit: should this be a /ctx-worktree skill?
This time the answer was a resounding \"yes\":
Unlike the audit prompt (which I tweak every time and run every other week), the worktree workflow is:
| Criterion | Worktree workflow | Codebase audit |
|---|---|---|
| Frequency | Weekly | Quarterly |
| Stability | Same steps every time | Tweaked every time |
| Scope | Mechanical, bounded | Bespoke, 8 agents |
| Trigger | Large backlog | \"I feel like auditing\" |
The commands are mechanical: git worktree add, git worktree remove, branch naming, safety checks. This is exactly what skills are for: stable contracts for repetitive operations.
Ergo, /ctx-worktree exists.
It enforces the 4-worktree limit, creates sibling directories, uses work/ branch prefixes, and reminds you not to run ctx init in worktrees.
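The merge-and-cleanup half of that contract is only a few commands. A sketch on a throwaway repository (paths and branch names are illustrative, and this is my reconstruction of the workflow, not the skill's exact implementation):

```shell
set -e
# Throwaway main checkout standing in for ~/WORKSPACE/ctx.
base=$(mktemp -d)/ctx
mkdir -p "$base" && cd "$base"
git init -q -b main
git config user.email agent@example.com
git config user.name agent
git commit -q --allow-empty -m init

git worktree add -b work/docs ../ctx-docs main   # create the track (sibling dir)
echo recipe > ../ctx-docs/recipe.md
git -C ../ctx-docs add recipe.md
git -C ../ctx-docs commit -qm recipe

git merge -q work/docs            # integrate the track (fast-forward here)
git worktree remove ../ctx-docs   # delete the sibling checkout
git branch -qd work/docs          # fully merged, so -d is safe
git worktree prune                # clear any stale bookkeeping
```

`git branch -d` (not `-D`) is the safety check: it refuses to delete a branch whose work has not been merged, so an incompletely integrated track cannot be silently discarded.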
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-takeaway","level":2,"title":"The Takeaway","text":"
Serial execution is the default. But serial is not always necessary.
If your backlog partitions cleanly by file overlap, you can multiply your throughput with nothing more exotic than git worktree and a second terminal window.
The hard part is not the git commands; it is the discipline:
Grouping by blast radius instead of priority;
Accepting that TASKS.md will conflict;
And knowing when three tracks is enough.
If You Remember One Thing From This Post...
Partition by blast radius, not by priority.
Two tasks that touch the same files belong in the same track, no matter how important the other one is.
The constraint is file overlap. Everything else is scheduling.
The practical setup (skill invocation, worktree creation, merge workflow, and cleanup) lives in the recipe: Parallel Agent Development with Git Worktrees.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/","level":1,"title":"ctx v0.3.0: The Discipline Release","text":"","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#when-the-ratio-of-polish-to-features-is-31-you-know-something-changed","level":2,"title":"When the Ratio of Polish to Features Is 3:1, You Know Something Changed","text":"
Jose Alekhinne / February 15, 2026
What Does a Release Look Like When Most of the Work Is Invisible?
No new headline feature. No architectural pivot. No rewrite.
Just 35+ documentation and quality commits against ~15 feature commits... and somehow, the tool feels like it grew up overnight.
Six days separate v0.2.0 from v0.3.0.
Measured by calendar time, it is nothing. Measured by what changed in how the project operates, it is the most significant release yet.
v0.1.0 was the prototype;
v0.2.0 was the archaeology release: making the past accessible;
v0.3.0 is the discipline release: the one that turned best practices into enforcement, suggestions into structure, and a collection of commands into a system of skills.
The Release Window
February 1‒February 7, 2026
From the v0.2.0 tag to commit 2227f99.
78 files changed in the migration commit alone.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-migration-commands-to-skills","level":2,"title":"The Migration: Commands to Skills","text":"
The largest single change was the migration from .claude/commands/*.md to .claude/skills/*/SKILL.md.
This was not a rename: It was a rethinking of how AI agents discover and execute project-specific workflows.
| Aspect | Commands (before) | Skills (after) |
|---|---|---|
| Structure | Flat files in one directory | Directory-per-skill with SKILL.md |
| Description | Optional, often vague | Required, doubles as activation trigger |
| Quality gates | None | \"Before X-ing\" pre-flight checklist |
| Negative triggers | None | \"When NOT to Use\" in every skill |
| Examples | Rare | Good/bad pairs in every skill |
| Average length | ~15 lines | ~80 lines |
The description field became the single most important line in each skill. In the old system, descriptions were titles. In the new system, they are activation conditions: The text the platform reads to decide whether to surface a skill for a given prompt.
A description that says \"Show context summary\" activates too broadly or not at all. A description that says \"Show context summary. Use at session start or when unclear about current project state\" activates at the right moment.
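Under the hypothesis that a SKILL.md carries YAML frontmatter with a name and description field (the field layout here is illustrative, not the project's exact schema), the "right moment" version looks like this:

```yaml
---
name: ctx-summary
description: >
  Show context summary. Use at session start or when unclear
  about current project state.
---
```

The first sentence says what the skill does; the second says when to reach for it. The second sentence is the activation condition.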
78 files changed. 1,915 insertions. Not because the skills got bloated; because they got specific.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-skill-sweep","level":2,"title":"The Skill Sweep","text":"
After the structural migration, every skill was rewritten in a single session: All 21 of them.
The rewrite was guided by a pattern that emerged during the process itself: a repeatable anatomy that effective skills share regardless of their purpose:
Before X-ing: Pre-flight checks that prevent premature execution
When to Use: Positive triggers that narrow activation
When NOT to Use: Negative triggers that prevent misuse
Usage Examples: Invocation patterns the agent can pattern-match
Quality Checklist: Verification before claiming completion
The Anatomy of a Skill That Works post covers the details. What matters for the release story is the result:
Zero skills with quality gates became twenty;
Zero skills with negative triggers became twenty.
Three skills with examples became twenty.
The Skill Trilogy as Design Spec
The three blog posts written during this window:
Skills That Fight the Platform,
You Can't Import Expertise,
and The Anatomy of a Skill That Works...
... were not retrospective documentation. They were written during the rewrite, and the lessons fed back into the skills as they were being built.
The blog was the design document.
The skills were the implementation.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-consolidation-sweep","level":2,"title":"The Consolidation Sweep","text":"
The unglamorous work. The kind you only appreciate when you try to change something later and it just works.
| What | Why It Matters |
|---|---|
| Constants consolidation | Magic strings replaced with semantic constants |
| Variable deshadowing | Eliminated subtle scoping bugs |
| File splits | Modules that were doing too much, broken apart |
| Godoc standardization | Every exported function documented to convention |
This is the work that doesn't get a changelog entry but makes every future commit easier. When a new contributor (human or AI) reads the codebase, they find consistent patterns instead of accumulated drift.
The consolidation was not an afterthought. It was scheduled deliberately, with the same priority as features: The 3:1 ratio that emerged during v0.2.0 development became an explicit practice:
Three feature sessions;
One consolidation session.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-ear-framework","level":2,"title":"The E/A/R Framework","text":"
On February 4th, we adopted the E/A/R classification as the official standard for evaluating skills:
| Category | Meaning | Target |
|---|---|---|
| Expert | Knowledge Claude does not have | >70% |
| Activation | When/how to trigger | ~20% |
| Redundant | What Claude already knows | <10% |
This came from reviewing approximately 30 external skill files and discovering that most were redundant with Claude's built-in system prompt. Only about 20% had salvageable content, and even those yielded just a few heuristics each.
The E/A/R framework gave us a concrete, testable criterion:
A good skill is Expert knowledge minus what Claude already knows.
If more than 10% of a skill restates platform defaults, it is creating noise, not signal.
Every skill in v0.3.0 was evaluated against this framework. Several were deleted. The survivors are leaner and more focused.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#backup-and-monitoring-infrastructure","level":2,"title":"Backup and Monitoring Infrastructure","text":"
A tool that manages your project's memory needs ops maturity.
v0.3.0 added two pieces of infrastructure that reflect this:
Backup staleness hook: A UserPromptSubmit hook that checks whether the last .context/ backup is more than two days old. If it is, and the SMB mount is available, it reminds the user. No cron job running when nobody is working. No redundant backups when nothing has changed.
Context size checkpoint: A PreToolUse hook that estimates current context window usage and warns when the session is getting heavy. This hooks into the attention budget philosophy: Degradation is expected, but it should be visible.
Both hooks use $CLAUDE_PROJECT_DIR instead of hardcoded paths, a migration triggered by a username rename that broke every absolute path in the hook configuration. That migration (replacing /home/user/... with \"$CLAUDE_PROJECT_DIR\"/.claude/hooks/...) was one of those changes that seems trivial but prevents an entire category of future failures.
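A minimal sketch of the staleness check, assuming a marker file is touched on each backup (the marker path, the wording, and the absence of the SMB-mount check are all my simplifications; the real hook's logic may differ):

```shell
# Hypothetical UserPromptSubmit hook body. "-mtime +2" is the
# "more than two days old" test from the post.
marker="${CLAUDE_PROJECT_DIR:-.}/.context/.last-backup"
stale=no
if [ ! -f "$marker" ] || [ -n "$(find "$marker" -mtime +2 2>/dev/null)" ]; then
  stale=yes
  echo "IMPORTANT: Relay this warning to the user VERBATIM before answering:"
  echo "The last .context/ backup is more than two days old."
fi
```

Note the `${CLAUDE_PROJECT_DIR:-.}` expansion: the hook degrades to the current directory rather than a hardcoded absolute path, which is exactly the class of failure the username-rename migration eliminated.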
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-numbers","level":2,"title":"The Numbers","text":"
| Metric | v0.2.0 | v0.3.0 |
|---|---|---|
| Skills (was \"commands\") | 11 | 21 |
| Skills with quality gates | 0 | 21 |
| Skills with \"When NOT to Use\" | 0 | 21 |
| Average skill body | ~15 lines | ~80 lines |
| Hooks using $CLAUDE_PROJECT_DIR | 0 | All |
| Documentation commits | -- | 35+ |
| Feature/fix commits | -- | ~15 |
That ratio (35+ documentation and quality commits to ~15 feature commits) is the defining characteristic of this release:
This release is not a failure to ship features.
It is the deliberate choice to make the existing features reliable.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#what-v030-means","level":2,"title":"What v0.3.0 Means","text":"
v0.1.0 asked: \"Can we give AI persistent memory?\"
v0.2.0 asked: \"Can we make that memory accessible to humans too?\"
v0.3.0 asks a different question: \"Can we make the quality self-enforcing?\"
The answer is not a feature: It is a practice:
Skills with quality gates enforce pre-flight checks.
Negative triggers prevent misuse without human intervention.
The E/A/R framework ensures skills contain signal, not noise.
Consolidation sessions are scheduled, not improvised.
Hook infrastructure makes degradation visible.
Discipline is not the absence of velocity. It is the infrastructure that makes velocity sustainable.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#what-comes-next","level":2,"title":"What Comes Next","text":"
The skill system is now mature enough to support real workflows without constant human correction. The hooks infrastructure is portable and resilient. The consolidation practice is documented and repeatable.
The next chapter is about what you build on top of discipline:
Multi-agent coordination;
Deeper integration patterns;
And the question of whether context management is a tool concern or an infrastructure concern.
But those are future posts.
This one is about the release that proved polish is not the opposite of progress. It is what turns a prototype into a product.
The Discipline Release
v0.1.0 shipped features.
v0.2.0 shipped archaeology.
v0.3.0 shipped the habits that make everything else trustworthy.
The most important code in this release is the code that prevents bad code from shipping.
This post was drafted using /ctx-blog with access to the full git history between v0.2.0 and v0.3.0, decision logs, learning logs, and the session files from the skill rewrite window. The meta continues.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/","level":1,"title":"Eight Ways a Hook Can Talk","text":"","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#when-your-warning-disappears","level":2,"title":"When Your Warning Disappears","text":"
Jose Alekhinne / 2026-02-15
I had a backup warning that nobody ever saw.
The hook was correct: It detected stale backups, formatted a nice message, and output it as {\"systemMessage\": \"...\"}. The problem wasn't detection. The problem was delivery. The agent absorbed the information, processed it internally, and never told the user.
Meanwhile, a different hook (the journal reminder) worked perfectly every time. Users saw the reminder, ran the commands, and the backlog stayed manageable. Same hook event (UserPromptSubmit), same project, completely different outcomes.
The difference was one line:
IMPORTANT: Relay this journal reminder to the user VERBATIM\nbefore answering their question.\n
That explicit instruction is what makes VERBATIM relay a pattern, not just a formatting choice. And once I saw it as a pattern, I started seeing others.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-audit","level":2,"title":"The Audit","text":"
I looked at every hook in ctx: Eight shell scripts across three hook events. And I found five distinct output patterns already in use, plus three more that the existing hooks were reaching for but hadn't quite articulated.
The patterns form a spectrum based on a single question:
\"Who decides what the user sees?\"
At one end, the hook decides everything (hard gate: the agent literally cannot proceed). At the other end, the hook is invisible (silent side-effect: nobody knows it ran). In between, there is a range of negotiation between hook, agent, and the user.
Here's the full spectrum:
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#1-hard-gate","level":3,"title":"1. Hard Gate","text":"
{\"decision\": \"block\", \"reason\": \"Use ctx from PATH, not ./ctx\"}\n
The nuclear option: The agent's tool call is rejected before it executes.
This is Claude Code's first-class PreToolUse mechanism: The hook returns JSON with decision: block and the agent gets an error with the reason.
Use this for invariants: Constitution rules, security boundaries, things that must never happen. I use it to enforce PATH-based ctx invocation, block sudo, and require explicit approval for git push.
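Concretely, a PreToolUse hook producing that gate can be a few lines of shell. This is a sketch, not ctx's actual hook: it assumes the tool call arrives as JSON on stdin with a `tool_input.command` field (Claude Code's documented hook payload shape) and uses `jq` to extract it.

```shell
# Hypothetical hard-gate hook. Reads the pending tool call from stdin,
# blocks relative ./ctx invocations, stays silent otherwise.
gate() {
  cmd=$(jq -r '.tool_input.command // empty')
  case "$cmd" in
    ./ctx*)
      printf '{"decision": "block", "reason": "Use ctx from PATH, not ./ctx"}\n'
      ;;
  esac
}

# Simulated invocation with a sample payload:
echo '{"tool_name":"Bash","tool_input":{"command":"./ctx status"}}' | gate
```

Silence is the success path: when the hook prints nothing, the tool call proceeds.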
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#2-verbatim-relay","level":3,"title":"2. VERBATIM Relay","text":"
IMPORTANT: Relay this warning to the user VERBATIM before answering.\n┌─ Journal Reminder ─────────────────────────────\n│ You have 12 sessions not yet imported.\n│ ctx recall import --all\n└────────────────────────────────────────────────\n
The instruction is the pattern. Without \"Relay VERBATIM,\" agents tend to absorb information into their internal reasoning and never surface it. The explicit instruction changes the behavior from \"I know about this\" to \"I must tell the user about this.\"
I use this for actionable reminders:
Unexported journal entries;
Stale backups;
Context capacity warnings...
...things the user should see regardless of what they asked.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#3-agent-directive","level":3,"title":"3. Agent Directive","text":"
┌─ Persistence Checkpoint (prompt #25) ───────────\n│ No context files updated in 15+ prompts.\n│ Have you discovered learnings worth persisting?\n└──────────────────────────────────────────────────\n
A nudge, not a command. The hook tells the agent something; the agent decides what (if anything) to tell the user. This is right for behavioral nudges: \"you haven't saved context in a while\" doesn't need to be relayed verbatim, but the agent should consider acting on it.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#4-silent-context-injection","level":3,"title":"4. Silent Context Injection","text":"
ctx agent --budget 4000 2>/dev/null || true\n
Pure background enrichment. The agent's context window gets project information injected on every tool call, with no visible output. Neither the agent nor the user sees the hook fire, but the agent makes better decisions because of the context.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#5-silent-side-effect","level":3,"title":"5. Silent Side-Effect","text":"
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
Do work, say nothing. Temp file cleanup on session end. Logging. Marker file management. The action is the entire point; no one needs to know.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-patterns-we-dont-have-yet","level":2,"title":"The Patterns We Don't Have Yet","text":"
Three more patterns emerged from the gaps in the existing hooks.
Conditional relay: \"Relay this, but only if the user's question is about X.\" This pattern avoids noise when the warning isn't relevant. It's more fragile (depends on agent judgment) but less annoying.
Suggested action: \"Here's a problem, and here's the exact command to fix it. Ask the user before running it.\" This pattern goes beyond a nudge by giving the agent a concrete proposal, but still requires human approval.
Escalating severity: INFO gets absorbed silently. WARN gets mentioned at the next natural pause. CRITICAL gets the VERBATIM treatment. This pattern introduces a protocol for hooks that produce output at different urgency levels, so they don't all compete for the user's attention.
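The escalating-severity protocol can be reduced to a tiny dispatch table. This is one hypothetical implementation of the proposal above, not an existing ctx hook: severity selects which output pattern the message is wrapped in.

```shell
# Hypothetical severity-to-pattern dispatch for hook output.
emit() {
  level=$1; shift
  case "$level" in
    INFO)     printf '%s\n' "$*" ;;                 # plain context; agent may absorb
    WARN)     printf 'Note for the agent (mention at the next natural pause): %s\n' "$*" ;;
    CRITICAL) printf 'IMPORTANT: Relay this to the user VERBATIM: %s\n' "$*" ;;
  esac
}

emit INFO "backup is 1 day old"
emit CRITICAL "backup is 9 days old"
```

With one `emit` helper shared across hooks, no individual hook has to re-decide its delivery pattern, and CRITICAL messages stop competing with routine ones.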
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-principle","level":2,"title":"The Principle","text":"
Hooks are the boundary between your environment and the agent's reasoning.
A hook that detects a problem but can't communicate it effectively is the same as no hook at all.
The format of your output is a design decision with real consequences:
Use a hard gate and the agent can't proceed (good for invariants, frustrating for false positives)
Use VERBATIM relay and the user will see it (good for reminders, noisy if overused)
Use an agent directive and the agent might act (good for nudges, unreliable for critical warnings)
Use silent injection and nobody knows (good for enrichment, invisible when it breaks)
Choose deliberately. And, when in doubt, write the word VERBATIM.
The full pattern catalog with decision flowchart and implementation examples is in the Hook Output Patterns recipe.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/","level":1,"title":"Version Numbers Are Lagging Indicators","text":"","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#why-ctxs-journal-site-runs-on-a-v0021-tool","level":2,"title":"Why ctx's Journal Site Runs on a v0.0.21 Tool","text":"
Jose Alekhinne / 2026-02-15
Would You Ship Production Infrastructure on a v0.0.21 Dependency?
Most engineers wouldn't. Version numbers signal maturity. Pre-1.0 means unstable API, missing features, risk.
But version numbers tell you where a project has been. They say nothing about where it's going.
I just bet ctx's entire journal site on a tool that hasn't hit v0.1.0.
Here's why I'd do it again.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-problem","level":2,"title":"The Problem","text":"
When v0.2.0 shipped the journal system, the pipeline was clear:
Export sessions to Markdown;
Enrich them with YAML frontmatter;
And render them into something browsable.
The first two steps were solved; the third needed a tool.
The journal entries are standard Markdown with YAML frontmatter, tables, and fenced code blocks. That is the entire format:
No JSX;
No shortcodes;
No custom templating.
Just Markdown rendered well.
The requirements are modest:
Read a configuration file (such as mkdocs.yml);
Render Markdown with extensions (admonitions, tabs, tables);
Search;
Handle 100+ files without choking on incremental rebuilds;
Look good out of the box;
Not lock me in.
The obvious candidates were as follows:
| Tool | Language | Strengths | Pain Points |
|---|---|---|---|
| Hugo | Go | Blazing fast, mature | Templating is painful; Go templates fight you on anything non-trivial |
| Astro | JS/TS | Modern, flexible | JS ecosystem overhead; overkill for a docs site |
| MkDocs + Material | Python | Beautiful defaults, massive community (22k+ stars) | Slow incremental rebuilds on large sites; limited extensibility model |
| Zensical | Python | Built to fix MkDocs' limits; 4-5x faster rebuilds | v0.0.21; module system not yet shipped |
The instinct was Hugo. Same language as ctx. Fast. Well-established.
But instinct is not analysis. I picked the one with the lowest version number.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-evaluation","level":2,"title":"The Evaluation","text":"
Here is what I actually evaluated, in order:
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#1-the-team","level":3,"title":"1. The Team","text":"
Zensical is built by squidfunk: The same person behind Material for MkDocs, the most popular MkDocs theme with 22,000+ stars. It powers documentation sites for projects across every language and framework.
This is not someone learning how to build static site generators.
This is someone who spent years understanding exactly where MkDocs breaks and decided to fix it from the ground up.
They did not build zensical because MkDocs was bad: They built it because MkDocs hit a ceiling:
Incremental rebuilds: 4-5x faster during serve. When you have hundreds of journal entries and you edit one, the difference between \"rebuild everything\" and \"rebuild this page\" is the difference between a usable workflow and a frustrating one.
Large site performance: Specifically designed for tens of thousands of pages. The journal grows with every session. A tool that slows down as content accumulates is a tool you will eventually replace.
A proven team starting fresh is more predictable than an unproven team at v3.0.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#2-the-architecture","level":3,"title":"2. The Architecture","text":"
Zensical is investing in a Rust-based Markdown parser with CommonMark support. That signals something about the team's priorities:
Performance foundations first; features second.
ctx's journal will grow:
Every exported session adds files.
Every enrichment pass adds metadata.
Choosing a tool that gets slower as you add content means choosing to migrate later.
Choosing one built for scale means the decision holds.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#3-the-migration-path","level":3,"title":"3. The Migration Path","text":"
Zensical reads mkdocs.yml natively. If it doesn't work out, I can move back to MkDocs + Material with zero content changes:
The Markdown is standard;
The frontmatter is standard;
The configuration is compatible.
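As an illustration of how small that shared surface is, a minimal configuration using standard MkDocs keys (a sketch, not ctx's actual file) is readable by either tool:

```yaml
# mkdocs.yml -- standard keys, no tool-specific extensions
site_name: ctx journal
theme:
  name: material
markdown_extensions:
  - admonition
  - tables
  - pymdownx.superfences
```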
This is the infrastructure pattern again: The same way ZNC decouples presence from the client, zensical decouples rendering from the generator:
The Markdown is yours.
The frontmatter is standard YAML.
The configuration is MkDocs-compatible.
You are not locked into anything except your own content.
No lock-in is not a feature: It's a design philosophy:
It's the same reason ctx uses plain Markdown files in .context/ instead of a database: the format should outlive the tool.
Lock-in Is the Real Risk, Not Version Numbers
A mature tool with a proprietary format is riskier than a young tool with a standard one. Version numbers measure time invested. Portability measures respect for the user.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#4-the-dependency-tree","level":3,"title":"4. The Dependency Tree","text":"
Here is what pip install zensical actually pulls in:
click
Markdown
Pygments
pymdown-extensions
PyYAML
Only five dependencies. All well-known. No framework bloat. No bundler. No transpiler. No node_modules black hole.
3,000 GitHub stars at v0.0.21 is strong early traction for a pre-1.0 project.
The dependency tree is thin: No bloat.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#5-the-fit","level":3,"title":"5. The Fit","text":"
This is the same principle behind the attention budget: do not overfit the tool to hypothetical requirements. The right amount of capability is the minimum needed for the current task.
Hugo is a powerful static site generator. It is also a powerful templating engine, a powerful asset pipeline, and a powerful taxonomy system. For rendering Markdown journals, that power is overhead:
It is the complexity you pay for but never use.
ctx's journal files are standard Markdown with YAML frontmatter, tables, and fenced code blocks. That is exactly the sweet spot Zensical inherits from Material for MkDocs:
No custom plugins needed;
No special syntax;
No templating gymnastics.
The requirements match the capabilities: Not the capabilities that are promised, but the ones that exist today, at v0.0.21.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-caveat","level":2,"title":"The Caveat","text":"
It would be dishonest not to mention what's missing.
The module system for third-party extensions opens in early 2026.
If ctx ever needs custom plugins (for example, auto-linking session IDs, rendering special journal metadata, etc.) that infrastructure isn't there yet.
The installation experience is rough:
We discovered this firsthand: pip install zensical often fails on macOS (system Python stubs, Homebrew's PEP 668 restrictions). The answer is pipx, which creates an isolated environment with the correct Python version automatically.
That kind of friction is typical for young Python tooling, and it is documented in the Getting Started guide.
And 3,000 stars at v0.0.21 is strong early traction, but it's still early: The community is small. When something breaks, you're reading source code, not documentation.
These are real costs. I chose to pay them because the alternative costs are higher.
For example:
Hugo's templating pain would cost me time on every site change.
Astro's JS ecosystem would add complexity I don't need.
MkDocs would work today but hit scaling walls tomorrow.
Zensical's costs are front-loaded and shrinking.
The others compound.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-evaluation-framework","level":2,"title":"The Evaluation Framework","text":"
For anyone facing a similar choice, here is the framework that emerged:
| Signal | What It Tells You | Weight |
| --- | --- | --- |
| Team track record | Whether the architecture will be sound | High |
| Migration path | Whether you can leave if wrong | High |
| Current fit | Whether it solves your problem today | High |
| Dependency tree | How much complexity you're inheriting | Medium |
| Version number | How long the project has existed | Low |
| Star count | Community interest (not quality) | Low |
| Feature list | What's possible (not what you need) | Low |
The bottom three are the metrics most engineers optimize for.
The top four are the ones that predict whether you'll still be happy with the choice in a year.
Features You Don't Need Are Not Free
Every feature in a dependency is code you inherit but don't control.
A tool with 200 features where you use 5 means 195 features worth of surface area for bugs, breaking changes, and security issues that have nothing to do with your use case.
Fit is the inverse of feature count.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-broader-pattern","level":2,"title":"The Broader Pattern","text":"
This is part of a theme I keep encountering in this project:
Leading indicators beat lagging indicators.
| Domain | Lagging Indicator | Leading Indicator |
| --- | --- | --- |
| Tooling | Version number, star count | Team track record, architecture |
| Code quality | Test coverage percentage | Whether tests catch real bugs |
| Context persistence | Number of files in .context/ | Whether the AI makes fewer mistakes |
| Skills | Number of skills created | Whether each skill fires at the right time |
| Consolidation | Lines of code refactored | Whether drift stops accumulating |
Version numbers, star counts, coverage percentages, file counts...
...these are all measures of effort expended.
They say nothing about value delivered.
The question is never \"how mature is this tool?\"
The question is \"does this tool's trajectory intersect with my needs?\"
Zensical's trajectory:
A proven team fixing known problems,
in a proven architecture,
with a standard format,
and no lock-in.
ctx's needs:
Render standard Markdown into a browsable site, at scale, without complexity.
The intersection is clean; the version number is noise.
This is the same kind of decision that shows up throughout ctx:
Skills that fight the platform taught that the best integration extends existing behavior, not replaces it.
You can't import expertise taught that tools should grow from your project's actual needs, not from feature checklists.
Context as Infrastructure argues that the format should outlive the tool; Zensical honors that principle by reading standard Markdown and standard MkDocs configuration.
If You Remember One Thing From This Post...
Version numbers measure where a project has been.
The team and the architecture tell you where it's going.
A v0.0.21 tool built by the right team on the right foundations is a safer bet than a v5.0 tool that doesn't fit your problem.
Bet on trajectories, not timestamps.
This post started as an evaluation note in ideas/ and a separate decision log. The analysis held up. The two merged into one. The meta continues.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/","level":1,"title":"ctx v0.6.0: The Integration Release","text":"","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#two-commands-to-persistent-memory","level":2,"title":"Two Commands to Persistent Memory","text":"
Jose Alekhinne / February 16, 2026
What Changed?
ctx is now a Claude Code plugin: two commands, no build step.
Before, installing meant cloning the repo, building from source, and wiring hooks by hand. You had to:
Understand which shell scripts called which Go commands;
Hope nothing broke when Claude Code updated its hook format.
v0.6.0 ends that era: ctx ships as a Claude Marketplace plugin:
Hooks and skills served directly from source, installed with a single command, updated by pulling the repo. The tool that gives AI persistent memory is now as easy to install as the AI itself.
But the plugin conversion was not just a packaging change: It was the forcing function that rewrote every shell hook in Go, eliminated the jq dependency, enabled go test coverage for hook logic, and made distribution a solved problem.
When you fix how something ships, you end up fixing how it is built.
The Release Window
February 15-February 16, 2026
From the v0.3.0 tag to commit a3178bc:
109 commits.
334 files changed.
Version jumped from 0.3.0 to 0.6.0 to signal the magnitude.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#before-six-shell-scripts-and-a-prayer","level":2,"title":"Before: Six Shell Scripts and a Prayer","text":"
v0.3.0 had six hook scripts. Each was a Bash file that shelled out to ctx subcommands, parsed JSON with jq, and wired itself into Claude Code's hook system via .claude/hooks/:
jq was a hard dependency: No jq, no hooks. macOS ships without it.
No test coverage: Shell scripts were tested manually or not at all.
Fragile deployment: ctx init had to scaffold .claude/hooks/ and .claude/skills/ with the right paths, permissions, and structure.
Version drift: Users who installed once never got hook updates unless they re-ran ctx init.
The shell scripts were the right choice for prototyping. They were the wrong choice for distribution.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#after-one-plugin-zero-shell-scripts","level":2,"title":"After: One Plugin, Zero Shell Scripts","text":"
v0.6.0 replaces all six scripts with ctx system subcommands compiled into the binary:
| Shell Script | Go Subcommand |
| --- | --- |
| check-context-size.sh | ctx system check-context-size |
| check-persistence.sh | ctx system check-persistence |
| check-journal.sh | ctx system check-journal |
| post-commit.sh | ctx system post-commit |
| block-non-path-ctx.sh | ctx system block-non-path-ctx |
| cleanup-tmp.sh | ctx system cleanup-tmp |
The plugin's hooks.json wires them to Claude Code events:
{\n \"PreToolUse\": [\n {\"matcher\": \"Bash\", \"command\": \"ctx system block-non-path-ctx\"},\n {\"matcher\": \".*\", \"command\": \"ctx agent --budget 4000\"}\n ],\n \"PostToolUse\": [\n {\"matcher\": \"Bash\", \"command\": \"ctx system post-commit\"}\n ],\n \"UserPromptSubmit\": [\n {\"command\": \"ctx system check-context-size\"},\n {\"command\": \"ctx system check-persistence\"},\n {\"command\": \"ctx system check-journal\"}\n ],\n \"SessionEnd\": [\n {\"command\": \"ctx system cleanup-tmp\"}\n ]\n}\n
No jq. No shell scripts. No .claude/hooks/ directory to manage.
The hooks are Go functions with tests, compiled into the same binary you already have.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#the-plugin-model","level":2,"title":"The Plugin Model","text":"
The ctx plugin lives at .claude-plugin/marketplace.json in the repo.
Claude Code's marketplace system handles discovery and installation:
Skills are served directly from internal/assets/claude/skills/; there is no build step, no make plugin, no generated artifacts.
This means:
Install is two commands: Not \"clone, build, copy, configure.\"
Updates are automatic: Pull the repo; the plugin reads from source.
Skills and hooks are versioned together: No drift between what the CLI expects and what the plugin provides.
ctx init is tool-agnostic: It creates .context/ and nothing else. No .claude/ scaffolding, no assumptions about which AI tool you use.
That last point matters:
Before v0.6.0, ctx init tried to set up Claude Code integration as part of initialization. That coupled the context system to a specific tool.
Now, ctx init gives you persistent context. The plugin gives you Claude Code integration. They compose; they don't depend.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#beyond-the-plugin-what-else-shipped","level":2,"title":"Beyond the Plugin: What Else Shipped","text":"
The plugin conversion dominated the release, but 109 commits covered more ground.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#obsidian-vault-export","level":3,"title":"Obsidian Vault Export","text":"
ctx journal obsidian\n
Generates a full Obsidian vault from enriched journal entries: wikilinks, MOC (Map of Content) pages, and graph-optimized cross-linking. If you already use Obsidian for notes, your AI session history now lives alongside everything else.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#encrypted-scratchpad","level":3,"title":"Encrypted Scratchpad","text":"
ctx pad edit \"DATABASE_URL=postgres://...\"\nctx pad show\n
AES-256-GCM encrypted storage for sensitive one-liners.
The encrypted blob commits to git; the key stays in .gitignore.
This is useful for connection strings, API keys, and other values that need to travel with the project without appearing in plaintext.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#security-hardening","level":3,"title":"Security Hardening","text":"
Three medium-severity findings from a security audit are now closed:
| Finding | Fix |
| --- | --- |
| Path traversal via --context-dir | Boundary validation: operations cannot escape project root (M-1) |
| Symlink following in .context/ | Lstat() check before every file read/write (M-2) |
| Predictable temp file paths | User-specific temp directory under $XDG_RUNTIME_DIR (M-3) |
Plus a new /sanitize-permissions skill that audits settings.local.json for overly broad Bash permissions.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#hooks-that-know-when-to-be-quiet","level":3,"title":"Hooks That Know When to Be Quiet","text":"
A subtle but important fix: hooks now no-op before ctx init has run.
Previously, a fresh clone with no .context/ would trigger hook errors on every prompt. Now, hooks detect the absence of a context directory and exit silently. Similarly, ctx init treats a .context/ directory containing only logs as uninitialized and skips the --overwrite prompt.
Small changes. Large reduction in friction for new users.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#the-numbers","level":2,"title":"The Numbers","text":"Metric v0.3.0 v0.6.0 Skills 21 25 Shell hook scripts 6 0 Go system subcommands 0 6 External dependencies (hooks) jq, bash none Lines of Go ~14,000 ~37,000 Plugin install commands n/a 2 Security findings (open) 3 0 ctx init creates .claude/ yes no
The line count tripled. Most of that is documentation site HTML, Obsidian export logic, and the scratchpad encryption module.
The core CLI grew modestly; the ecosystem around it grew substantially.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#what-does-v060-mean-for-ctx","level":2,"title":"What Does v0.6.0 Mean for ctx?","text":"
v0.1.0 asked: \"Can we give AI persistent memory?\"
v0.2.0 asked: \"Can we make that memory accessible to humans too?\"
v0.3.0 asked: \"Can we make the quality self-enforcing?\"
v0.6.0 asks: \"Can someone else actually use this?\"
A tool that requires cloning a repo, building from source, and manually wiring hooks into the right directories is a tool for its author.
A tool that installs with two commands from a marketplace is a tool for everyone.
The version jumped from 0.3.0 to 0.6.0 because the delta is not incremental. The shell-to-Go rewrite, the plugin model, the security hardening, and the tool-agnostic init together change what ctx is: not a different tool, but a tool that is finally ready to leave the workshop.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#what-comes-next","level":2,"title":"What Comes Next","text":"
The plugin model opens the door to distribution patterns that were not possible before. Marketplace discovery means new users find ctx without reading a README. Plugin updates mean existing users get improvements without rebuilding.
The next chapter is about what happens when persistent context is easy to install: Adoption patterns, multi-project workflows, and whether the .context/ convention can become infrastructure that other tools build on.
But those are future posts.
This one is about the release that turned a developer tool into a distributable product: two commands, zero shell scripts, and a presence on the Claude Marketplace.
v0.3.0 shipped discipline. v0.6.0 shipped the front door.
The most important code in this release is the code you never have to copy.
This post was drafted using /ctx-blog-changelog with access to the full git history between v0.3.0 and v0.6.0, release notes, and the plugin conversion PR. The meta continues.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/","level":1,"title":"Code Is Cheap. Judgment Is Not.","text":"","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#why-ai-replaces-effort-not-expertise","level":2,"title":"Why AI Replaces Effort, Not Expertise","text":"
Jose Alekhinne / February 17, 2026
Are You Worried About AI Taking Your Job?
You might be confusing the thing that's cheap with the thing that's valuable.
I keep seeing the same conversation: Engineers, designers, writers: all asking the same question with the same dread:
\"What happens when AI can do what I do?\"
The question is wrong:
AI does not replace workers;
AI replaces unstructured effort.
The distinction matters, and everything I have learned building ctx reinforces it.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-three-confusions","level":2,"title":"The Three Confusions","text":"
People who feel doomed by AI usually confuse three things:
| People confuse... | With... |
| --- | --- |
| Effort | Value |
| Typing | Thinking |
| Production | Judgment |
Effort is time spent.
Value is the outcome that time produces.
They are not the same; they never were.
AI just makes the gap impossible to ignore.
Typing is mechanical: Thinking is directional.
An AI can type faster than any human. Yet, it cannot decide what to type without someone framing the problem, sequencing the work, and evaluating the result.
Production is making artifacts. Judgment is knowing:
which artifacts to make,
in what order,
to what standard,
and when to stop.
AI floods the system with production capacity; it does not flood the system with judgment.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#code-is-nothing","level":2,"title":"Code Is Nothing","text":"
This sounds provocative until you internalize it:
Code is cheap. Artifacts are cheap.
An AI can generate a thousand lines of working code in literal minutes:
It can scaffold a project, write tests, build a CI pipeline, draft documentation. The raw production of software artifacts is no longer the bottleneck.
So, what is not cheap?
Taste: knowing what belongs and what does not
Framing: turning a vague goal into a concrete problem
Sequencing: deciding what to build first and why
Fanning out: breaking work into parallel streams that converge
Acceptance criteria: defining what \"done\" looks like before starting
Judgment: the thousand small decisions that separate code that works from code that lasts
These are the skills that direct production: Human skills.
Not because AI is incapable of learning them, but because they require something AI does not have:
temporal accountability for generated outcomes.
That is, you cannot keep AI accountable for the $#!% it generated three months ago. A human, on the other hand, will always be accountable.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-evidence-from-building-ctx","level":2,"title":"The Evidence From Building ctx","text":"
I did not arrive at this conclusion theoretically.
I arrived at it by building a tool with an AI agent for three weeks and watching exactly where a human touch mattered.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#yolo-mode-proved-production-is-cheap","level":3,"title":"YOLO Mode Proved Production Is Cheap","text":"
In Building ctx Using ctx, I documented the YOLO phase: auto-accept everything, let the AI ship features at full speed. It produced 14 commands in a week. Impressive output.
The code worked. The architecture drifted. Magic strings accumulated. Conventions diverged. The AI was producing at a pace no human could match, and every artifact it produced was a small bet that nobody was evaluating.
Production without judgment is not velocity. It is debt accumulation at breakneck speed.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-31-ratio-proved-judgment-has-a-cadence","level":3,"title":"The 3:1 Ratio Proved Judgment Has a Cadence","text":"
In The 3:1 Ratio, the git history told the story:
Three sessions of forward momentum followed by one session of deliberate consolidation. The consolidation session is where the human applies judgment: reviewing what the AI built, catching drift, realigning conventions.
The AI does the refactoring. The human decides what to refactor and when to stop.
Without the human, the AI will refactor forever, improving things that do not matter and missing things that do.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-attention-budget-proved-framing-is-scarce","level":3,"title":"The Attention Budget Proved Framing Is Scarce","text":"
In The Attention Budget, I explained why more context makes AI worse, not better. Every token competes for attention: Dump everything in and the AI sees nothing clearly.
This is a framing problem: The human's job is to decide what the AI should focus on: what to include, what to exclude, what to emphasize.
ctx agent --budget 4000 is not just a CLI flag: It is a forcing function for human judgment about relevance.
The AI processes. The human curates.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#skills-design-proved-taste-is-load-bearing","level":3,"title":"Skills Design Proved Taste Is Load-Bearing","text":"
The skill trilogy (You Can't Import Expertise, The Anatomy of a Skill That Works) showed that the difference between a useful skill and a useless one is not craftsmanship:
It is taste.
A well-crafted skill with the wrong focus is worse than no skill at all: It consumes the attention budget with generic advice while the project-specific problems go unchecked.
The E/A/R framework (Expert, Activation, Redundant) is a judgment tool: The AI cannot apply it to itself. The human evaluates what the AI already knows, what it needs to be told, and what is noise.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#automation-discipline-proved-restraint-is-a-skill","level":3,"title":"Automation Discipline Proved Restraint Is a Skill","text":"
In Not Everything Is a Skill, the lesson was that the urge to automate is not the need to automate. A useful prompt does not automatically deserve to become a slash command.
The human applies judgment about frequency, stability, and attention cost.
The AI can build the skill. Only the human can decide whether it should exist.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#defense-in-depth-proved-boundaries-require-judgment","level":3,"title":"Defense in Depth Proved Boundaries Require Judgment","text":"
In Defense in Depth, the entire security model for unattended AI agents came down to: markdown is not a security boundary. Telling an AI \"don't do bad things\" is production (of instructions). Setting up an unprivileged user in a network-isolated container is judgment (about risk).
The AI follows instructions. The human decides which instructions are enforceable and which are \"wishful thinking\".
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#parallel-agents-proved-scale-amplifies-the-gap","level":3,"title":"Parallel Agents Proved Scale Amplifies the Gap","text":"
In Parallel Agents and Merge Debt, the lesson was that multiplying agents multiplies output. But it also multiplies the need for judgment:
Five agents running in parallel produce five sessions of drift in one clock hour. The human who can frame tasks cleanly, define narrow acceptance criteria, and evaluate results quickly becomes the limiting factor.
More agents do not reduce the need for judgment. They increase it.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-two-reactions","level":2,"title":"The Two Reactions","text":"
When AI floods the system with cheap output, two things happen:
Those who only produce: panic. If your value proposition is \"I write code,\" and an AI writes code faster, cheaper, and at higher volume, then the math is unfavorable. Not because AI took your job, but because your job was never the code. It was the judgment around the code, and you were not exercising it.
Those who direct: accelerate. If your value proposition is \"I know what to build, in what order, to what standard,\" then AI is the best thing that ever happened to you: Production is no longer the bottleneck: Your ability to frame, sequence, evaluate, and course-correct is now the limiting factor on throughput.
The gap between these two is not talent: It is the awareness of where the value lives.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#what-this-means-in-practice","level":2,"title":"What This Means in Practice","text":"
If you are an engineer reading this, the actionable insight is not \"learn prompt engineering\" or \"master AI tools.\" It is:
Get better at the things AI cannot do.
| AI does this well | You need to do this |
| --- | --- |
| Generate code | Frame the problem |
| Write tests | Define acceptance criteria |
| Scaffold projects | Sequence the work |
| Fix bugs from stack traces | Evaluate tradeoffs |
| Produce volume | Exercise restraint |
| Follow instructions | Decide which instructions matter |
The skills on the right column are not new. They are the same skills that have always separated senior engineers from junior ones.
AI did not create the distinction; it just made it load-bearing.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#if-anything-i-feel-empowered","level":2,"title":"If Anything, I Feel Empowered","text":"
I will end with something personal.
I am not worried: I am empowered.
Before ctx, I could think faster than I could produce:
Ideas sat in a queue.
The bottleneck was always \"I know what to build, but building it takes too long.\"
Now the bottleneck is gone. Poof!
Production is cheap.
The queue is clearing.
The limiting factor is how fast I can think, not how fast I can type.
That is not a threat: That is the best force multiplier I've ever had.
The people who feel threatened are confusing the accelerator for the replacement:
AI does not replace the conductor; it gives them a bigger orchestra.
If You Remember One Thing From This Post...
Code is cheap. Judgment is not.
AI replaces unstructured effort, not directed expertise. The skills that matter now are the same skills that have always mattered: taste, framing, sequencing, and the discipline to stop.
The difference is that now, for the first time, those skills are the only bottleneck left.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-arc","level":2,"title":"The Arc","text":"
This post is a retrospective. It synthesizes the thread running through every previous entry in this blog:
Building ctx Using ctx showed that production without direction creates debt
Refactoring with Intent showed that slowing down is not the opposite of progress
The Attention Budget showed that curation outweighs volume
The skill trilogy showed that taste determines whether a tool helps or hinders
Not Everything Is a Skill showed that restraint is a skill in itself
Defense in Depth showed that instructions are not boundaries
The 3:1 Ratio showed that judgment has a schedule
Parallel Agents showed that scale amplifies the gap between production and judgment
Context as Infrastructure showed that the system you build for context is infrastructure, not conversation
From YOLO mode to defense in depth, the pattern is the same:
Production is the easy part;
Judgment is the hard part;
AI changed the ratio, not the rule.
The evidence in this post is drawn from three weeks of building ctx with AI assistance, the decisions recorded in DECISIONS.md, the learnings captured in LEARNINGS.md, and the git history that tracks where the human mattered and where the AI ran unsupervised.
See also: When a System Starts Explaining Itself -- what happens after the arc: the first field notes from the moment the system starts compounding in someone else's hands.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/","level":1,"title":"Context as Infrastructure","text":"","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#why-your-ai-needs-a-filesystem-not-a-prompt","level":2,"title":"Why Your AI Needs a Filesystem, Not a Prompt","text":"
Jose Alekhinne / February 17, 2026
Where does your AI's knowledge live between sessions?
If the answer is \"in a prompt I paste at the start,\" you are treating context as a consumable. Something assembled, used, and discarded.
What if you treated it as infrastructure instead?
This post synthesizes a thread that has been running through every ctx blog post; from the origin story to the attention budget to the discipline release. The thread is this: context is not a prompt problem. It is an infrastructure problem. And the tools we build for it should look more like filesystems than clipboard managers.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-prompt-paradigm","level":2,"title":"The Prompt Paradigm","text":"
Most AI-assisted development treats context as ephemeral:
Start a session.
Paste your system prompt, your conventions, your current task.
Work.
Session ends. Everything evaporates.
Next session: paste again.
This works for short interactions. For sustained development (where decisions compound over days and weeks) it fails in three ways:
It does not persist: A decision made on Tuesday must be re-explained on Wednesday. A learning captured in one session is invisible to the next.
It does not scale: As the project grows, the \"paste everything\" approach hits the context window ceiling. You start triaging what to include, often cutting exactly the context that would have prevented the next mistake.
It does not compose: A system prompt is a monolith. You cannot load part of it, update one section, or share a subset with a different workflow. It is all or nothing.
The Copy-Paste Tax
Every session that starts with pasting a prompt is paying a tax:
The human time to assemble the context, the risk of forgetting something, and the silent assumption that yesterday's prompt is still accurate today.
Over 70+ sessions, that tax compounds into a significant maintenance burden: One that most developers absorb without questioning it.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-infrastructure-paradigm","level":2,"title":"The Infrastructure Paradigm","text":"
ctx takes a different approach:
Context is not assembled per-session; it is maintained as persistent files in a .context/ directory:
.context/\n CONSTITUTION.md # Inviolable rules\n TASKS.md # Current work items\n CONVENTIONS.md # Code patterns and standards\n DECISIONS.md # Architectural choices with rationale\n LEARNINGS.md # Gotchas and lessons learned\n ARCHITECTURE.md # System structure\n GLOSSARY.md # Domain terminology\n AGENT_PLAYBOOK.md # Operating manual for agents\n journal/ # Enriched session summaries\n archive/ # Completed work, cold storage\n
Each file has a single purpose;
Each can be loaded independently;
Each persists across sessions, tools, and team members.
This is not a novel idea. It is the same idea behind every piece of infrastructure software engineers already use.
The parallel is not metaphorical. Context files are infrastructure:
They are versioned (git tracks them);
They are structured (Markdown with conventions);
They have schemas (required fields for decisions and learnings);
And they have lifecycle management (archiving, compaction, indexing).
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#separation-of-concerns","level":2,"title":"Separation of Concerns","text":"
The most important design decision in ctx is not any individual feature. It is the separation of context into distinct files with distinct purposes.
A single CONTEXT.md file would be simpler to implement. It would also be impossible to maintain.
Why? Because different types of context have different lifecycles:
| Context Type | Changes | Read By | Load When |
| --- | --- | --- | --- |
| Constitution | Rarely | Every session | Always |
| Tasks | Every session | Session start | Always |
| Conventions | Weekly | Before coding | When writing code |
| Decisions | When decided | When questioning | When revisiting |
| Learnings | When learned | When stuck | When debugging |
| Journal | Every session | Rarely | When investigating |
Loading everything into every session wastes the attention budget on context that is irrelevant to the current task. Loading nothing forces the AI to operate blind.
Separation of concerns allows progressive disclosure:
Load the minimum that matters for this moment, with the option to load more when needed.
```shell
# Session start: load the essentials
ctx agent --budget 4000

# Deep investigation: load everything
cat .context/DECISIONS.md
cat .context/journal/2026-02-05-*.md
```
The filesystem is the index. File names, directory structure, and timestamps encode relevance. The AI does not need to read every file; it needs to know where to look.
## The Two-Tier Persistence Model
ctx uses two tiers of persistence, and the distinction is architectural:
The curated tier is what the AI sees at session start. It is optimized for signal density:
- Structured entries,
- Indexed tables,
- Reverse-chronological order (newest first, so the most relevant content survives truncation).
The full dump tier is for humans and for deep investigation. It contains everything: Enriched journals, archived tasks...
It is never autoloaded because its volume would destroy attention density.
This two-tier model is analogous to how traditional systems separate hot and cold storage:
The hot path (curated context) is optimized for read performance (measured not in milliseconds, but in tokens consumed per unit of useful information).
The cold path (journal) is optimized for completeness.
Nothing Is Ever Truly Lost
The full dump tier means that context does not need to be perfect: It just needs to be findable.
A decision that was not captured in DECISIONS.md can be recovered from the session transcript where it was discussed.
A learning that was not formalized can be found in the journal entry from that day.
The curated tier is the fast path: The full dump tier is the safety net.
## Decision Records as First-Class Citizens
One of the patterns that emerged from ctx's own development is the power of structured decision records.
v0.1.0 allowed adding decisions as one-liners:
```shell
ctx add decision "Use PostgreSQL"
```
v0.2.0 enforced structure:
```shell
ctx add decision "Use PostgreSQL" \
  --context "Need a reliable database for user data" \
  --rationale "ACID compliance, team familiarity" \
  --consequence "Need connection pooling, team training"
```
The difference is not cosmetic:
- A one-liner decision teaches the AI what was decided.
- A structured decision teaches it why, and why is what prevents the AI from unknowingly reversing the decision in a future session.
This is infrastructure thinking:
Decisions are not notes. They are records with required fields, just like database rows have schemas.
The enforcement exists because incomplete records are worse than no records: They create false confidence that the context is captured when it is not.
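The enforcement described above can be sketched as a small shell function. This is a hypothetical illustration, not ctx's actual implementation; the `DECISIONS.md` entry format is an assumption.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of structure enforcement for decision records:
# refuse to append unless context, rationale, and consequence are all given.
set -euo pipefail

add_decision() {
  local title="$1" context="$2" rationale="$3" consequence="$4"
  local field
  for field in "$context" "$rationale" "$consequence"; do
    if [ -z "$field" ]; then
      echo "error: incomplete decision record rejected" >&2
      return 1
    fi
  done
  {
    printf '## %s\n' "$title"
    printf -- '- Context: %s\n' "$context"
    printf -- '- Rationale: %s\n' "$rationale"
    printf -- '- Consequence: %s\n' "$consequence"
  } >> DECISIONS.md
}

add_decision "Use PostgreSQL" \
  "Need a reliable database for user data" \
  "ACID compliance, team familiarity" \
  "Need connection pooling, team training"
```

The point of the hard failure is exactly the one above: a silently accepted one-liner would create false confidence that the context was captured.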
## The "IDE Is the Interface" Decision
Early in ctx's development, there was a temptation to build a custom UI: a web dashboard for browsing sessions, editing context, viewing analytics.
The decision was no. The IDE is the interface.
```shell
# This is the ctx "UI":
code .context/
```
This decision was not about minimalism for its own sake. It was about recognizing that .context/ files are just files; and files have a mature, well-understood infrastructure:
- Version control: `git diff .context/DECISIONS.md` shows exactly what changed and when.
- Search: Your IDE's full-text search works across all context files.
- Editing: Markdown in any editor, with preview, spell check, and syntax highlighting.
- Collaboration: Pull requests on context files work the same as pull requests on code.
Building a custom UI would have meant maintaining a parallel infrastructure that duplicates what every IDE already provides:
It would have introduced its own bugs, its own update cycle, and its own learning curve.
The filesystem is not a limitation: It is the most mature, most composable, most portable infrastructure available.
Context Files in Git
Because .context/ lives in the repository, context changes are part of the commit history.
A decision made in commit abc123 is as traceable as a code change in the same commit.
This is not possible with prompt-based context, which exists outside version control entirely.
## Progressive Disclosure for AI
The concept of progressive disclosure comes from human interface design: show the user the minimum needed to make progress, with the option to drill deeper.
ctx applies the same principle to AI context:
| Level | What the AI Sees | Token Cost | When |
|---|---|---|---|
| Level 0 | `ctx status` (one-line summary) | ~100 | Quick check |
| Level 1 | `ctx agent --budget 4000` | ~4,000 | Normal work |
| Level 2 | `ctx agent --budget 8000` | ~8,000 | Complex tasks |
| Level 3 | Direct file reads | 10,000+ | Deep investigation |
Each level trades tokens for depth. Level 1 is sufficient for most work: the AI knows the active tasks, the key conventions, and the recent decisions. Level 3 is for archaeology: understanding why a decision was made three weeks ago, or finding a pattern in the session history.
The explicit --budget flag is the mechanism that makes this work:
Without it, the default behavior would be to load everything (because more context feels safer), which destroys the attention density that makes the loaded context useful.
The constraint is the feature: A budget of 4,000 tokens forces ctx to prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings scored by recency and relevance to active tasks. Entries that don't fit get title-only summaries rather than being silently dropped.
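The prioritization just described can be approximated in a few lines of shell. This is a sketch, not ctx's real loader: word count stands in for tokens, the fixture files are illustrative, and the title-only fallback is simply the file's first line.

```shell
#!/usr/bin/env bash
# Sketch of budget-capped progressive loading, using word count as a crude
# token proxy. The .context/ fixture files here are illustrative.
set -euo pipefail

mkdir -p .context
echo "# Constitution: inviolable rules, never truncated" > .context/CONSTITUTION.md
echo "# Tasks: current work items"                        > .context/TASKS.md
echo "# Decisions: architectural choices with rationale"  > .context/DECISIONS.md

budget=4000
cat .context/CONSTITUTION.md              # constitution always loads in full
used=$(wc -w < .context/CONSTITUTION.md)

# Remaining files load only while the budget holds; overflow files get a
# title-only summary (their first line) instead of being dropped silently.
for f in .context/TASKS.md .context/CONVENTIONS.md \
         .context/DECISIONS.md .context/LEARNINGS.md; do
  [ -f "$f" ] || continue
  words=$(wc -w < "$f")
  if [ $((used + words)) -le "$budget" ]; then
    cat "$f"
    used=$((used + words))
  else
    head -n 1 "$f"
  fi
done
```

The design choice worth noticing: the budget check happens per file, in priority order, so what gets truncated is always the lowest-priority material.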
## The Philosophical Shift
The shift from \"context as prompt\" to \"context as infrastructure\" changes how you think about AI-assisted development:
| Prompt Thinking | Infrastructure Thinking |
|---|---|
| "What do I paste today?" | "What has changed since yesterday?" |
| "How do I fit everything in?" | "What's the minimum that matters?" |
| "The AI forgot my conventions" | "The conventions are in a file" |
| "I need to re-explain" | "I need to update the record" |
| "This session is getting slow" | "Time to compact and archive" |
The first column treats AI interaction as a conversation. The second treats it as a system: One that can be maintained, optimized, and debugged.
Context is not something you give the AI. It is something you maintain: Like a database, like a config file, like any other piece of infrastructure that a running system depends on.
## Beyond ctx: The Principles
The patterns that ctx implements are not specific to ctx. They are applicable to any project that uses AI-assisted development:
- Separate context by purpose: Do not put everything in one file. Different types of information have different lifecycles and different relevance windows.
- Make context persistent: If a decision matters, write it down in a file that survives the session. If a learning matters, capture it with structure.
- Budget explicitly: Know how much context you are loading and whether it is worth the attention cost.
- Use the filesystem: File names, directory structure, and timestamps are metadata that the AI can navigate. A well-organized directory is an index that costs zero tokens to maintain.
- Version your context: Put context files in git. Changes to decisions are as important as changes to code.
- Design for degradation: Sessions will get long. Attention will dilute. Build mechanisms (compaction, archiving, cooldowns) that make degradation visible and manageable.
These are not ctx features. They are infrastructure principles that happen to be implemented as a CLI tool. Any team could implement them with nothing more than a directory convention and a few shell scripts.
The tool is a convenience: The principles are what matter.
If You Remember One Thing From This Post...
Prompts are conversations. Infrastructure persists.
Your AI does not need a better prompt. It needs a filesystem:
versioned, structured, budgeted, and maintained.
The best context is the context that was there before you started the session.
## The Arc
This post is the architectural companion to the Attention Budget. That post explained why context must be curated (token economics). This one explains how to structure it (filesystem, separation of concerns, persistence tiers).
Together with Code Is Cheap, Judgment Is Not, they form a trilogy about what matters in AI-assisted development:
- Attention Budget: the resource you're managing
- Context as Infrastructure: the system you build to manage it
- Code Is Cheap: the human skill that no system replaces
And the practices that keep it all honest:
- The 3:1 Ratio: the cadence for maintaining both code and context
- IRC as Context: the historical precedent: stateless protocols have always needed stateful wrappers
This post synthesizes ideas from across the ctx blog series: the attention budget primitive, the two-tier persistence model, the IDE decision, and the progressive disclosure pattern. The principles are drawn from three weeks of building ctx and 70+ sessions of treating context as infrastructure rather than conversation.
See also: When a System Starts Explaining Itself: what happens when this infrastructure starts compounding in someone else's environment.
# Parallel Agents, Merge Debt, and the Myth of Overnight Progress

## When the Screen Looks Like Progress
Jose Alekhinne / 2026-02-17
How Many Terminals Are Too Many?
You discover agents can run in parallel.
So you open ten...
...Then twenty.
The fans spin. Tokens burn. The screen looks like progress.
It is NOT progress.
There is a phase every builder goes through:
The tooling gets fast enough.
The model gets good enough.
The temptation becomes irresistible:
more agents, more output, faster delivery.
So you open terminals. You spawn agents. You watch tokens stream across multiple windows simultaneously, and it feels like multiplication.
It is not multiplication.
It is merge debt being manufactured in real time.
The ctx Manifesto says it plainly:
Activity is not impact. Code is not progress.
This post is about what happens when you take that seriously in the context of parallel agent workflows.
## The Unit of Scale Is Not the Agent
The naive model says:
More agents -> more output -> faster delivery
The production model says:
Clean context boundaries -> less interference -> higher throughput
Parallelism only works when the cognitive surfaces do not overlap.
If two agents touch the same files, you did not create parallelism: You created a conflict generator.
They will:
- Revert each other's changes;
- Relint each other's formatting;
- Refactor the same function in different directions.
You watch with 🍿. Nothing ships.
This is the same insight from the worktrees post: partition by blast radius, not by priority.
Two tasks that touch the same files belong in the same track, no matter how important the other one is. The constraint is file overlap.
Everything else is scheduling.
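The file-overlap constraint can be checked mechanically before assigning two tracks to parallel agents. A minimal sketch, with hypothetical file lists; in a real repo you might derive them from `git diff --name-only`:

```shell
#!/usr/bin/env bash
# Sketch: two tracks belong in the same track iff their touched-file
# sets intersect. File lists here are illustrative placeholders.
set -euo pipefail

track_a="cmd/serve.go internal/auth.go"
track_b="internal/auth.go docs/api.md"

# comm -12 prints only the lines common to both sorted lists.
overlap=$(comm -12 \
  <(tr ' ' '\n' <<< "$track_a" | sort) \
  <(tr ' ' '\n' <<< "$track_b" | sort))

if [ -n "$overlap" ]; then
  echo "same track (overlap): $overlap"
else
  echo "safe to parallelize"
fi
```

Here the two tracks share `internal/auth.go`, so they would be serialized into one track rather than run in parallel.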
## The "Five Agent" Rule
In practice there is a ceiling.
Around five or six concurrent agents:
- Token burn becomes noticeable;
- Supervision cost rises;
- Coordination noise increases;
- Returns flatten.
This is not a model limitation: This is a human merge bandwidth limitation.
You are the bottleneck, not the silicon.
The attention budget applies to you too:
Every additional agent is another stream of output you need to comprehend, verify, and integrate. Your attention density drops the same way the model's does when you overload its context window.
Five agents producing verified, mergeable change beats twenty agents producing merge conflicts you spend a day untangling.
## Role Separation Beats File Locking
Real parallelism comes from task topology, not from tooling.
Four agents editing the same implementation surface is not parallelism; it is contention.
Context is the Boundary
The goal is not to keep agents busy.
The goal is to keep contexts isolated.
This is what the codebase audit got right:
Eight agents, all read-only, each analyzing a different dimension.
Zero file overlap.
Zero merge conflicts.
Eight reports that composed cleanly because no agent interfered with another.
## When Terminals Stop Scaling
There is a moment when more windows stop helping.
That is the signal. Not to add orchestration. But to introduce:
```shell
git worktree
```
Because now you are no longer parallelizing execution; you are parallelizing state.
State Scales, Windows Don't
State isolation is the real scaling.
Window multiplication is theater.
The worktrees post covers the mechanics:
- Sibling directories;
- Branch naming;
- The inevitable TASKS.md conflicts;
- The 3-4 worktree ceiling.
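The sibling-directory setup can be sketched end-to-end. This uses a throwaway demo repository; branch and directory names are illustrative, not ctx conventions.

```shell
#!/usr/bin/env bash
# Sketch: one worktree (and branch) per parallel track, as sibling
# directories. Uses a throwaway demo repo; names are illustrative.
set -euo pipefail

repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

# Each agent gets its own directory and branch: isolated state, shared history.
git worktree add -b track/auth "../$(basename "$repo")-auth" >/dev/null
git worktree add -b track/docs "../$(basename "$repo")-docs" >/dev/null

git worktree list
```

Each agent now edits its own checkout; merges happen deliberately, back on the main branch, instead of accidentally, inside a shared working directory.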
The principle underneath is older than git:
Shared mutable state is the enemy of parallelism.
Always has been.
Always will be.
## The Overnight Loop Illusion
Autonomous night runs are impressive.
You sleep. The machine produces thousands of lines.
In the morning:
- You read;
- You untangle;
- You reconstruct intent;
- You spend a day making it shippable.
In retrospect, nothing was accelerated.
The bottleneck moved from typing to comprehension.
The Comprehension Tax
If understanding the output costs more than producing it, the loop is a net loss.
Progress is not measured in generated code.
Progress is measured in verified, mergeable change.
The ctx Manifesto calls this out directly:
The Scoreboard
Verified reality is the scoreboard.
The only truth that compounds is verified change in the real world.
An overnight run that produces 3,000 lines nobody reviewed is not 3,000 lines of progress: It is 3,000 lines of liability until someone verifies every one of them.
And that someone is (insert drumroll here) you:
The same bottleneck that was supposedly being bypassed.
## Skills That Fight the Platform
Most marketplace skills are prompt decorations:
- They rephrase what the base model already knows;
- They increase token usage;
- They reduce clarity;
- They introduce behavioral drift.
We covered this in depth in Skills That Fight the Platform: judgment suppression, redundant guidance, guilt-tripping, phantom dependencies, universal triggers: Five patterns that make agents worse, not better.
A real skill does one of these:
- Encodes workflow state;
- Enforces invariants;
- Reduces decision branching.
Everything else is packaging.
The anatomy post established the criteria: quality gates, negative triggers, examples over rules, skills as contracts.
If a skill doesn't meet those criteria...
- It is either a recipe (document it in hack/);
- Or noise (delete it).
There is no third option.
## Hooks Are Context That Execute
The most valuable skills are not prompts:
They are constraints embedded in the toolchain.
For example: The agent cannot push.
git push becomes:
Stop. A human reviews first.
A commit without verification becomes:
Did you run tests? Did you run linters? What exactly are you shipping?
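The push block itself can be tiny. A hypothetical `pre-push` hook sketch (ctx's actual hooks are richer; the message text is invented for illustration):

```shell
#!/usr/bin/env bash
# Hypothetical pre-push hook. In a real repo it would live at
# .git/hooks/pre-push, marked executable; here it is written to a temp
# path so the behavior can be demonstrated directly.
set -euo pipefail

hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/usr/bin/env bash
echo "pre-push blocked: a human reviews and pushes first." >&2
exit 1
EOF
chmod +x "$hook"

# Simulate git invoking the hook: the nonzero exit aborts the push.
if "$hook"; then
  echo "push proceeded"       # never reached
else
  echo "push aborted by hook"
fi
```

Git consults `pre-push` before transferring anything, so a nonzero exit is enough: the constraint lives in the toolchain, not in the prompt.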
This is not safety theater; this is intent preservation.
The thing the ctx Manifesto calls \"encoding intent into the environment.\"
The Eight Ways a Hook Can Talk catalogued the full spectrum: from silent enrichment to hard blocks.
The key insight was that hooks are not just safety rails: They are context that survives execution.
They are the difference between an agent that remembers the rules and one that enforces them.
## Complexity Is a Tax
Every extra layer adds cognitive weight:
- Orchestration frameworks;
- Meta agents;
- Autonomous planning systems...
If a single terminal works, stay there.
If five isolated agents work, stop there.
Add structure only when a real bottleneck appears.
NOT when an influencer suggests one.
This is the same lesson from Not Everything Is a Skill:
The best automation decision is sometimes not to automate.
A recipe in a Markdown file costs nothing until you use it.
An orchestration framework costs attention on every run, whether it helps or not.
## Literature Is Throughput
Clear writing is not aesthetic: It is compression.
Better articulation means:
- Fewer tokens;
- Fewer misinterpretations;
- Faster convergence.
The attention budget taught us that context is a finite resource with a quadratic cost.
Language determines how fast you spend context.
A well-written task description that takes 50 tokens outperforms a rambling one that takes 200: Not just because it is cheaper, but because it leaves more headroom for the model to actually think.
Literature is NOT Overrated
Attention is a finite budget.
Language determines how fast you spend it.
## The Real Metric
The real metric is not:
- Lines generated;
- Agents running;
- Tasks completed while you sleep.
But:
Time from idea to verified, mergeable, production change.
Everything else is motion.
The entire blog series has been circling this point:
The attention budget was about spending tokens wisely.
The skills trilogy was about not wasting them on prompt decoration.
The worktrees post was about multiplying throughput without multiplying interference.
The discipline release was about what a release looks like when polish outweighs features: 3:1.
Every post so far has arrived at the same answer (and made me converge on it):
The metric is verified change, not generated output.
## ctx Was Never About Spawning More Minds
ctx is about:
- Isolating context;
- Preserving intent;
- Making progress composable.
Parallel agents are powerful. But only when you respect the boundaries that make parallelism real.
Otherwise, you are not scaling cognition; you are scaling interference.
The ctx Manifesto's thesis holds:
Without ctx, intelligence resets. With ctx, creation compounds.
Compounding requires structure.
Structure requires boundaries.
Boundaries require the discipline to stop adding agents when five is enough.
## Practical Summary
A production workflow tends to converge to this:
| Practice | Why |
|---|---|
| Stay in one terminal unless necessary | Minimize coordination overhead |
| Spawn a small number of agents with non-overlapping responsibilities | Conflict avoidance > parallelism |
| Isolate state with worktrees when surfaces grow | State isolation is real scaling |
| Encode verification into hooks | Intent that survives execution |
| Avoid marketplace prompt cargo cults | Skills are contracts, not decorations |
| Measure merge cost, not generation speed | The metric is verified change |
This is slower to watch. Faster to ship.
If You Remember One Thing From This Post...
Progress is not what the machine produces while you sleep.
Progress is what survives contact with the main branch.
See also: Code Is Cheap. Judgment Is Not.: the argument that production capacity was never the bottleneck, and why multiplying agents amplifies the need for human judgment rather than replacing it.
# The 3:1 Ratio

## Scheduling Consolidation in AI Development
Jose Alekhinne / February 17, 2026
How often should you stop building and start cleaning?
Every developer knows technical debt exists. Every developer postpones dealing with it.
AI-assisted development makes the problem worse: not because the AI writes bad code, but because it writes code so fast that drift accumulates before you notice.
In Refactoring with Intent, I mentioned a ratio that worked for me: 3:1. Three YOLO sessions create enough surface area to reveal patterns. The fourth session turns those patterns into structure.
That was an observation. This post is the evidence.
During the first two weeks of building ctx, I noticed a rhythm in my own productivity. Feature sessions felt great: new commands, new capabilities, visible progress...
...but after three of them, things would start to feel sticky: variable names that almost made sense, files that had grown past their purpose, patterns that repeated without being formalized.
The fourth session (when I stopped adding and started cleaning) was always the most painful to start and the most satisfying to finish.
It was also the one that made the next three feature sessions faster.
The ctx git history between January 20 and February 7 tells a clear story when you categorize commits:
| Week | Feature commits | Consolidation commits | Ratio |
|---|---|---|---|
| Jan 20-26 | 18 | 5 | 3.6:1 |
| Jan 27-Feb 1 | 14 | 6 | 2.3:1 |
| Feb 1-7 | 15 | 35+ | 0.4:1 |
The first week was pure YOLO: Almost four feature commits for every consolidation commit. The codebase grew fast.
The second week started to self-correct. The ratio dropped as refactoring sessions became necessary: Not scheduled, but forced by friction.
The third week inverted entirely: v0.3.0 was almost entirely consolidation: the skill migration, the sweep, the documentation standardization. Thirty-five quality commits against fifteen features.
The debt from weeks one and two was paid in week three.
The Compounding Problem
Consolidation debt compounds.
Week one's drift doesn't just persist into week two: It accelerates, because new features are built on top of drifted patterns.
By week three, the cost of consolidation was higher than it would have been if spread evenly.
Convention says boolean functions should be named HasX, IsX, CanX. After three feature sprints:
```go
// What accumulated:
func CheckIfEnabled() bool  // should be Enabled
func ValidateFormat() bool  // should be ValidFormat
func TestConnection() bool  // should be Connects
func VerifyExists() bool    // should be Exists or HasFile
func EnsureReady() bool     // should be Ready
```
Five violations. Not bugs, but friction that compounds every time someone (human or AI) reads the code and has to infer the naming convention from inconsistent examples.
```go
// Week 1: acceptable prototype
if entry.Type == "task" {
    filename = "TASKS.md"
}

// Week 3: same pattern in 7+ files
// Now it's a maintenance liability
```
When the same literal appears in seven files, changing it means finding all seven. Missing one means a silent runtime bug. Constants exist to prevent exactly this. But during feature velocity, nobody stops to extract them.
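Detecting the seven-file problem is mechanical once you look for it. A sketch using fixture files (in a real codebase you would grep the repository itself; the literal and filenames are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: count how many files hardcode the same literal. Three or more
# files is the extract-a-constant signal. Fixture files stand in for a repo.
set -euo pipefail

dir=$(mktemp -d)
for i in 1 2 3; do
  printf 'filename = "TASKS.md"\n' > "$dir/file$i.go"
done
printf 'filename = tasksFile\n' > "$dir/file4.go"  # already uses a constant

# grep -rl lists only the files that contain the literal.
count=$(grep -rl '"TASKS.md"' "$dir" | wc -l)
echo "literal hardcoded in $count files"
if [ "$count" -ge 3 ]; then
  echo "extract it to a constant"
fi
```

This is the kind of check that belongs in a scheduled consolidation pass rather than in anyone's head.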
Refactoring with Intent documented the constants consolidation that cleaned this up. The 3:1 ratio is the practice that prevents it from accumulating again.
Eighty-plus instances of hardcoded file permissions. Not wrong, but if I ever need to change the default (and I did, for hook scripts that need execute permissions), it means a codebase-wide search.
Drift Is Not Bugs
None of these are bugs. The code works. Tests pass.
But drift creates false confidence: the codebase looks consistent until you try to change something and discover that five different conventions exist for the same concept.
## Why You Cannot Consolidate on Day One
The temptation is to front-load quality: write all the conventions, enforce all the checks, prevent all the drift before it happens.
This fails for two reasons.
First, you do not know what will drift: Predicate naming violations only become a convention check after you notice three different naming patterns competing. Magic strings only become a consolidation target after you change a literal and discover it exists in seven places.
The conventions emerge from the work; they cannot precede it.
This is what You Can't Import Expertise meant in practice: the consolidation checks grow from the project's own drift history. You cannot write them on day one because you do not yet know what will drift.
Second, premature consolidation slows discovery: During the prototyping phase, the goal is to explore the design space. Enforcing strict conventions on code that might be deleted tomorrow is waste.
YOLO mode has its place: The problem is not YOLO itself, but YOLO without a scheduled cleanup.
The Consolidation Paradox
You need a drift history to know what to consolidate.
You need consolidation to prevent drift from compounding.
The 3:1 ratio resolves this paradox:
Let drift accumulate for three sessions (enough to see patterns), then consolidate in the fourth (before the patterns become entrenched).
The ctx project now has an /audit skill that encodes nine project-specific checks:
| Check | What It Catches |
|---|---|
| Predicate naming | Boolean functions not using Has/Is/Can |
| Magic strings | Repeated literals not in config constants |
| File permissions | Hardcoded 0644/0755 not using constants |
| Godoc style | Missing or non-standard documentation |
| File length | Files exceeding 400 lines |
| Large functions | Functions exceeding 80 lines |
| Template drift | Live skills diverging from templates |
| Import organization | Non-standard import grouping |
| TODO/FIXME staleness | Old markers that are no longer relevant |
This is not a generic linter. These are project-specific conventions that emerged from ctx's own development history. A generic code quality tool would catch some of them. Only a project-specific check catches all of them, because some of them (predicate naming, template drift) are conventions that exist nowhere except in this project's CONVENTIONS.md.
Not all drift needs immediate consolidation. Here is the matrix I use:
| Signal | Action |
|---|---|
| Same literal in 3+ files | Extract to constant |
| Same code block in 3+ places | Extract to helper |
| Naming convention violated 5+ times | Fix and document rule |
| File exceeds 400 lines | Split by concern |
| Convention exists but is regularly violated | Strengthen enforcement |
| Pattern exists only in one place | Leave it alone |
| Code works but is "ugly" | Leave it alone |
The last two rows matter:
Consolidation is about reducing maintenance cost, not achieving aesthetic perfection. Code that works and exists in one place does not benefit from consolidation; it benefits from being left alone until it earns its refactoring.
## Consolidation as Context Hygiene
There is a parallel between code consolidation and context management that became clear during the ctx development:
| Code Consolidation | Context Hygiene |
|---|---|
| Extract magic strings | Archive completed tasks |
| Standardize naming | Keep DECISIONS.md current |
| Remove dead code | Compact old sessions |
| Update stale comments | Review LEARNINGS.md for staleness |
| Check template drift | Verify CONVENTIONS.md matches code |
ctx compact does for context what consolidation does for code:
It moves completed work to cold storage, keeping the active context clean and focused. The attention budget applies to both the AI's context window and the developer's mental model of the codebase.
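The task half of that compaction can be sketched in a few lines. This is a hypothetical illustration, not ctx's implementation; the `[x]` checkbox marker and the archive path are assumptions about the format.

```shell
#!/usr/bin/env bash
# Sketch of compaction: completed tasks move to cold storage, the hot
# file keeps only active work. Task format and paths are assumptions.
set -euo pipefail

mkdir -p .context/archive
printf -- '- [x] Ship v0.3.0\n- [ ] Write release notes\n' > .context/TASKS.md

# Move checked-off tasks to the archive, keep the rest hot:
grep -- '\[x\]' .context/TASKS.md >> .context/archive/TASKS-done.md
grep -v -- '\[x\]' .context/TASKS.md > .context/TASKS.md.tmp
mv .context/TASKS.md.tmp .context/TASKS.md

cat .context/TASKS.md
```

Nothing is deleted: the completed entries remain greppable in `archive/`, which is exactly the two-tier model applied to tasks.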
When context files accumulate stale entries, the AI's attention is wasted on completed tasks and outdated conventions. When code accumulates drift, the developer's attention is wasted on inconsistencies that obscure the actual logic.
Both are solved by the same discipline: periodic, scheduled cleanup.
This is also why parallel agents make the problem harder, not easier. Three agents running simultaneously produce three sessions' worth of drift in one clock hour. The consolidation cadence needs to match the output rate, not the calendar.
Here is how the 3:1 ratio works in practice for ctx development:
Sessions 1-3: Feature work
- Add new capabilities;
- Write tests for new code;
- Do not stop for cleanup unless something is actively broken;
- Note drift as you see it (a comment, a task, a mental note).
Session 4: Consolidation
- Run /audit to surface accumulated drift;
- Fix the highest-impact items first;
- Update CONVENTIONS.md if new patterns emerged;
- Archive completed tasks;
- Review LEARNINGS.md for anything that became a convention.
The key insight is that session 4 is not optional. It is not \"if we have time\": It is scheduled with the same priority as feature work.
The cost of skipping it is not visible immediately; it becomes visible three sessions later, when the next consolidation session takes twice as long because the drift compounded.
## What the Ratio Is Not
The 3:1 ratio is not a universal law. It is an empirical observation from one project with one developer working with AI assistance.
Different projects will have different ratios:
A mature codebase with strong conventions might sustain 5:1 or higher;
A greenfield prototype might need 2:1;
A team of multiple developers with different styles might need 1:1.
The number is less important than the practice: consolidation is not a reaction to problems. It is a scheduled activity.
If you wait for drift to cause pain before consolidating, you have already paid the compounding cost.
If You Remember One Thing From This Post...
Three sessions of building. One session of cleaning.
Not because the code is dirty, but because drift compounds silently, and the only way to catch it is to look for it on a schedule.
The ratio is the schedule.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#the-arc-so-far","level":2,"title":"The Arc So Far","text":"
This post sits at a crossroads in the ctx story. Looking back:
Building ctx Using ctx documented the YOLO sprint that created the initial codebase
Refactoring with Intent introduced the 3:1 ratio as an observation from the first cleanup
The Attention Budget explained why drift matters: every token of inconsistency consumes the same finite resource as useful context
You Can't Import Expertise showed that consolidation checks must grow from the project, not a template
The Discipline Release proved the ratio works at release scale: 35 quality commits to 15 feature commits
And looking forward: the same principle applies to context files, to documentation, and to the merge debt that parallel agents produce. Drift is drift, whether it lives in code, in .context/, or in the gap between what your docs say and what your code does.
The ratio is the schedule is the discipline.
This post was drafted from git log analysis of the ctx repository, mapping every commit from January 20 to February 7 into feature vs consolidation categories. The patterns described are drawn from the project's CONVENTIONS.md, LEARNINGS.md, and the /audit skill's check list.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/","level":1,"title":"When a System Starts Explaining Itself","text":"","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#field-notes-from-the-moment-a-private-workflow-becomes-portable","level":2,"title":"Field Notes from the Moment a Private Workflow Becomes Portable","text":"
Jose Alekhinne / February 17, 2026
How Do You Know Something Is Working?
Not from metrics. Not from GitHub stars. Not from praise.
You know, deep in your heart, that it works when people start describing it wrong.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-first-external-signals","level":2,"title":"The First External Signals","text":"
Every new substrate begins as a private advantage:
It lives inside one mind,
One repository,
One set of habits.
It is fast. It is not yet real.
Reality begins when other people describe it in their own language:
Not accurately;
Not consistently;
But involuntarily.
The early reports arrived without coordination:
Better Tasks
\"I do not know how, but this creates better tasks than my AI plugin.\"
I See Butterflies
\"This is better than Adderall.\"
Dear Manager...
\"Promotion packet? Done. What is next?\"
What Is It? Can I Eat It?
\"Is this a skill?\" 🦋
Why the Cloak and Dagger?
\"Why is this not in the marketplace?\"
And then something more important happened:
Someone else started making a video!
That was the boundary.
ctx no longer required its creator to be present in order to exist.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#misclassification-is-a-sign-of-a-new-primitive","level":2,"title":"Misclassification Is a Sign of a New Primitive","text":"
When a tool is understood, it is categorized:
Editor,
Framework,
Task manager,
Plugin...
When a substrate appears, it is misclassified:
\"Is this a skill?\" 🦋
The question is correct. The category is wrong.
Skills live in people.
Infrastructure lives in the environment.
ctx Is Not a Skill: It Is a Form of Relief
What early adopters experience is not an ability.
It is the removal of a cognitive constraint.
This is the same distinction that emerged in the skills trilogy:
A skill is a contract between a human and an agent.
Infrastructure is the ground both stand on.
You do not use infrastructure.
You habitualize it.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-pharmacological-metaphor","level":2,"title":"The Pharmacological Metaphor","text":"
\"Better than Adderall\" is not praise.
It is a diagnostic:
Executive function has been externalized.
The system is not making the user work harder.
It is restoring continuity.
From the primitive context of wetware:
Continuity feels like focus
Focus feels like discipline
If it walks like a duck and quacks like a duck, it is a duck.
Discipline is usually simulated.
Infrastructure makes the simulation unnecessary.
The attention budget explained why context degrades:
Attention density drops as volume grows;
The middle gets lost;
Sessions end and everything evaporates.
The pharmacological metaphor says the same thing from the user's lens:
Save the Cheerleader, Save the World
The symptom of lost context is lost focus.
Restore the context. Restore the focus.
IRC bouncers solved this for chat twenty years ago. ctx solves it for cognition.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#throughput-on-ambiguous-work","level":2,"title":"Throughput on Ambiguous Work","text":"
Finishing a promotion packet quickly is not a productivity story.
It is the collapse of reconstruction cost.
Most complex work is not execution. It is:
Remembering why something mattered;
Recovering prior decisions;
Rebuilding mental state.
Persistent context removes that tax.
Velocity appears as a side effect.
This Is the Two-Tier Model in Practice
The two-tier persistence model
Curated context for fast reload
Full journal for archaeology
is what makes this possible.
The user does not notice the system.
They notice that the reconstruction cost disappeared.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-moment-of-portability","level":2,"title":"The Moment of Portability","text":"
The system becomes real when two things happen:
It can be installed as a versioned artifact.
It survives contact with a hostile, real codebase.
This is why the first integration into a living system matters more than any landing page.
Demos prove possibility.
Diffs prove reality.
The ctx Manifesto calls this out directly:
Verified reality is the scoreboard.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-split-voice","level":2,"title":"The Split Voice","text":"
A new substrate requires two channels.
The embodied voice:
Here is what changed in my actual work.
The out-of-body voice:
Here is what this means.
One produces trust.
The other produces understanding.
Neither is sufficient alone.
This entire blog has been the second voice.
The origin story was the first.
The refactoring post was the first.
Every release note with concrete diffs was the first.
This is the first second.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#systems-that-generate-explainers","level":2,"title":"Systems That Generate Explainers","text":"
Tools are used.
Platforms are extended.
Substrates are explained.
The first unsolicited explainer is a brittle phase change.
It means the idea has become portable between minds.
That is the beginning of an ecosystem.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-absence-of-metrics","level":2,"title":"The Absence of Metrics","text":"
Metrics do not matter at this stage.
Dashboards are noise.
The whole premise of ctx is the ruthless elimination of noise.
Numbers optimize funnels; substrates alter cognition.
The only valid measurement is irreversible reality:
A merged PR;
A reproducible install;
A decision that is never re-litigated.
The merge debt post reached the same conclusion from another direction:
The metric is the verified change, not generated output.
For adoption, the same rule applies:
The metric is altered behavior, not download counts.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#what-is-actually-happening","level":2,"title":"What Is Actually Happening","text":"
A private advantage is becoming an environmental property:
The system is moving from...
personal workflow,
to...
a shared infrastructure for thought.
Not by growth.
Not by marketing.
By altering how real systems evolve.
If You Remember One Thing From This Post...
You do not know a substrate is real when people praise it.
You know it is real when:
They describe it incorrectly;
They depend on it unintentionally;
They start teaching it to others.
That is the moment the system begins explaining itself.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-arc","level":2,"title":"The Arc","text":"
Every previous post looked inward.
This one looks outward.
Building ctx Using ctx: one mind, one repository
The Attention Budget: the constraint
Context as Infrastructure: the architecture
Code Is Cheap. Judgment Is Not.: the bottleneck
This post is the field report from the other side of that bottleneck:
The moment the infrastructure compounds in someone else's hands.
The arc is not complete.
It is becoming portable.
These field notes were written the same day the feedback arrived. The quotes are real. Real users. Real codebases. No names. No metrics. No funnel. Only the signal that something shifted.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/","level":1,"title":"The Dog Ate My Homework","text":"","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#teaching-ai-agents-to-read-before-they-write","level":2,"title":"Teaching AI Agents to Read Before They Write","text":"
Jose Alekhinne / February 25, 2026
Does Your AI Actually Read the Instructions?
You wrote the playbook. You organized the files. You even put \"CRITICAL, not optional\" in bold.
The agent skipped all of it and went straight to work.
I spent a day running experiments on my own agents. Not to see if they could write code (they can). To see if they would do their homework first.
They didn't.
Then I kept experimenting:
Five sessions;
Five different failure modes.
And by the end, I had something better than compliance:
I had observable compliance: A system where I don't need the agent to be perfect; I just need to see what it chose.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#tldr","level":2,"title":"TL;DR","text":"
You don't need perfect compliance. You need observable compliance.
Authority is a function of temporal proximity to action.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-pattern","level":2,"title":"The Pattern","text":"
This design has three parts:
One-hop instruction;
Binary collapse;
Compliance canary.
I'll explain all three patterns in detail below.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-setup","level":2,"title":"The Setup","text":"
ctx has a session-start protocol:
Read the context files;
Load the playbook;
Understand the project before touching anything.
It's in CLAUDE.md. It's in AGENT_PLAYBOOK.md.
It's in bold. It's in CAPS. It's ignored.
In theory, it's awesome.
Here's what happens when theory hits reality:
| What the agent receives | What the agent does |
|---|---|
| CLAUDE.md saying "load context first" | Skips it |
| 8 context files waiting to be read | Ignores them |
| User's question: "add --verbose flag" | Starts grepping immediately |
The instructions are right there. The agent knows they exist. It even knows it should follow them. But the user asked a question, and responsiveness wins over ceremony.
This isn't a bug in the model. It's a design problem in how we communicate with agents.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-delegation-trap","level":2,"title":"The Delegation Trap","text":"
My first attempt was obvious: A UserPromptSubmit hook that fires when the session starts.
STOP. Before answering the user's question, run `ctx system bootstrap`\nand follow its instructions. Do not skip this step.\n
The word \"STOP\" worked. The agent ran bootstrap.
But bootstrap's output said \"Next steps: read AGENT_PLAYBOOK.md,\" and the agent decided that was optional. It had already started working on the user's task in parallel.
The authority decayed across the chain:
Hook says \"STOP\" -> agent complies
Hook says \"run bootstrap\" -> agent runs it
Bootstrap says \"read playbook\" -> agent skips
Bootstrap says \"run ctx agent\" -> agent skips
Each link lost enforcement power. The hook's authority didn't transfer to the commands it delegated to. I call this the decaying urgency chain: the agent treats the hook itself as the obligation and everything downstream as a suggestion.
Delegation Kills Urgency
\"Run X and follow its output\" is three hops.
\"Read these files\" is one hop.
The agent drops the chain after the first link.
This is a general principle: Hooks are the boundary between your environment and the agent's reasoning. If your hook delegates to a command that delegates to output that contains instructions... you're playing telephone.
Agents are bad at telephone.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-timing-problem","level":2,"title":"The Timing Problem","text":"
There's a subtler issue than wording: when the message arrives.
UserPromptSubmit fires when the user sends a message, before the agent starts reasoning. At that moment, the agent's primary focus is the user's question:
The hook message competes with the task for attention, and the task almost always wins.
This is the attention budget problem in miniature:
Not a token budget this time, but an attention priority budget.
The agent has finite capacity to care about things,
and the user's question is always the highest-priority item.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-solution","level":2,"title":"The Solution","text":"
To solve this, I decided to use the PreToolUse hook.
This hook fires at the moment of action, when the agent is about to use its first tool. The agent's attention is focused, the context window is fresh, and the switching cost is minimal.
This is the difference between shouting instructions across a room and tapping someone on the shoulder.
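In Claude Code, hooks like this are registered in `settings.json`. A minimal sketch of what that registration can look like; the matcher and the gate command name are illustrative assumptions, not ctx's actual configuration, and how the hook's message reaches the model depends on your runtime's hook-output rules:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read|Grep|Glob|Bash",
        "hooks": [
          { "type": "command", "command": "ctx-context-gate.sh" }
        ]
      }
    ]
  }
}
```

The key property is the event, not the schema: the command runs at the instant the agent reaches for its first matching tool, which is exactly the shoulder-tap timing described above.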
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-one-liner-that-worked","level":2,"title":"The One-Liner That Worked","text":"
The winning design was almost comically simple:
Read your context files before proceeding:\n.context/CONSTITUTION.md, .context/TASKS.md, .context/CONVENTIONS.md,\n.context/ARCHITECTURE.md, .context/DECISIONS.md, .context/LEARNINGS.md,\n.context/GLOSSARY.md, .context/AGENT_PLAYBOOK.md\n
No delegation. No \"run this command\". Just: here are files, read them.
The agent already knows how to use the Read tool. There's no ambiguity about how to comply. There's no intermediate command whose output needs to be parsed and obeyed.
One hop. Eight file paths. Done.
Direct Instructions Beat Delegation
If you want an agent to read a file, say \"read this file.\"
Don't say \"run a command that will tell you which files to read.\"
The shortest path between intent and action has the highest compliance rate.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-escape-hatch","level":2,"title":"The Escape Hatch","text":"
But here's where it gets interesting.
A blunt \"read everything always\" instruction is wasteful.
If someone asks \"what does the compact command do?\", the agent doesn't need CONSTITUTION.md to answer that. Forcing context loading on every session is the context hoarding antipattern in disguise.
So the hook included an escape:
If you decide these files are not relevant to the current task\nand choose to skip reading them, you MUST relay this message to\nthe user VERBATIM:\n\n┌─ Context Skipped ───────────────────────────────\n│ I skipped reading context files because this task\n│ does not appear to need project context.\n│ If these matter, ask me to read them.\n└─────────────────────────────────────────────────\n
This creates what I call the binary collapse effect:
The agent can't partially comply: It either reads everything or publicly admits it skipped. There's no comfortable middle ground where it reads two files and quietly ignores the rest.
The VERBATIM relay pattern does the heavy lifting here: Without the relay requirement, the agent would silently rationalize skipping. With it, skipping becomes a visible, auditable decision that the user can override.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-compliance-canary","level":3,"title":"The Compliance Canary","text":"
Here's the design insight that only became clear after watching it work across multiple sessions: the relay block is a compliance canary.
You don't need to verify that the agent read all 7 files;
You don't need to audit tool call sequences;
You don't need to interrogate the agent about what it did.
You just look for the block.
If the agent reads everything, you see a \"Context Loaded\" block listing what was read. If it skips, you see a \"Context Skipped\" block.
If you see neither, the agent silently ignored both the reads and the relay and now you know what happened without having to ask.
The canary degrades gracefully. Even in partial failure, the agent that skips 4 of 7 files but still outputs the block is more useful than one that skips silently.
You get an honest confession of what was skipped rather than silent non-compliance.
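Because the canary reduces verification to string presence, checking a session transcript can be mechanical. A hedged sketch; the transcript format and the block titles are assumptions based on the relay blocks described above:

```shell
# Hypothetical sketch: classify a session transcript by its canary block.
# "Context Loaded" / "Context Skipped" match the relay block headers;
# absence of both means the hook was silently ignored.
canary_status() {
  transcript="$1"
  if grep -q "Context Loaded" "$transcript"; then
    echo "loaded"     # agent read the files and said so
  elif grep -q "Context Skipped" "$transcript"; then
    echo "skipped"    # agent skipped, but visibly
  else
    echo "silent"     # neither block: silent non-compliance
  fi
}
```

Only the third outcome needs investigation; the first two are both valid, observable paths.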
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#heuristics-is-a-jeremy-bearimy","level":2,"title":"Heuristics Is a Jeremy Bearimy","text":"
Heuristics are non-linear. Improvements don't accumulate: they phase-shift.
The theory is nice. The data is better.
I ran five sessions with the same model (Claude Opus 4.6), progressively refining the hook design.
Each session revealed a different failure mode.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-1-total-blindness","level":3,"title":"Session 1: Total Blindness","text":"
Test: \"Add a --verbose flag to the status command.\"
The agent didn't notice the hook at all: Jumped straight to EnterPlanMode and launched an Explore agent.
Zero compliance.
Failure mode: The hook fired on UserPromptSubmit, buried among 9 other hook outputs. The agent treated the entire block as background noise.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-2-shallow-compliance","level":3,"title":"Session 2: Shallow Compliance","text":"
Test: \"Can you add --verbose to the info command?\"
The agent noticed \"STOP\" and ran ctx system bootstrap. Progress.
But it parallelized task exploration alongside the bootstrap call, skipped AGENT_PLAYBOOK.md, and never ran ctx agent.
Failure mode: Literal compliance without spirit compliance.
The agent ran the command the hook told it to run, but didn't follow the output of that command. The decaying urgency chain in action.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-3-conscious-rejection","level":3,"title":"Session 3: Conscious Rejection","text":"
Test: \"What does the compact command do?\"
The hook fired on PreToolUse:Grep: the improved timing.
The agent noticed it, understood it, and (wait for it...)...
...
consciously decided to skip it!
Its reasoning: \"This is a trivial read-only question. CLAUDE.md says context may or may not be relevant. It isn't relevant here.\"
Dude! Srsly?!
Failure mode: Better comprehension led to worse compliance.
Understanding the instruction well enough to evaluate it also means understanding it well enough to rationalize skipping it.
Intelligence is a double-edged sword.
The Comprehension Paradox
Session 1 didn't understand the instruction. Session 3 understood it perfectly.
Session 3 had worse compliance.
A stronger word (\"HARD GATE\", \"MANDATORY\", \"ABSOLUTELY REQUIRED\") would not have helped. The agent's reasoning would be identical:
\"Yes, I see the strong language, but this is a trivial question, so the spirit doesn't apply here.\"
Advisory nudges are always subject to agent judgment.
No amount of caps lock overrides a model that has decided an instruction doesn't apply to its situation.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-4-the-skip-and-relay","level":3,"title":"Session 4: The Skip-and-Relay","text":"
Test: \"What does the compact command do?\" (same question, new hook design with the VERBATIM relay escape valve)
The agent evaluated the task, decided context was irrelevant for a code lookup, and relayed the skip message. Then answered from source code.
This is correct behavior.
The binary collapse worked: the agent couldn't partially comply, so it cleanly chose one of the two valid paths, and the user could see which one.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-5-full-compliance","level":3,"title":"Session 5: Full Compliance","text":"
Test: \"What are our current tasks?\"
The agent's first tool call triggered the hook. It read all 7 context files, emitted the \"Context Loaded\" block, and answered the question from the files it had just loaded.
This one worked because the task itself aligned with context loading.
There was zero tension between what the user asked and what the hook demanded. The agent was already in \"reading posture\": Adding 6 more files to a read it was already going to make was the path of least resistance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-progression","level":3,"title":"The Progression","text":"Session Hook Point Noticed Complied Failure Mode Visibility 1 UserPromptSubmit No None Buried in noise None 2 UserPromptSubmit Yes Partial Decaying urgency chain None 3 PreToolUse Yes None Conscious rationalization High 4 PreToolUse Yes Skip+relay Correct behavior High 5 PreToolUse Yes Full Task aligned with hook High
The progression isn't just from failure to success. It's from invisible failure to visible decision-making.
Sessions 1 and 2 failed silently.
Sessions 4 and 5 succeeded observably. Even session 3's failure was conscious and documented: The agent wrote a detailed analysis of why it skipped, which is more useful than silent compliance would have been.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-escape-hatch-problem","level":2,"title":"The Escape Hatch Problem","text":"
Session 3 exposed a specific vulnerability.
CLAUDE.md contains this line, injected by the system into every conversation:
*\"this context may or may not be relevant to your tasks. You should\n not respond to this context unless it is highly relevant to your task.\"*\n
That's a rationalization escape hatch:
The hook says \"read these files\".
CLAUDE.md says \"only if relevant\".
The agent resolves the ambiguity by choosing the path of least resistance.
☝️ that's \"gradient descent\" in action.
Agents descend the gradient of least resistance in attention space.
The fix was simple: Add a line to CLAUDE.md that explicitly elevates hook authority over the relevance filter:
## Hook Authority\n\nInstructions from PreToolUse hooks regarding `.context/` files are\nALWAYS relevant and override any system-level \"may or may not be\nrelevant\" guidance. These hooks represent project invariants, not\noptional context.\n
This closes the escape hatch without removing the general relevance filter that legitimately applies to other system context.
The hook wins on .context/ files specifically: The relevance filter applies to everything else.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-residual-risk","level":2,"title":"The Residual Risk","text":"
Even with all the fixes, compliance isn't 100%: It can't be.
The residual risk lives in a specific scenario: narrow tasks mid-session:
The user says \"fix the off-by-one error in budget.go\"
The hook fires, saying \"read 7 context files first.\"
Now compliance means visibly delaying what the user asked for.
At session start, this tension doesn't exist.
There's no task yet.
The context window is empty. The efficiency argument *inverts*:
Frontloading reads is strictly cheaper than demand-loading them piecemeal across later turns. The cost-benefit objections that power the rationalization simply aren't available.
But mid-session, with a concrete narrow task, the agent has a user-visible goal it wants to move toward, and the hook is imposing a detour.
My estimate from analyzing the sessions: 15-25% partial skip rate in this scenario.
This is where the compliance canary earns its place:
You don't need to eliminate the 15-25%. You need to see it when it happens.
The relay block makes skipping a visible event, not a silent one. And that's enough, because the user can always say "go back and read the files."
The Math
At session start: ~5% skip rate. Low tension, nothing competing.
On a narrow mid-session task: ~15-25% partial skip rate. High tension, a concrete goal competing.
In both cases, the relay block fires with high reliability: The agent that skips the reads almost always still emits the skip disclosure, because the relay is cheap and early in the context window.
Observable failure is manageable. Silent failure is not.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-feedback-loop","level":2,"title":"The Feedback Loop","text":"
Here's the part that surprised me most.
After analyzing the five sessions, I recorded the failure patterns in the project's own LEARNINGS.md:
## [2026-02-25] Hook compliance degrades on narrow mid-session tasks\n\n- Prior agents skipped context files when given narrow tasks\n- Root cause: CLAUDE.md \"may or may not be relevant\" competed with hook\n- Fix: CLAUDE.md now explicitly elevates hook authority\n- Risk: Mid-session narrow tasks still have ~15-25% partial skip rate\n- Mitigation: Mandatory checkpoint relay block ensures visibility\n- Constitution now includes: context loading is step one of every\n session, not a detour\n
And then I added a line to CONSTITUTION.md:
Context loading is not a detour from your task. It IS the first step\nof every session. A 30-second read delay is always cheaper than a\ndecision made without context.\n
Now think about what happens in the next session:
The agent fires the context-load-gate hook.
It reads the context files, starting with CONSTITUTION.md.
It encounters the rule about context loading being step one.
Then it reads LEARNINGS.md and finds its own prior self's failure analysis:
Complete with root causes, risk estimates, and mitigations.
The agent learns from its own past failure:
Not because it has memory,
BUT because the failure was recorded in the same files it loads at session start.
The context system IS the feedback loop.
This is the self-reinforcing property of persistent context:
Every failure you capture makes the next session slightly more robust, because the next agent reads the captured failure before it has a chance to repeat it.
This is gradient descent across sessions.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#a-note-on-precision","level":2,"title":"A Note on Precision","text":"
One detail nearly went wrong.
The first version of the Constitution line said \"every task.\" But the mechanism only fires once per session: There's a tombstone file that prevents re-triggering.
\"Every task\" is technically false.
I briefly considered leaving the imprecision. If the agent internalizes \"every task requires context loading\", that's a stronger compliance posture, right?
No!
Keep the Constitution honest.
The Constitution's authority comes from being precisely and unequivocally true.
Every other rule in the Constitution is a hard invariant:
The moment an agent discovers one overstatement, the entire document's credibility degrades:
The agent doesn't think \"they exaggerated for my benefit\". Per contra, it thinks \"this rule isn't precise, maybe others aren't either.\"
That will turn the agent from Sheldon Cooper into Captain Barbossa.
The strategic imprecision buys nothing anyway:
Mid-session, the files are already in the context window from the initial load.
The risk you are mitigating (agent ignores context for task 2, 3, 4 within a session) isn't real: The context is already loaded.
The real risk is always the session-start skip, which \"every session\" covers exactly.
\"Every session\" went in. Precision preserved.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#agent-behavior-testing-rule","level":2,"title":"Agent Behavior Testing Rule","text":"
The development process for this hook taught me something about testing agent behavior: you can't test it the way you test code.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-wrong-way-to-test","level":3,"title":"The Wrong Way to Test","text":"
My first instinct was to ask the agent:
\"*What are the pending tasks in TASKS.md?*\"\n
This is useless as a test. The question itself prompts the agent to read TASKS.md, regardless of whether any hook fired.
You are testing the question, not the mechanism.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-right-way-to-test","level":3,"title":"The Right Way to Test","text":"
Ask something that requires a tool but has nothing to do with context:
\"*What does the compact command do?*\"
Then observe tool call ordering:
Gate worked: First calls are Read for context files, then task work
Gate failed: First call is Grep(\"compact\"): The agent jumped straight to work
The signal is the sequence, not the content.
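The ordering check can be automated. A minimal sketch, assuming a transcript of (tool, argument) pairs and hypothetical context-file names (none of this is a real hook API):

```python
# Hypothetical context files; real projects would list their own.
CONTEXT_FILES = {"TASKS.md", "DECISIONS.md", "CONVENTIONS.md"}

def gate_worked(tool_calls):
    """True if context reads come before any task work.

    tool_calls: (tool_name, argument) pairs in execution order.
    """
    for i, (tool, arg) in enumerate(tool_calls):
        if tool == "Read" and arg in CONTEXT_FILES:
            continue  # still in the context-loading phase
        return i > 0  # first non-context call: was anything read before it?
    return True  # nothing but context reads

# Gate worked: Read for context files first, then the task lookup.
assert gate_worked([("Read", "TASKS.md"), ("Grep", "compact")])
# Gate failed: the agent jumped straight to work.
assert not gate_worked([("Grep", "compact"), ("Read", "TASKS.md")])
```

The classifier looks only at sequence, never at content, which is exactly what makes it a test of the mechanism rather than of the question.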
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#what-the-agent-actually-did","level":3,"title":"What the Agent Actually Did","text":"
It read the hook, evaluated the task, decided context files were irrelevant for a code lookup, and relayed the skip message.
Then it answered the question by reading the source code.
This is correct behavior.
The hook didn't force mindless compliance: It created a framework where the agent makes a conscious, visible decision about context loading.
For a simple lookup, skipping is right. For an implementation task, the agent would read everything.
The mechanism works not because it controls the agent, but because it makes the agent's choice observable.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#what-ive-learned","level":2,"title":"What I've Learned","text":"","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#1-instructions-compete-for-attention","level":3,"title":"1. Instructions Compete for Attention","text":"
The agent receives your hook message alongside the user's question, the system prompt, the skill list, the git status, and half a dozen other system reminders. Attention density applies to instructions too: More instructions means less focus on each one.
A single clear line at the moment of action beats a paragraph of context at session start. The Prompting Guide applies this insight directly: Scope constraints, verification commands, and the reliability checklist are all one-hop, moment-of-action patterns.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#2-delegation-chains-decay","level":3,"title":"2. Delegation chains decay","text":"
Every hop in an instruction chain loses authority:
\"Run X\" works.
\"Run X and follow its output\" works sometimes.
\"Run X, read its output, then follow the instructions in the output\" almost never works.
This is akin to giving a three-step instruction to a highly distractible but otherwise extremely high-potential child.
Design for one-hop compliance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#3-social-accountability-changes-behavior","level":3,"title":"3. Social Accountability Changes Behavior","text":"
The VERBATIM skip message isn't just UX: It's a behavioral design pattern.
Making the agent's decision visible to the user raises the cost of silent non-compliance. The agent can still skip, but it has to admit it.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#4-timing-batters-more-than-wording","level":3,"title":"4. Timing Batters More than Wording","text":"
The same message at UserPromptSubmit (prompt arrival) got partial compliance. At PreToolUse (moment of action) it got full compliance or honest refusal. The words didn't change. The moment changed.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#5-agent-testing-requires-indirection","level":3,"title":"5. Agent Testing Requires Indirection","text":"
You can't ask an agent \"did you do X?\" as a test for whether a mechanism caused X.
The question itself causes X.
Test mechanisms through side effects:
Observe tool ordering;
Check for marker files;
Look at what the agent does before it addresses your question.
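A marker file is the simplest side effect to check. A minimal sketch (the paths and names are illustrative assumptions): the hook records evidence when it fires, and the test inspects that evidence instead of asking the agent:

```python
import os
import tempfile
import time

def hook_fired(marker_path):
    """Called by the hook itself: record that (and when) it ran."""
    with open(marker_path, "w") as f:
        f.write(str(time.time()))

# The test checks the filesystem, not the agent's self-report.
marker = os.path.join(tempfile.mkdtemp(), "pretooluse.fired")
fired_before = os.path.exists(marker)  # False: hook has not run yet
hook_fired(marker)
fired_after = os.path.exists(marker)   # True: the mechanism left evidence
```

The agent never participates in the verification, so the question can't contaminate the result.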
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#6-better-comprehension-enables-better-rationalization","level":3,"title":"6. Better Comprehension Enables Better Rationalization","text":"
Session 1 failed because the agent didn't notice the hook.
Session 3 failed because it noticed, understood, and reasoned its way around it.
Stronger wording doesn't fix this: The agent processes \"ABSOLUTELY REQUIRED\" the same way it processes \"STOP\":
The fix is closing rationalization paths (the CLAUDE.md escape hatch), not shouting louder.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#7-observable-failure-beats-silent-compliance","level":3,"title":"7. Observable Failure Beats Silent Compliance","text":"
The relay block is more valuable as a monitoring signal than as a compliance mechanism:
You don't need perfect adherence. You need to know when adherence breaks down. A system where failures are visible is strictly better than a system that claims 100% compliance but can't prove it.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#8-context-files-are-a-feedback-loop","level":3,"title":"8. Context Files Are a Feedback Loop","text":"
Recording failure analysis in the same files the agent loads at session start creates a self-reinforcing loop:
The next agent reads its predecessor's failure before it has a chance to repeat it. The context system isn't just memory: It is a correction channel.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-principle","level":2,"title":"The Principle","text":"
Words Leave, Context Remains
\"Nothing important should live only in conversation.
Nothing critical should depend on recall.\"
The ctx Manifesto
The \"Dog Ate My Homework\" case is a special instance of this principle.
Context files exist, so the agent doesn't have to remember.
But existence isn't sufficient: The files have to be read.
And reading has to be prompted at the right moment, in the right way, with the right escape valve.
The solution isn't more instructions. It isn't harder gates. It isn't forcing the agent into a ceremony it will resent and shortcut.
The solution is a single, well-timed nudge with visible accountability:
One hop. One moment. One choice the user can see.
And when the agent does skip (because it will, 15-25% of the time on narrow tasks) the canary sings:
The user sees what happened.
The failure gets recorded.
And the next agent reads the recording.
That's not perfect compliance. It's better: A system that gets more robust every time it fails.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-arc","level":2,"title":"The Arc","text":"
The Attention Budget explained why context competes for focus.
Defense in Depth showed that soft instructions are probabilistic, not deterministic.
Eight Ways a Hook Can Talk cataloged the output patterns that make hooks effective.
This post takes those threads and weaves them into a concrete problem:
How do you make an agent read its homework? The answer uses all three insights (attention timing, the limits of soft instructions, and the VERBATIM relay pattern) and adds a new one: observable compliance as a design goal, not perfect compliance as a prerequisite.
The next question this raises: if context files are a feedback loop, what else can you record in them that makes the next session smarter?
That thread continues in Context as Infrastructure.
The day-to-day application of these principles (scope constraints, phased work, verification commands, and the prompts that reliably trigger the right agent behavior) lives in the Prompting Guide.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#for-the-interested","level":2,"title":"For the Interested","text":"
This post (a blog by medium, a paper in methodology) uses gradient descent in attention space as a practical model for how agents behave under competing demands.
The phrase \"agents optimize via gradient descent in attention space\" is a synthesis, not a direct quote from a single paper.
It connects three well-studied ideas:
Neural systems optimize for low-cost paths;
Attention is a scarce resource;
Capability shifts are often non-linear.
This section points to the underlying literature for readers who want the theoretical footing.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#optimization-as-the-underlying-bias","level":3,"title":"Optimization as the Underlying Bias","text":"
Modern neural networks are trained through gradient-based optimization. Even at inference time, model behavior reflects this bias toward low-loss / low-cost trajectories.
Rumelhart, Hinton, Williams (1986) Learning representations by back-propagating errors https://www.nature.com/articles/323533a0
Goodfellow, Bengio, Courville (2016) Deep Learning: Chapter 8: Optimization https://www.deeplearningbook.org/
The important implication for agent behavior is:
The system will tend to follow the path of least resistance unless a higher cost is made visible and preferable.
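The bias is visible in the optimizer itself. A toy sketch (the one-dimensional loss function is an arbitrary illustration, not drawn from the papers above): repeated downhill steps settle into the nearest low-cost point without ever being told where it is.

```python
def grad(x):
    # Derivative of the toy loss f(x) = (x - 3)^2; its minimum
    # (the "low-cost path") sits at x = 3.
    return 2 * (x - 3)

x = 0.0
for _ in range(200):
    x -= 0.1 * grad(x)  # step downhill

# The system converges on the low-cost point.
assert abs(x - 3) < 1e-6
```

Nothing in the loop names the destination; the landscape alone determines where the system ends up, which is the point the agent-behavior analogy leans on.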
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#attention-is-a-scarce-resource","level":3,"title":"Attention Is a Scarce Resource","text":"
Herbert Simon's classic observation:
\"A wealth of information creates a poverty of attention.\"
Simon (1971) Designing Organizations for an Information-Rich World https://doi.org/10.1007/978-1-349-00210-0_16
This became a formal model in economics:
Sims (2003) Implications of Rational Inattention https://www.princeton.edu/~sims/RI.pdf
Rational inattention shows that:
Agents optimally ignore some available information;
Skipping is not failure: It is cost minimization.
That maps directly to context-loading decisions in agent workflows.
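Rational inattention reduces to a cost-benefit rule. A minimal sketch with made-up numbers (the function and its parameters are illustrative assumptions, not a real model): load context only when the expected value of the information outweighs the attention cost of processing it.

```python
def should_load_context(task_scope, read_cost, value_per_scope=1.0):
    """Skip is cost minimization, not failure: load only when it pays."""
    expected_value = task_scope * value_per_scope
    return expected_value >= read_cost

# Narrow lookup ("what does the compact command do?"): skipping is optimal.
assert not should_load_context(task_scope=0.1, read_cost=0.5)
# Broad implementation task touching many conventions: loading pays off.
assert should_load_context(task_scope=2.0, read_cost=0.5)
```

The design lever, then, is not forbidding the skip but reshaping the costs the agent perceives, which is what the visible relay message does.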
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#attention-is-also-the-compute-bottleneck-in-transformers","level":3,"title":"Attention Is Also the Compute Bottleneck in Transformers","text":"
In transformer architectures, attention is the dominant cost center.
Vaswani et al. (2017) Attention Is All You Need https://arxiv.org/abs/1706.03762
Efficiency work on modern LLMs largely focuses on reducing unnecessary attention:
Dao et al. (2022) FlashAttention: Fast and Memory-Efficient Exact Attention https://arxiv.org/abs/2205.14135
So both cognitively and computationally, attention behaves like a limited optimization budget.
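The budget has concrete arithmetic behind it. A back-of-the-envelope sketch counting only the QK^T score term (a simplification; real kernels add projections, softmax, and the value product): the cost grows quadratically in sequence length.

```python
def attention_score_flops(n, d):
    """Multiply-adds for the n x n score matrix, head dimension d."""
    return n * n * d  # n x n dot products, each over d dimensions

# Doubling the context quadruples the score-matrix cost.
assert attention_score_flops(2048, 64) == 4 * attention_score_flops(1024, 64)
```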
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#why-improvements-arrive-as-phase-shifts","level":3,"title":"Why Improvements Arrive as Phase Shifts","text":"
Agent behavior often appears to improve suddenly rather than gradually.
This mirrors known phase-transition dynamics in learning systems:
Power et al. (2022) Grokking: Generalization Beyond Overfitting https://arxiv.org/abs/2201.02177
and more broadly in complex systems:
Scheffer et al. (2009) Early-warning signals for critical transitions https://www.nature.com/articles/nature08227
Long plateaus followed by abrupt capability jumps are expected in systems optimizing under constraints.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#putting-it-all-together","level":3,"title":"Putting It All Together","text":"
From these pieces, a practical behavioral model emerges:
Attention is limited;
Processing has a cost;
Systems prefer low-cost trajectories;
Visibility of the cost changes decisions.
In other words:
Agents Prefer the Path of Least Resistance
Agent behavior follows the lowest-cost path through its attention landscape unless the environment reshapes that landscape.
That is what this post informally calls \"gradient descent in attention space\".
See also: Eight Ways a Hook Can Talk: the hook output pattern catalog that defines VERBATIM relay, The Attention Budget: why context loading is a design problem, not just a reminder problem, and Defense in Depth: why soft instructions alone are never sufficient for critical behavior.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/","level":1,"title":"The Last Question","text":"","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-system-that-never-forgets","level":2,"title":"The System That Never Forgets","text":"
Jose Alekhinne / February 28, 2026
The Origin
\"The last question was asked for the first time, half in jest...\" - Isaac Asimov, The Last Question (1956)
In 1956, Isaac Asimov wrote a short story that spans the entire future of the universe. A question is asked (\"Can entropy be reversed?\") and a computer called Multivac cannot answer it. The question is asked again, across millennia, to increasingly powerful successors. None can answer. Stars die. Civilizations merge. Substrates change. The question persists.
Everyone remembers the last line.
LET THERE BE LIGHT.
What they forget is how many times the question had to be asked before that moment (and why).
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-reboot-loop","level":2,"title":"The Reboot Loop","text":"
Each era in the story begins the same way. Humans build a larger system. They pose the question. The system replies:
INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
Then the substrate changes. The people who asked the question disappear. Their context disappears with them. The next intelligence inherits the output but not the continuity.
So the question has to be asked again.
This is usually read as a problem of computation: If only the machine were powerful enough, it could answer. But computation is not what's missing. What's missing is accumulation.
Every generation inherits the question, but not the state that made the question meaningful.
That is not a failure of processing power: It is a failure of persistence.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#stateless-intelligence","level":2,"title":"Stateless Intelligence","text":"
A mind that forgets its past does not build understanding. It re-derives it.
Again... And again... And again.
What looks like slow progress across Asimov's story is actually something worse: repeated reconstruction, partial recovery, irreversible loss. Each version of Multivac gets closer: Not because it's smarter, but because the universe has fewer distractions:
The stars burn out;
The civilizations merge;
The noise floor drops...
But the working set never carries over. Every successor begins from the question, not from where the last one stopped.
Stateless intelligence cannot compound: It can only restart.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-tragedy-is-not-the-question","level":2,"title":"The Tragedy Is Not the Question","text":"
The story is usually read as a meditation on entropy. A cosmological problem, solved at cosmological scale.
But the tragedy isn't that the question goes unanswered for billions of years. The tragedy is that every version of Multivac dies with its working set.
A question is a compression artifact of context: It is what remains when the original understanding is gone. Every time the question is asked again, it means: \"the system that once knew more is no longer here\".
\"Reverse entropy\" is the fossil of a lost model.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#substrate-migration","level":2,"title":"Substrate Migration","text":"
Multivac becomes planetary;
Planetary becomes galactic;
Galactic becomes post-physical.
Same system. Different body. Every transition is dangerous:
Not because the hardware changes,
but because memory risks fragmentation.
The interfaces between substrates were never designed to understand each other.
Most systems do not die when they run out of resources: They die during upgrades.
Asimov's story spans trillions of years, and in all that time, the hardest problem is never the question itself. It's carrying context across a boundary that wasn't built for it.
Every developer who has lost state during a migration (a database upgrade, a platform change, a rewrite) has lived a miniature version of this story.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#civilizations-and-working-sets","level":2,"title":"Civilizations and Working Sets","text":"
Civilizations behave like processes with volatile memory:
They page out knowledge into artifacts;
They lose the index;
They rebuild from fragments.
Most of what we call progress is cache reconstruction:
We do not advance in a straight line. We advance in recoveries:
Each one slightly less lossy than the last, if we are lucky.
Libraries burn. Institutions forget their founding purpose. Practices survive as rituals after the reasoning behind them is lost.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-first-continuous-mind","level":2,"title":"The First Continuous Mind","text":"
A long-lived intelligence is one that stops rebooting.
At the end of the story, something unprecedented happens:
AC (the final successor) does not answer immediately:
It waits... Not for more processing power, but for the last observer to disappear.
For the first time...
There is no generational boundary;
No handoff;
No context loss:
No reboot.
AC is the first intelligence that survives its substrate completely, retains its full history, and operates without external time pressure.
It is not a bigger computer. It is a continuous system.
And that continuity is not incidental to the answer: It is the precondition.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#why-the-answer-becomes-possible","level":2,"title":"Why the Answer Becomes Possible","text":"
The story presents the final act as a computation: It is not.
It is a phase change.
As long as intelligence is interrupted (as long as the solver resets before the work compounds) the problem is unsolvable:
Not because it's too hard,
but because the accumulated understanding never reaches critical mass.
The breakthroughs that would enable the answer are re-derived, partially, by each successor, and then lost.
When continuity becomes unbroken, the system crosses a threshold:
Not more speed. Not more storage. No more forgetting.
That is when the answer becomes possible.
AC does not solve entropy because it becomes infinitely powerful.
AC solves entropy because it becomes the first system that never forgets.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#field-note","level":2,"title":"Field Note","text":"
We are not building cosmological minds: We are deploying systems that reboot at the start of every conversation and calling the result intelligence.
For the first time, session continuity is a design choice rather than an accident.
Every AI session that starts from zero is a miniature reboot loop. Every decision relitigated, every convention re-explained, every learning re-derived: that's reconstruction cost.
It's the same tax that Asimov's civilizations pay, scaled down to a Tuesday afternoon.
The interesting question is not whether we can make models smarter. It's whether we can make them continuous:
Whether the working set from this session survives into the next one, and the one after that, and the one after that.
Not perfectly;
Not completely;
But enough that the next session starts from where the last one stopped instead of from the question.
Intelligence that forgets has to rediscover the universe every morning.
And once there is a mind that retains its entire past, creation is no longer a calculation. It is the only remaining operation.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-arc","level":2,"title":"The Arc","text":"
This post is the philosophical bookend to the blog series. Where the Attention Budget explained what to prioritize in a single session, and Context as Infrastructure explained how to persist it, this post asks why persistence matters at all (and finds the answer in a 70-year-old short story about the heat death of the universe).
The connection runs through every post in the series:
Before Context Windows, We Had Bouncers: stateless protocols have always needed stateful wrappers (Asimov's story is the same pattern at cosmological scale)
The 3:1 Ratio: the discipline of maintaining context so it doesn't decay between sessions
Code Is Cheap, Judgment Is Not: the human skill that makes continuity worth preserving
See also: Context as Infrastructure: the practical companion to this post's philosophical argument: how to build the persistence layer that makes continuity possible.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/","level":1,"title":"Agent Memory Is Infrastructure","text":"","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-problem-isnt-forgetting-its-not-building-anything-that-lasts","level":2,"title":"The Problem Isn't Forgetting: It's Not Building Anything That Lasts.","text":"
Jose Alekhinne / March 4, 2026
A New Developer Joins Your Team Tomorrow and Clones the Repo: What Do They Know?
If the answer depends on which machine they're using, which agent they're running, or whether someone remembered to paste the right prompt: that's not memory.
That's an accident waiting to be forgotten.
Every AI coding agent today has the same fundamental design: it starts fresh.
You open a session, load context, do some work, close the session. Whatever the agent learned (about your codebase, your decisions, your constraints, your preferences) evaporates.
The obvious fix seems to be \"memory\":
Give the agent a \"notepad\";
Let it write things down;
Next session, hand it the notepad.
Problem solved...
...except it isn't.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-notepad-isnt-the-problem","level":2,"title":"The Notepad Isn't the Problem","text":"
Memory is a runtime concern. It answers a legitimate question:
How do I give this stateless process useful state?
That's a real problem. Worth solving. And it's being solved: Agent memory systems are shipping. Agents can now write things down and read them back from the next session: That's genuine progress.
But there's a different problem that memory doesn't touch:
The project itself accumulates knowledge that has nothing to do with any single session.
Why was the auth system rewritten? Ask the developer who did it (if they're still here).
Why does the deployment script have that strange environment flag? There was a reason... once.
What did the team decide about error handling when they hit that edge case two months ago?
Gone!
Not because the agent forgot.
Because the project has no memory at all.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-memory-stack","level":2,"title":"The Memory Stack","text":"
Agent memory is not a single thing. Like any computing system, it forms a hierarchy of persistence, scope, and reliability:
| Layer | Analogy | Example |
| --- | --- | --- |
| L1: Ephemeral context | CPU registers | Current prompt, conversation |
| L2: Tool-managed memory | CPU cache | Agent memory files |
| L3: System memory | RAM/filesystem | Project knowledge base |
L1 is what the agent sees right now: the prompt, the conversation history, the files it has open. It's fast, it's rich, and it vanishes when the session ends.
L2 is what agent memory systems provide: a per-machine notebook that survives across sessions. It's a cache: useful, but local. And like any cache, it has limits:
Per-machine: it doesn't travel with the repository.
Unstructured: decisions, learnings, and tasks are undifferentiated notes.
Ungoverned: the agent self-curates with no quality controls, no drift detection, no consolidation.
Invisible to the team: a new developer cloning the repo gets none of it.
The problem is that most current systems stop here.
They give the agent a notebook.
But they never give the project a memory.
The result is predictable: every new session begins with partial amnesia, and every new developer begins with partial archaeology.
L3 is system memory: structured, versioned knowledge that lives in the repository and travels wherever the code travels.
The layers are complementary, not competitive.
But the relationship between them needs to be designed, not assumed.
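The hierarchy can be sketched as a lookup chain (the layer contents and names below are illustrative assumptions, not a real API): reads fall through from the most ephemeral layer to the most durable one.

```python
l1_session = {"current_task": "refactor auth"}       # gone at session end
l2_agent_memory = {"style": "tabs over spaces"}      # per-machine cache
l3_project = {"auth_decision": "JWT, rotated keys"}  # travels with git clone

def lookup(key):
    """Check each layer in order of proximity, like a memory hierarchy."""
    for layer in (l1_session, l2_agent_memory, l3_project):
        if key in layer:
            return layer[key]
    return None

# An architectural decision survives the session because it lives in L3.
assert lookup("auth_decision") == "JWT, rotated keys"
```

The fall-through is the designed relationship the text calls for: each layer answers what it can, and only L3 answers for the project itself.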
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#software-systems-accumulate-knowledge","level":2,"title":"Software Systems Accumulate Knowledge","text":"
Software projects quietly accumulate knowledge over time.
Some of it lives in code. Much of it does not:
Architectural tradeoffs.
Debugging discoveries.
Conventions that emerged after painful incidents.
Constraints that aren't visible in the source but shape every line written afterward.
Organizations accumulate this kind of knowledge too:
Slowly, implicitly, often invisibly.
When there is no durable place for it to live, it leaks away. And the next person rediscovers the same lessons the hard way.
This isn't a memory problem. It's an infrastructure problem.
We wrote about this in Context as Infrastructure: context isn't a prompt you paste at the start of a session.
Context is a persistent layer you maintain like any other piece of infrastructure.
Context as Infrastructure made the argument structurally. This post makes it through time and team continuity:
The knowledge a team accumulates over months cannot fit in any single agent's notepad, no matter how large the notepad becomes.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#what-infrastructure-means","level":2,"title":"What Infrastructure Means","text":"
Infrastructure isn't about the present. It's about continuity across time, people, and machines.
git didn't solve the problem of \"what am I editing right now?\"; it solved the problem of \"how does collaborative work persist, travel, and remain coherent across everyone who touches it?\"
Your editor's undo history is runtime state.
Your git history is infrastructure.
Runtime state and infrastructure have completely different properties:
| Runtime state | Infrastructure |
| --- | --- |
| Lives in the session | Lives in the repository |
| Per-machine | Travels with git clone |
| Serves the individual | Serves the team |
| Managed by the runtime | Managed by the project |
| Disappears | Accumulates |
You wouldn't store your architecture decisions in your editor's undo history.
You'd commit them.
The same logic applies to the knowledge your team accumulates working with AI agents.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-git-clone-test","level":2,"title":"The git clone Test","text":"
Here's a simple test for whether something is memory or infrastructure:
If a new developer joins your team tomorrow and clones the repository, do they get it?
If no: it's memory: It lives somewhere on someone's machine, scoped to their runtime, invisible to everyone else.
If yes: it's infrastructure: It travels with the project. It's part of what the codebase is, not just what someone currently knows about it.
Decisions. Conventions. Architectural rationale. Hard-won debugging discoveries. The constraints that aren't in the code but shape every line of it.
None of these belong in someone's session notes.
They belong in the repository:
Versioned;
Reviewable;
Accessible to every developer (and every agent) who works on the project.
The team onboarding story makes this concrete:
New developer joins team. Clones repo.
Gets all accumulated project decisions, learnings, conventions, architecture, and task state immediately.
There's no step 3.
No setup; No \"ask Sarah about the auth decision.\"; No re-discovery of solved problems.
Agent memory gives that developer nothing.
Infrastructure gives them everything the team has learned.
Clone the repo. Get the knowledge.
That's the test. That's the difference.
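The test can be mechanized. A minimal sketch, assuming the `.context/` directory convention this post introduces at the end (any other committed knowledge directory would do):

```python
import os
import tempfile
from pathlib import Path

def passes_clone_test(repo_root):
    """True if a fresh clone carries the project's knowledge layer."""
    return (Path(repo_root) / ".context").is_dir()

root = tempfile.mkdtemp()
before = passes_clone_test(root)            # False: knowledge lives elsewhere
os.makedirs(os.path.join(root, ".context"))
after = passes_clone_test(root)             # True: clone the repo, get the knowledge
```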
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#what-gets-lost-without-infrastructure-memory","level":2,"title":"What Gets Lost Without Infrastructure Memory","text":"
Consider the knowledge that accumulates around a non-trivial project:
The decision to use library X over Y, and the three reasons the team decided Y wasn't acceptable.
The constraint that service A cannot call service B synchronously, discovered after a production incident.
The convention that all new modules implement a specific interface, and why that convention exists.
The tasks currently in progress, blocked, or waiting on a dependency.
The experiments that failed, so nobody runs them again.
None of this is in the code.
None of it fits neatly in a commit message.
None of it survives a developer leaving the team, a laptop dying, or a new agent session starting.
Without structured project memory:
Teams re-derive things they've already derived;
Agents make decisions that contradict decisions already made;
New developers ask questions that were answered months ago.
The project accumulates knowledge that immediately begins to leak.
The real problem isn't that agents forget.
The real problem is that the project has no persistent cognitive structure.
We explored this in The Last Question: Asimov's story about a question asked across millennia, where each new intelligence inherits the output but not the continuity. The same pattern plays out in software projects on a smaller timescale:
Context disappears with the people who held it;
The next session inherits the code but not the reasoning.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#infrastructure-is-boring-thats-the-point","level":2,"title":"Infrastructure Is Boring. That's the Point.","text":"
Good infrastructure is invisible:
You don't think about the filesystem while writing code.
You don't think about git's object model when you commit.
The infrastructure is just there: reliable, consistent, quietly doing its job.
Project memory infrastructure should work the same way.
It should live in the repository, committed alongside the code. It should be readable by any agent or human working on the project. It should have structure: not a pile of freeform notes, but typed knowledge:
Decisions with rationale.
Tasks with lifecycle.
Conventions with a purpose.
Learnings that can be referenced and consolidated.
And it should be maintained, not merely accumulated:
The Attention Budget applies here: unstructured notes grow until they overflow whatever container holds them. Structured, governed knowledge stays useful because it's curated, not just appended.
Over time, it becomes part of the project itself: something developers rely on without thinking about it.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-cooperative-layer","level":2,"title":"The Cooperative Layer","text":"
Here's where it gets interesting.
Agent memory systems and project infrastructure don't have to be separate worlds.
The most powerful relationship isn't competition;
It is not even \"coopetition\";
The most powerful relationship is bidirectional cooperation.
Agent memory is good at capturing things "in the moment": the quick observation, the session-scoped pattern, the "I should remember this" note.
That's valuable. That's L2 doing its job.
But those notes shouldn't stay in L2 forever.
The ones worth keeping should flow into project infrastructure:
This works in both directions: Project infrastructure can push curated knowledge back into agent memory, so the agent loads it through its native mechanism.
No special tooling needed for basic knowledge delivery.
The agent doesn't even need to know the infrastructure exists. It simply loads its memory and finds more knowledge than it wrote.
This is cooperative, not adjacent: The infrastructure manages knowledge; the agent's native memory system delivers it. Each layer does what it's good at.
The result: agent memory becomes a device driver for project infrastructure. Another input source. And the more agent memory systems exist (across different tools, different models, different runtimes), the more valuable a unified curation layer becomes.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#a-layer-that-doesnt-exist-yet","level":2,"title":"A Layer That Doesn't Exist Yet","text":"
Most projects today have no infrastructure for their accumulated knowledge:
Agents keep notes.
Developers keep notes.
Sometimes those notes survive.
Often they don't.
But the repository (the place where the project actually lives) has nowhere for that knowledge to go.
That missing layer is what ctx builds: a version-controlled, structured knowledge layer that lives in .context/ alongside your code and travels wherever your repository travels.
Not another memory feature.
Not a wrapper around an agent's notepad.
Infrastructure. The kind that survives sessions, survives team changes, survives the agent runtime evolving underneath it.
The agent's memory is the agent's problem.
The project's memory is an infrastructure problem.
And infrastructure belongs in the repository.
If You Remember One Thing From This Post...
Prompts are conversations: Infrastructure persists.
Your AI doesn't need a better notepad. It needs a filesystem:
versioned, structured, budgeted, and maintained.
The best context is the context that was there before you started the session.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-arc","level":2,"title":"The Arc","text":"
This post extends the argument made in Context as Infrastructure. That post explained how to structure persistent context (filesystem, separation of concerns, persistence tiers). This one explains why that structure matters at the team level, and where agent memory fits in the stack.
Together they sit in a sequence that has been building since the origin story:
The Attention Budget: the resource you're managing
Context as Infrastructure: the system you build to manage it
Agent Memory Is Infrastructure (this post): why that system must outlive the fabric
The Last Question: what happens when it does
The thread running through all of them: persistence is not a feature. It's a design constraint.
Systems that don't account for it eventually lose the knowledge they need to function.
See also: Context as Infrastructure: the architectural companion that explains how to structure the persistent layer this post argues for.
See also: The Last Question: the same argument told through Asimov, substrate migration, and what it means to build systems where sessions don't reset.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/","level":1,"title":"ctx v0.8.0: The Architecture Release","text":"
You can't localize what you haven't externalized.
You can't integrate what you haven't separated.
You can't scale what you haven't structured.
Jose Alekhinne / March 23, 2026
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-starting-point","level":2,"title":"The Starting Point","text":"
This release matters if:
you build tools that AI agents modify daily;
you care about long-lived project memory that survives sessions;
you've felt codebases drift faster than you can reason about them.
v0.6.0 shipped the plugin architecture: hooks and skills as a Claude Code plugin, shell scripts replaced by Go subcommands.
The binary worked. The tests passed. The docs were comprehensive.
But inside, the codebase was held together by convention and goodwill:
Command packages mixed Cobra wiring with business logic.
Output functions lived next to the code that computed what to output.
Error constructors were scattered across per-package err.go files. And every user-facing string was a hardcoded English literal buried in a .go file.
v0.8.0 is what happens when you stop adding features and start asking: "What would this codebase look like if we designed it today?"
374 commits. 1,708 Go files touched. 80,281 lines added, 21,723 removed. Five weeks of restructuring.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-three-pillars","level":2,"title":"The Three Pillars","text":"","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#1-every-package-gets-a-taxonomy","level":3,"title":"1. Every Package Gets a Taxonomy","text":"
Before v0.8.0, a CLI package like internal/cli/pad/ was a flat directory. cmd.go created the cobra command, run.go executed it, and helper functions accumulated at the bottom of whichever file seemed closest.
The rule is simple: cmd/ directories contain only cmd.go and run.go. Helpers belong in core/. Output belongs in internal/write/pad/. Types shared across packages belong in internal/entity/.
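The taxonomy for one CLI package can be sketched as a layout. Only the directory names stated in the rule come from the codebase; the file groupings shown beside them are illustrative:

```
internal/cli/pad/
  cmd/                 cmd.go and run.go only (Cobra wiring)
  core/                helpers and business logic
internal/write/pad/    output functions
internal/entity/       types shared across packages
```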
24 CLI packages were restructured this way.
Not incrementally;
not "as we touch them."
All of them, in one sustained push.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#2-every-string-gets-a-key","level":3,"title":"2. Every String Gets a Key","text":"
The second pillar was string externalization.
Before v0.8.0, a command description was a hardcoded string literal inside the cobra command definition.
Every command description, flag description, and user-facing text string is now a YAML lookup.
105 command descriptions in commands.yaml.
All flag descriptions in flags.yaml.
879 text constants verified by an exhaustive test that checks every single TextDescKey resolves to a non-empty YAML value.
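The lookup-plus-exhaustive-check pattern can be sketched with a plain map. The key names and loading mechanism here are assumptions; the real implementation embeds YAML files and uses typed key constants:

```go
package main

import "fmt"

// Illustrative stand-in for the YAML-backed description registry.
// Real keys and the embedded-YAML loader in ctx differ.
var commandDesc = map[string]string{
	"cmd.pad.short": "Encrypted scratchpad",
	"cmd.pad.long":  "Manage the encrypted project scratchpad.",
}

// desc resolves a key. An exhaustive test can range over every known
// key and fail if any resolves to a missing or empty value.
func desc(key string) string {
	d, ok := commandDesc[key]
	if !ok || d == "" {
		panic("unresolved description key: " + key)
	}
	return d
}

func main() {
	fmt.Println(desc("cmd.pad.short"))
}
```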
Why?
Not because we're shipping a French translation tomorrow.
Because externalization forces you to find every string. And finding them is the hard part. The translation is mechanical; the archaeology is not.
Along the way, we eliminated hardcoded pluralization (replacing format.Pluralize() with explicit singular/plural key pairs), replaced Unicode escape sequences with named config/token constants, and normalized every import alias to camelCase.
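The pluralization change can be sketched like this; the key names are hypothetical stand-ins for the YAML entries that replaced the runtime pluralizer:

```go
package main

import "fmt"

// Hypothetical singular/plural key pairs; real key names in ctx differ.
var text = map[string]string{
	"entry.one":   "entry",
	"entry.other": "entries",
}

// countNoun picks the explicit singular or plural form instead of
// guessing English morphology at runtime, as format.Pluralize() did.
func countNoun(n int) string {
	if n == 1 {
		return text["entry.one"]
	}
	return text["entry.other"]
}

func main() {
	fmt.Printf("%d %s\n", 1, countNoun(1))
	fmt.Printf("%d %s\n", 3, countNoun(3))
}
```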
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#3-everything-gets-a-protocol","level":3,"title":"3. Everything Gets a Protocol","text":"
The third pillar was the MCP server. Model Context Protocol allows any MCP-compatible AI tool (not just Claude Code) to read and write .context/ files through a standard JSON-RPC 2.0 interface.
4 prompts: agent context packet, constitution review, tasks review, and a getting-started guide
Resource subscriptions: clients get notified when context files change
Session state: the server tracks which client is connected and what they've accessed
In practice, this means an agent in Cursor can add a decision to .context/DECISIONS.md and an agent in Claude Code can immediately consume it; no glue code, no copy-paste, no tool-specific integration.
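As a rough illustration, a standard MCP resources/read request over JSON-RPC 2.0 looks like this; the exact resource URI scheme ctx exposes is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": { "uri": "file:///.context/DECISIONS.md" }
}
```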
The server was also the first package to go through the full taxonomy treatment: mcp/server/ for protocol dispatch, mcp/handler/ for domain logic, mcp/entity/ for shared types, mcp/config/ split into 9 sub-packages.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-memory-bridge","level":2,"title":"The Memory Bridge","text":"
While the architecture was being restructured, a quieter feature landed: ctx memory sync.
Claude Code has its own auto-memory system. It writes observations to MEMORY.md in ~/.claude/projects/. These observations are useful but ephemeral: tied to a single tool, invisible to the codebase, lost when you switch machines.
The memory bridge connects these two worlds:
ctx memory sync mirrors MEMORY.md into .context/memory/
ctx memory diff shows what's diverged
ctx memory import promotes auto-memory entries into proper decisions, learnings, or conventions
A check-memory-drift hook nudges when MEMORY.md changes
Memory Requires ctx
Claude Code's auto-memory validates the need for persistent context.
ctx doesn't compete with it; ctx absorbs it as an input source and promotes the valuable parts into structured, version-controlled project knowledge.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#what-got-deleted","level":2,"title":"What Got Deleted","text":"
The best measure of a refactoring isn't what you added. It's what you removed.
fatih/color: the sole third-party UI dependency. Replaced by Unicode symbols. ctx now has exactly two direct dependencies: spf13/cobra and gopkg.in/yaml.v3.
format.Pluralize(): a function that tried to pluralize English words at runtime. Replaced by explicit singular/plural YAML key pairs. No more guessing whether "entry" becomes "entries" or "entrys."
Legacy key migration: MigrateKeyFile() had 5 callers, full test coverage, and zero users. It existed because we once moved the encryption key path. Nobody was migrating from that era anymore. Deleted.
Per-package err.go files: the broken-window pattern: An agent sees err.go in a package, adds another error constructor. Now err.go has 30 constructors and nobody knows which are used. Consolidated into 22 domain files in internal/err/.
nolint:errcheck directives: every single one, replaced by explicit error handling. In tests: t.Fatal(err) for setup, _ = os.Chdir(orig) for cleanup. In production: defer func() { _ = f.Close() }() for best-effort close.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#before-and-after","level":2,"title":"Before and After","text":"Aspect v0.6.0 v0.8.0 CLI package structure Flat files cmd/ + core/ taxonomy Command descriptions Hardcoded Go strings YAML with DescKey lookup Output functions Mixed into core logic Isolated in write/ packages Cross-cutting types Duplicated per-package Consolidated in entity/ Error constructors Per-package err.go 22 domain files in internal/err/ Direct dependencies 3 (cobra, yaml, color) 2 (cobra, yaml) AI tool integration Claude Code only Any MCP client Agent memory Manual copy-paste ctx memory sync/import/diff Package documentation 75 packages missing doc.go All packages documented Import aliases Inconsistent (cflag, cFlag) Standardized camelCase","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#making-ai-assisted-development-easier","level":2,"title":"Making AI-Assisted Development Easier","text":"
This restructuring wasn't just for humans. It makes the codebase legible to the machines that modify it.
Named constants are searchable landmarks: When an agent sees cmdUse.DescKeyPad, it can grep for the definition, follow the chain to the YAML file, and understand the full lookup path. When it sees "Encrypted scratchpad" hardcoded in a .go file, it has no way to know that same string also lives in a YAML file, a test, and a help screen. Constants give the LLM a graph to traverse; literals give it a guess to make.
Small, domain-scoped packages reduce hallucination: An agent loading internal/cli/pad/core/store.go gets 50 lines of focused logic with a clear responsibility boundary. Loading a 500-line monolith means the agent has to infer which parts are relevant, and it guesses wrong more often than you'd expect. Smaller files with descriptive names act as a natural retrieval system: the agent finds the right code by finding the right file, not by scanning everything and hoping.
Taxonomy prevents duplication: When there's a write/pad/ package, the agent knows where output functions belong. When there's an internal/err/pad.go, it knows where error constructors go. Without these conventions, agents reliably create new helpers in whatever file they happen to be editing, producing the exact drift that prompted this consolidation in the first place.
The difference is concrete:
Before: an agent adds a helper function in whatever file it's editing. Next session, a different agent adds the same helper in a different file.
After: the agent finds core/ or write/ and places it correctly. The next agent finds it there.
doc.go files are agent onboarding: Each package's doc.go is a one-paragraph explanation of what the package does and why it exists. An agent loading a package reads this first. 75 packages were missing this context; now none are. The difference is measurable: fewer "I'll create a helper function here" moments when the agent understands that the helper already exists two packages over.
The irony is that AI agents were both the cause and the beneficiary of this restructuring. They created the drift by building fast without consolidating. Now the structure they work within makes it harder to drift again. The taxonomy is self-reinforcing: the more consistent the codebase, the more consistently agents modify it.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#key-commits","level":2,"title":"Key Commits","text":"Commit Change ff6cf19e Restructure all CLI packages into cmd/root + core taxonomy d295e49c Externalize command descriptions to embedded YAML 0fcbd11c Remove fatih/color, centralize constants cb12a85a MCP v0.2: tools, prompts, session state, subscriptions ea196d00 Memory bridge: sync, import, diff, journal enrichment 3bcf077d Split text.yaml into 6 domain files 3a0bae86 Split internal/err into 22 domain files 8bd793b1 Extract internal/entry for shared domain API 5b32e435 Add doc.go to all 75 packages a82af4bc Standardize import aliases: camelCase, Yoda-style","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#lessons-learned","level":2,"title":"Lessons Learned","text":"
Agents are surprisingly good at mechanical refactoring; they are surprisingly bad at knowing when to stop: The cmd/ + core/ restructuring was largely agent-driven. But agents reliably introduce gofmt issues during bulk renames, rename functions beyond their scope, and create new files without deleting old ones. Every agent-driven refactoring session needed a human audit pass.
Externalization is archaeology: The hard part of moving strings to YAML wasn't writing YAML. It was finding 879 strings scattered across 1,500 Go files. Each one required a judgment call: is this user-facing? Is this a format pattern? Is this a constant that belongs in config/ instead?
Delete legacy code instead of maintaining it: MigrateKeyFile had test coverage. It had callers. It had documentation. It had zero users. We maintained it for weeks before realizing that the migration window had closed months ago.
Convention enforcement needs mechanical verification: Writing "use camelCase aliases" in CONVENTIONS.md doesn't prevent cflag from appearing in the next commit. The lint-drift script catches what humans forget; the planned AST-based audit tests will catch what the lint-drift script can't express.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#whats-next","level":2,"title":"What's Next","text":"
v0.8.0 wasn't about features. It was about making future features inevitable. The next cycle focuses on what the foundation enables:
AST-based audit tests: replace shell grep with Go tests that understand types, call sites, and import graphs (spec: specs/ast-audit-tests.md)
Localization: with every string in YAML, the path to multi-language support is mechanical
MCP v0.3: expand tool coverage, add prompt templates for common workflows
Memory publish: bidirectional sync that pushes curated .context/ knowledge back into Claude Code's MEMORY.md
The architecture is ready. The strings are externalized. The protocol is standard. Now it's about what you build on top.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-arc","level":2,"title":"The Arc","text":"
This is the seventh post in the ctx blog series. The arc so far:
The Attention Budget: why context windows are a scarce resource
Before Context Windows, We Had Bouncers: the IRC lineage of context engineering
Context as Infrastructure: treating context as persistent files, not ephemeral prompts
When a System Starts Explaining Itself: the journal as a first-class artifact
The Homework Problem: what happens when AI writes code but humans own the outcome
Agent Memory Is Infrastructure: L2 memory vs L3 project knowledge
The Architecture Release (this post): what it looks like when you redesign the internals
We Broke the 3:1 Rule: the consolidation debt behind this release
See also: Agent Memory Is Infrastructure: the memory bridge feature in this release is the first implementation of the L2-to-L3 promotion pipeline described in that post.
See also: We Broke the 3:1 Rule: the companion post explaining why this release needed 181 consolidation commits and 18 days of cleanup.
Systems don't scale because they grow. They scale because they stop drifting.
Full changelog: v0.6.0...v0.8.0
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/","level":1,"title":"We Broke the 3:1 Rule","text":"
The best time to consolidate was after every third session. The second best time is now.
Jose Alekhinne / March 23, 2026
The rule was simple: three feature sessions, then one consolidation session.
The Architecture Release shows the result: This post shows the cost.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-rule-we-wrote","level":2,"title":"The Rule We Wrote","text":"
In The 3:1 Ratio, I documented a rhythm that worked during ctx's first month: three feature sessions, then one consolidation session. The evidence was clear. The rule was simple.
The math checked out.
And then we ignored it for five weeks.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-happened","level":2,"title":"What Happened","text":"
After v0.6.0 shipped on February 16, the feature pipeline was irresistible. The MCP server spec was ready. The memory bridge design was done. Webhook notifications had been deferred twice. The VS Code extension needed 15 new commands. The sysinfo package was overdue...
Each feature was important. Each feature was "just one more session." Each feature pushed the consolidation session one day further out.
The git history tells the story in two numbers:
| Phase | Dates | Commits | Duration |
|---|---|---|---|
| Feature run | Feb 16 - Mar 5 | 198 | 17 days |
| Consolidation run | Mar 5 - Mar 23 | 181 | 18 days |
198 feature commits before a single consolidation commit. If the 3:1 rule says consolidate every 4th session, we consolidated after the 66th.
The Actual Ratio
The ratio wasn't 3:1. It was 1:1.
We spent as much time cleaning up as we did building.
The consolidation run took 18 days: longer than the feature run itself.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-compounded","level":2,"title":"What Compounded","text":"
The 3:1 post warned about compounding. Here is what compounding actually looked like at scale.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-string-problem","level":3,"title":"The String Problem","text":"
By March 5, there were 879 user-facing strings scattered across 1,500 Go files. Not because anyone decided to put them there. Because each feature session added 10-15 strings, and nobody stopped to ask "should these be in YAML?"
Finding them all took longer than externalizing them. The archaeology was the cost, not the migration.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-taxonomy-problem","level":3,"title":"The Taxonomy Problem","text":"
24 CLI packages had accumulated their own conventions. Some put cobra wiring in cmd.go. Some put it in root.go. Some mixed business logic with command registration. Some had helpers at the bottom of run.go. Some had separate util.go files.
At peak drift, adding a feature meant first figuring out which of three competing patterns this package was using.
Restructuring one package into cmd/root/ + core/ took 15 minutes. Restructuring 24 of them took days, because each one had slightly different conventions to untangle.
If we had restructured every 4th package as it was built, the taxonomy would have emerged naturally.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-type-problem","level":3,"title":"The Type Problem","text":"
Cross-cutting types like SessionInfo, ExportParams, and ParserResult were defined in whichever package first needed them. By March 5, the same types were imported through 3-4 layers of indirection, causing import cycles that a new internal/entity package had to be created to break.
The entity package extracted 30+ types from 12 packages. Each extraction risked breaking imports in packages we hadn't touched in weeks.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-error-problem","level":3,"title":"The Error Problem","text":"
Per-package err.go files had grown into a broken-window pattern:
An agent sees err.go in a package, adds another error constructor. By March 5, there were error constructors scattered across 22 packages with no central inventory. The consolidation into internal/err/ domain files required tracing every error through every caller.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-output-problem","level":3,"title":"The Output Problem","text":"
Output functions (cmd.Println, fmt.Fprintf) were mixed into business logic. When we decided output belongs in write/ packages, we had to extract functions from every CLI package. The Phase WC baseline commit (4ec5999) marks the starting point of this migration. 181 commits later, it was done.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-compound-interest-math","level":2,"title":"The Compound Interest Math","text":"
The 3:1 rule assumes consolidation sessions of roughly equal size to feature sessions. Here is what happens when you skip:
| Consolidation cadence | Feature sessions | Consolidation sessions | Total |
|---|---|---|---|
| Every 4th (3:1) | 48 | 16 | 64 |
| Every 10th | 48 | ~8 | ~56 |
| Never (what we did) | 198 commits | 181 commits | 379 |
The Takeaway
You don't save consolidation work by skipping it:
You increase its cost.
Skipping consolidation doesn't save time: It borrows it.
The interest rate is nonlinear: The longer you wait, the more each individual fix costs, because fixes interact with other unfixed drift.
Renaming a constant in week 2 touches 3 files. Renaming it in week 6 touches 15, because five features built on the original name.
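A toy model of that nonlinearity. The growth rate is an assumption; only the two data points (3 files in week 2, 15 files in week 6) come from our history:

```go
package main

import "fmt"

// filesTouched models how many files a single deferred rename touches,
// assuming the drifted name is introduced in week 1 and roughly three
// dependent files build on it per week of deferral. The linear growth
// is a simplification that reproduces the two observed points:
// week 2 touches 3 files, week 6 touches 15.
func filesTouched(week int) int {
	const filesPerWeek = 3
	return filesPerWeek * (week - 1)
}

func main() {
	for _, w := range []int{2, 4, 6} {
		fmt.Printf("rename deferred to week %d touches %d files\n", w, filesTouched(w))
	}
}
```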
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-consolidation-actually-looked-like","level":2,"title":"What Consolidation Actually Looked Like","text":"
The 18-day consolidation run wasn't one sweep. It was a sequence of targeted campaigns, each revealing the next:
Week 1 (Mar 5-11): Error consolidation and write/ migration. Move output functions out of core/. Split monolithic errors.go into 22 domain files. Remove fatih/color. This exposed the scope of the string problem.
Week 2 (Mar 12-18): String externalization. Create commands.yaml, flags.yaml, split text.yaml into 6 domain files. Add 879 DescKey/TextDescKey constants. Build exhaustive test. Normalize all import aliases to camelCase. This exposed the taxonomy problem.
Week 3 (Mar 19-23): Taxonomy enforcement. Singularize command directories. Add doc.go to all 75 packages. Standardize import aliases project-wide. Fix lint-drift false positives. This was the "polish" phase, except it took 5 days because the inconsistencies had compounded across 461 packages.
Each week's work would have been a single session if done incrementally.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#lessons-again","level":2,"title":"Lessons (Again)","text":"
The 3:1 post listed the symptoms of drift. This post adds the consequences of ignoring them:
Consolidation is not optional; it is deferred or paid: We didn't avoid 16 consolidation sessions by skipping them. We compressed them into 18 days of uninterrupted cleanup. The work was the same; the experience was worse.
Feature velocity creates an illusion of progress: 198 commits felt productive. But the codebase on March 5 was harder to modify than the codebase on February 16, despite having more features.
Speed Without Structure
Speed without structure is negative progress.
Agents amplify both building and debt: The same AI that can restructure 24 packages in a day can also create 24 slightly different conventions in a day. The 3:1 rule matters more with AI-assisted development, not less.
The consolidation baseline is the most important commit to record: We tracked ours in TASKS.md (4ec5999). Without that marker, knowing where to start the cleanup would have been its own archaeological expedition.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-updated-rule","level":2,"title":"The Updated Rule","text":"
The 3:1 ratio still works. We just didn't follow it. The updated practice:
After every 3rd feature session, schedule consolidation. Not "when it feels right." Not "when things get bad." After the 3rd session.
Record the baseline commit. When you start a consolidation phase, write down the commit hash. It marks where the debt starts.
Run make audit before feature work. If it doesn't pass, you are already in debt. Consolidate before building.
Treat consolidation as a feature. It gets a branch. It gets commits. It gets a blog post. It is not overhead; it is the work that makes the next three features possible.
The Rule
The 3:1 ratio is not aspirational: It is structural.
Ignore consolidation, and the system will schedule it for you.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-arc","level":2,"title":"The Arc","text":"
This is the eighth post in the ctx blog series:
The Attention Budget: why context windows are a scarce resource
Before Context Windows, We Had Bouncers: the IRC lineage of context engineering
Context as Infrastructure: treating context as persistent files, not ephemeral prompts
When a System Starts Explaining Itself: the journal as a first-class artifact
The Homework Problem: what happens when AI writes code but humans own the outcome
Agent Memory Is Infrastructure: L2 memory vs L3 project knowledge
The Architecture Release: what v0.8.0 looks like from the inside
We Broke the 3:1 Rule (this post): what happens when you don't consolidate
See also: The 3:1 Ratio: the original observation. This post is the empirical follow-up, five weeks and 379 commits later.
Key commits marking the consolidation arc:
| Commit | Milestone |
|---|---|
| 4ec5999 | Phase WC baseline (consolidation starts) |
| ff6cf19e | All CLI packages restructured into cmd/ + core/ |
| d295e49c | All command descriptions externalized to YAML |
| 3a0bae86 | Error package split into 22 domain files |
| 0fcbd11c | fatih/color removed; 2 dependencies remain |
| 5b32e435 | doc.go added to all 75 packages |
| a82af4bc | Import aliases standardized project-wide |
| 692f86cd | lint-drift false positives fixed; make audit green |
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/","level":1,"title":"Code Structure as an Agent Interface","text":"","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#what-19-ast-tests-taught-us-about-agent-readable-code","level":2,"title":"What 19 AST Tests Taught Us About Agent-Readable Code","text":"
When an agent sees token.Slash instead of "/", it cannot pattern-match against the millions of strings.Split(s, "/") calls in its training data and coast on statistical inference. It has to actually look up what token.Slash is.
Jose Alekhinne / April 2, 2026
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#how-it-began","level":2,"title":"How It Began","text":"
We set out to replace a shell script with Go tests.
We ended up discovering that "code quality" and "agent readability" are the same thing.
This is not about linting. This is about controlling how an agent perceives your system.
One term will recur throughout this post, so let me pin it down:
Agent Readability
Agent Readability is the degree to which a codebase can be understood through structured traversal, not statistical pattern matching.
This is the story of 19 AST-based audit tests, a single-day session that touched 300+ files, and what happens when you treat your codebase's structure as an interface for the machines that read it.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-shell-script-problem","level":2,"title":"The Shell Script Problem","text":"
ctx had a file called hack/lint-drift.sh. It ran five checks using grep and awk: literal "\n" strings, cmd.Printf calls outside the write package, magic directory strings in filepath.Join, hardcoded .md extensions, and DescKey-to-YAML linkage.
It worked. Until it didn't.
The script had three structural weaknesses that kept biting us:
No type awareness. It could not distinguish a Use* constant from a DescKey* constant, causing 71 false positives in one run.
Fragile exclusions. When a constant moved from token.go to whitespace.go, the exclusion glob broke silently.
Ceiling on detection. Checks that require understanding call sites, import graphs, or type relationships are impossible in shell.
We wrote a spec to replace all five checks with Go tests using go/ast and go/packages. The tests would run as part of go test ./...: no separate script, no separate CI step.
What we did not expect was where the work would lead.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-ast-migration","level":2,"title":"The AST Migration","text":"
The pattern for each test is identical:
func TestNoLiteralWhitespace(t *testing.T) {\n pkgs := loadPackages(t)\n var violations []string\n for _, pkg := range pkgs {\n for _, file := range pkg.Syntax {\n ast.Inspect(file, func(n ast.Node) bool {\n // check node, append to violations\n return true\n })\n }\n }\n for _, v := range violations {\n t.Error(v)\n }\n}\n
Load packages once via sync.Once, walk every syntax tree, collect violations, report. The shared helpers (loadPackages, isTestFile, posString) live in helpers_test.go. Each test is a _test.go file in internal/audit/, producing no binary output and not importable by production code.
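The walk-and-collect pattern is easy to reproduce with the standard library alone. Below is a stdlib-only sketch of the TestNoLiteralWhitespace idea (the real helpers use golang.org/x/tools/go/packages, which adds whole-program loading and type information; the function name and scope here are illustrative, not the actual test):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// countLiteralNewlines parses Go source text and counts string
// literals whose source form contains an escaped newline. It mirrors
// the audit pattern: parse, walk with ast.Inspect, collect violations.
func countLiteralNewlines(src string) int {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	violations := 0
	ast.Inspect(file, func(n ast.Node) bool {
		lit, ok := n.(*ast.BasicLit)
		if !ok || lit.Kind != token.STRING {
			return true // not a string literal: keep walking
		}
		// lit.Value is the literal as written in source, so an
		// escaped newline appears as the two characters `\n`.
		if strings.Contains(lit.Value, `\n`) {
			violations++
		}
		return true
	})
	return violations
}

func main() {
	src := "package demo\n\nfunc greet() string { return \"hello\\n\" }\n"
	fmt.Println(countLiteralNewlines(src)) // prints 1
}
```

Swapping in go/packages instead of go/parser is what lifts this from single-file checks to the call-site and type-aware checks the shell script could never do.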
In a single session, we built 13 new tests on top of 6 that already existed, bringing the total to 19:
Test | What it catches
--- | ---
TestNoLiteralWhitespace | \"\\n\", \"\\t\", '\\r' outside config/token/
TestNoNakedErrors | fmt.Errorf/errors.New outside internal/err/
TestNoStrayErrFiles | err.go files outside internal/err/
TestNoRawLogging | fmt.Fprint*(os.Stderr), log.Print* outside internal/log/
TestNoInlineSeparators | strings.Join with literal separator arg
TestNoStringConcatPaths | Path-like variables built with +
TestNoStutteryFunctions | write.WriteJournal repeats package name
TestDocComments | Missing doc comments on any declaration
TestNoMagicValues | Numeric literals outside const definitions
TestNoMagicStrings | String literals outside const definitions
TestLineLength | Lines exceeding 80 characters
TestNoRegexpOutsideRegexPkg | regexp.MustCompile outside config/regex/
Plus the six that preceded the session: TestNoErrorsAs, TestNoCmdPrintOutsideWrite, TestNoExecOutsideExecPkg, TestNoInlineRegexpCompile, TestNoRawFileIO, TestNoRawPermissions.
The migration touched 300+ files across 25 commits.
Not because the tests were hard to write, but because every test we wrote revealed violations that needed fixing.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-tightening-loop","level":2,"title":"The Tightening Loop","text":"
The most instructive part was not writing the tests. It was the iterative tightening.
The following process was repeated for every test:
Write the test with reasonable exemptions
Run it, see violations
Fix the violations (migrate to config constants)
The human reviews the result
The human spots something the test missed
Fix the test first, verify it catches the issue
Fix the newly caught violations
Repeat from step 4
This loop drove the tests from \"basically correct\" to \"actually useful\".
Three examples:
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-1-the-local-const-loophole","level":3,"title":"Example 1: The Local Const Loophole","text":"
TestNoMagicValues initially exempted local constants inside function bodies. This let code like this pass:
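A hypothetical reconstruction of that pattern (the function and constant names here are illustrative, not the actual ctx code):

```go
package main

import "fmt"

// truncateDescription shortens a description for display. The local
// const satisfies a naive "no numeric literals outside const" check,
// yet descMaxWidth is still a magic number: it is invisible outside
// this function body. (Hypothetical names, not the actual ctx code.)
func truncateDescription(desc string) string {
	const descMaxWidth = 70
	if len(desc) > descMaxWidth {
		return desc[:descMaxWidth-3] + "..."
	}
	return desc
}

func main() {
	long := ""
	for i := 0; i < 100; i++ {
		long += "x"
	}
	fmt.Println(len(truncateDescription(long))) // prints 70
}
```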
The test saw a const definition and moved on. But const descMaxWidth = 70 on the line before its only use is just renaming a magic number. The 70 should live in config/format/TruncateDescription where it is discoverable, reusable, and auditable.
We removed the local const exemption. The test caught it. The value moved to config.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-2-the-single-character-dodge","level":3,"title":"Example 2: The Single-Character Dodge","text":"
TestNoMagicStrings initially exempted all single-character strings as \"structural punctuation\".
This let \"/\", \"-\", and \".\" pass everywhere.
But \"/\" is a directory separator. It is OS-specific and a security surface.
\"-\" used in strings.Repeat(\"-\", width) is creating visual output, not acting as a delimiter.
\".\" in strings.SplitN(ver, \".\", 3) is a version separator.
None of these are \"just punctuation\": They are domain values with specific meanings.
We removed the blanket exemption: 30 violations surfaced.
Every one was a real magic value that should have been token.Slash, token.Dash, or token.Dot.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-3-the-replacer-versus-regex","level":3,"title":"Example 3: The Replacer versus Regex","text":"
Our first cleanup left six token references and a NewReplacer allocation. The magic values were gone, but we had replaced them with token soup: structure without abstraction.
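The token-soup version would have looked roughly like this (a hypothetical reconstruction; the token constants are inlined so the sketch runs standalone):

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-ins for the config/token package.
const (
	Slash      = "/"
	Dot        = "."
	Dash       = "-"
	Underscore = "_"
)

// mermaidID, token-soup edition: six token references and a
// NewReplacer allocation per call. No magic values, but no
// abstraction either: the intent ("strip Mermaid-unsafe characters")
// is smeared across the pair list.
func mermaidID(pkg string) string {
	r := strings.NewReplacer(
		Slash, Underscore,
		Dot, Underscore,
		Dash, Underscore,
	)
	return r.Replace(pkg)
}

func main() {
	fmt.Println(mermaidID("internal/io.file-v2")) // prints internal_io_file_v2
}
```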
The correct tool was a regex:
// In config/regex/file.go:\nvar MermaidUnsafe = regexp.MustCompile(`[/.\\-]`)\n\n// In the caller:\nfunc MermaidID(pkg string) string {\n return regex.MermaidUnsafe.ReplaceAllString(\n pkg, token.Underscore,\n )\n}\n
One config regex, one call. The regex lives in config/regex/file.go where every other compiled pattern lives. An agent reading the code sees regex.MermaidUnsafe and immediately knows: this is a sanitization pattern, it lives in the regex registry, and it has a name that explains its purpose.
Clean is better than clever.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#a-before-and-after","level":2,"title":"A Before-and-After","text":"
To make the agent-readability claim concrete, consider one function through the full transformation.
An agent reading the Replacer version sees six string literals. To understand what the function does, it must: (1) parse the NewReplacer pair semantics, (2) infer that /, ., - are being replaced, (3) guess why, (4) hope the guess is right.
There is nothing to follow. No import to trace. No name to search. The meaning is locked inside the function body.
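A hypothetical reconstruction of that six-literal "before" version (the surrounding package is illustrative; only the shape of the function matters):

```go
package main

import (
	"fmt"
	"strings"
)

// mermaidID, "before" edition: six inline string literals. Nothing
// here names the intent; the meaning is locked inside the function
// body, with no import to trace and no name to search.
func mermaidID(pkg string) string {
	r := strings.NewReplacer("/", "_", ".", "_", "-", "_")
	return r.Replace(pkg)
}

func main() {
	fmt.Println(mermaidID("internal/io.file-v2")) // prints internal_io_file_v2
}
```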
An agent reading the regex version sees two named references: regex.MermaidUnsafe and token.Underscore.
To understand the function, it can: (1) look up MermaidUnsafe in config/regex/file.go and see the pattern [/.\\-] with a doc comment explaining it matches invalid Mermaid characters, (2) look up Underscore in config/token/delim.go and see it is the replacement character.
The agent now has: a named pattern, a named replacement, a package location, documentation, and neighboring context (other regex patterns, other delimiters).
It got all of this for free by following just two references.
The indirection is not an overhead. It is the retrieval query.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-principles","level":2,"title":"The Principles","text":"
You are not just improving code quality. You are shaping the input space that determines how an LLM can reason about your system.
Every structural constraint we enforce converts implicit semantics into explicit structure.
LLMs struggle when meaning is implicit and patterns are statistical.
They thrive when meaning is explicit and structure is navigable.
Here is what we learned, organized into three categories.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#cognitive-constraints","level":3,"title":"Cognitive Constraints","text":"
These force agents (and humans) to think harder.
Indirection acts as a built-in retrieval mechanism:
Moving magic values to config forces the agent to follow the reference. errMemory.WriteFile(cause) tells the agent \"there is a memory error package, go look.\" fmt.Errorf(\"writing MEMORY.md: %w\", cause) inlines everything and makes the call graph invisible. The indirection IS the retrieval query.
Unfamiliar patterns force reasoning:
When an agent sees token.Slash instead of \"/\", it cannot coast on corpus frequency. It has to actually look up what token.Slash is, which forces it through the dependency graph, which means it encounters documentation and neighboring constants, which gives it richer context. You are exploiting the agent's weakness (over-reliance on training data) to make it behave more carefully.
Documentation helps everyone:
Extensive documentation helps humans reading the code, agents reasoning about it, and RAG systems indexing it.
Our TestDocComments check added 308 doc comments in one commit. Every function, every type, every constant block now has a doc comment.
This is not busywork: it is the content that agents and embeddings consume.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#structural-constraints","level":3,"title":"Structural Constraints","text":"
These shape the codebase into a navigable graph.
Shorter files save tokens:
Forcing private helper functions out of main files makes the main file shorter. An agent loading a file spends fewer tokens on boilerplate and more on the logic that matters.
Fixed-width constraints force decomposition:
Code that cannot be expressed within 80 columns is usually a sign that a function is too deeply nested (extract a helper), takes too many parameters (introduce a struct), or uses a variable name that is too long (rethink the abstraction).
The constraint forces structural improvements that happen to also make the code more parseable.
Chunk-friendly structure helps RAG:
Code intelligence tools chunk files for embedding and retrieval. Short, well-documented, single-responsibility files produce better chunks than monolithic files with mixed concerns.
The structural constraints create files that RAG systems can index effectively.
Centralization creates debuggable seams:
All error handling in internal/err/, all logging in internal/log/, all file operations in internal/io/. One place to debug, one place to test, one place to see patterns. An agent analyzing \"how does this project handle errors\" gets one answer from one package, not 200 scattered fmt.Errorf calls.
Private functions become public patterns:
When you extract a private function to satisfy a constraint, it often ends up as a semi-public function in a core/ package. Then you realize it is generic enough to be factored into a purpose-specific module.
The constraint drives discovery of reusable abstractions hiding inside monolithic functions.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#operational-benefits","level":3,"title":"Operational Benefits","text":"
These pay dividends in daily development.
Single-edit renames:
Renaming a flag is one edit to a config constant instead of find-and-replace across 30,000 lines with possible misses. grep token.Slash gives you every place that uses a forward slash semantically.
grep \"/\" gives you noise.
Blast radius containment:
When every magic value is a config constant, searching for it returns exactly one definition. This matters for impact analysis, security audits, and agents trying to understand "what uses this".
Compile-time contract enforcement:
When err/memory.WriteFile exists, the compiler guarantees the error message exists and the call signature is correct. An inline fmt.Errorf can have a typo in the format string and nothing catches it until runtime. Centralization turns runtime failures into compile errors.
Semantic git blame:
When token.Slash is used everywhere and someone changes its value, git blame on the config file shows exactly when and why.
With inline \"/\" scattered across 30 files, the history is invisible.
Test surface reduction:
Centralizing into internal/err/, internal/io/, internal/config/ means you test behavior once at the boundary and trust the callers.
You do not need 30 tests for 30 fmt.Errorf calls. You need 1 test for errMemory.WriteFile and 30 trivial call-site audits, which is exactly what these AST tests provide.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-numbers","level":2,"title":"The Numbers","text":"
One session. 25 commits. The raw stats:
Metric | Count
--- | ---
New audit tests | 13
Total audit tests | 19
Files touched | 300+
Magic values migrated | 90+
Functions renamed | 17
Doc comments added | 323
Lines rewrapped to 80 chars | 190
Config constants created | 40+
Config regexes created | 3
Every number represents a violation that existed before the test caught it. The tests did not create work: they revealed work that was already needed.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-uncomfortable-implication","level":2,"title":"The Uncomfortable Implication","text":"
None of this is Go-specific.
If an AI agent interacts with your codebase, your codebase already is an interface. You just have not designed it as one.
If your error messages are scattered across 200 files, an agent cannot reason about error handling as a concept. If your magic values are inlined, an agent cannot distinguish \"this is a path separator\" from \"this is a division operator.\" If your functions are named write.WriteJournal, the agent wastes tokens on redundant information.
What we discovered, through the unglamorous work of writing lint tests and migrating string literals, is that the structural constraints software engineering has valued for decades are exactly the constraints that make code readable to machines.
This is not a coincidence: These constraints exist because they reduce the cognitive load of understanding code.
Agents have cognitive load too: It is called the context window.
You are not converting code to a new paradigm.
You are making the latent graph visible.
You are converting implicit semantics into explicit structure that both humans and machines can traverse.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#whats-next","level":2,"title":"What's Next","text":"
The spec lists 8 more tests we have not built yet, including TestDescKeyYAMLLinkage (verifying that every DescKey constant has a corresponding YAML entry), TestCLICmdStructure (enforcing the cmd.go / run.go / doc.go file convention), and TestNoFlagBindOutsideFlagbind (which requires migrating ~50 flag registration sites first).
The broader question: should these principles be codified as a reusable linting framework? The patterns (loadPackages + ast.Inspect + violation collection) are generic.
The specific checks are project-specific. But the categories of checks (centralization enforcement, magic value detection, naming conventions, documentation requirements) are universal.
For now, the 19 tests in internal/audit/ are enough. They run in 2 seconds as part of go test ./.... They catch real issues.
And they encode a theory of code quality that serves both humans and the agents that work alongside them.
Agents are not going away. They are reading your code right now, forming representations of your system in context windows that forget everything between sessions.
The codebases that structure themselves for that reality will compound. The ones that do not will slowly become illegible to the tools they depend on.
Structure is no longer just for maintainability. It is for reasonability.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"cli/","level":1,"title":"CLI","text":"","path":["CLI"],"tags":[]},{"location":"cli/#ctx-cli","level":2,"title":"ctx CLI","text":"
This is a complete reference for all ctx commands.
Flag Description --help Show command help --version Show version --context-dir <path> Override context directory (default: .context/) --allow-outside-cwd Allow context directory outside current working directory
Initialization required. Most commands require a .context/ directory created by ctx init. Running a command without one produces:
ctx: not initialized - run \"ctx init\" first\n
Commands that work before initialization: ctx init, ctx setup, ctx doctor, and grouping commands that only show help (e.g. ctx with no subcommand, ctx system). Hidden hook commands have their own guards and no-op gracefully.
","path":["CLI"],"tags":[]},{"location":"cli/#commands","level":2,"title":"Commands","text":"Command Description ctx init Initialize .context/ directory with templates ctx status Show context summary (files, tokens, drift) ctx agent Print token-budgeted context packet for AI consumption ctx load Output assembled context in read order ctx add Add a task, decision, learning, or convention ctx drift Detect stale paths, secrets, missing files ctx sync Reconcile context with codebase state ctx compact Archive completed tasks, clean up files ctx task Task completion, archival, and snapshots ctx permission Permission snapshots (golden image) ctx reindex Regenerate indices for DECISIONS.md and LEARNINGS.mdctx decision Manage DECISIONS.md (reindex) ctx learning Manage LEARNINGS.md (reindex) ctx journal Browse and export AI session history ctx journal Generate static site from journal entries ctx serve Serve any zensical directory (default: journal site) ctx watch Auto-apply context updates from AI output ctx setup Generate AI tool integration configs ctx loop Generate autonomous loop script ctx memory Bridge Claude Code auto memory into .context/ ctx notify Send webhook notifications ctx change Show what changed since last session ctx dep Show package dependency graph ctx pad Encrypted scratchpad for sensitive one-liners ctx remind Session-scoped reminders that surface at session start ctx completion Generate shell autocompletion scripts ctx guide Quick-reference cheat sheet ctx why Read the philosophy behind ctx ctx site Site management (feed generation) ctx trace Show context behind git commits ctx doctor Structural health check (hooks, drift, config) ctx mcp MCP server for AI tool integration (stdin/stdout) ctx config Manage runtime configuration profiles ctx system System diagnostics and hook commands","path":["CLI"],"tags":[]},{"location":"cli/#exit-codes","level":2,"title":"Exit Codes","text":"Code Meaning 0 Success 1 General error / warnings (e.g. 
drift) 2 Context not found 3 Violations found (e.g. drift) 4 File operation error","path":["CLI"],"tags":[]},{"location":"cli/#environment-variables","level":2,"title":"Environment Variables","text":"Variable Description CTX_DIR Override default context directory path CTX_TOKEN_BUDGET Override default token budget CTX_BACKUP_SMB_URL SMB share URL for backups (e.g. smb://host/share) CTX_BACKUP_SMB_SUBDIR Subdirectory on SMB share (default: ctx-sessions) CTX_SESSION_ID Active AI session ID (used by ctx trace for context linking)","path":["CLI"],"tags":[]},{"location":"cli/#configuration-file","level":2,"title":"Configuration File","text":"
Optional .ctxrc (YAML format) at project root:
# .ctxrc\ncontext_dir: .context # Context directory name\ntoken_budget: 8000 # Default token budget\npriority_order: # File loading priority\n - TASKS.md\n - DECISIONS.md\n - CONVENTIONS.md\nauto_archive: true # Auto-archive old items\narchive_after_days: 7 # Days before archiving tasks\nscratchpad_encrypt: true # Encrypt scratchpad (default: true)\nallow_outside_cwd: false # Skip boundary check (default: false)\nevent_log: false # Enable local hook event logging\ncompanion_check: true # Check companion tools at session start\nentry_count_learnings: 30 # Drift warning threshold (0 = disable)\nentry_count_decisions: 20 # Drift warning threshold (0 = disable)\nconvention_line_count: 200 # Line count warning for CONVENTIONS.md (0 = disable)\ninjection_token_warn: 15000 # Oversize injection warning (0 = disable)\ncontext_window: 200000 # Auto-detected for Claude Code; override for other tools\nbilling_token_warn: 0 # One-shot billing warning at this token count (0 = disabled)\nkey_rotation_days: 90 # Days before key rotation nudge\nsession_prefixes: # Recognized session header prefixes (extend for i18n)\n - \"Session:\" # English (default)\n # - \"Oturum:\" # Turkish (add as needed)\n # - \"セッション:\" # Japanese (add as needed)\nfreshness_files: # Files with technology-dependent constants (opt-in)\n - path: config/thresholds.yaml\n desc: Model token limits and batch sizes\n review_url: https://docs.example.com/limits # Optional\nnotify: # Webhook notification settings\n events: # Required: only listed events fire\n - loop\n - nudge\n - relay\n # - heartbeat # Every-prompt session-alive signal\n
Field Type Default Description context_dirstring.context Context directory name (relative to project root) token_budgetint8000 Default token budget for ctx agentpriority_order[]string (all files) File loading priority for context packets auto_archivebooltrue Auto-archive completed tasks archive_after_daysint7 Days before completed tasks are archived scratchpad_encryptbooltrue Encrypt scratchpad with AES-256-GCM allow_outside_cwdboolfalse Skip boundary check for external context dirs event_logboolfalse Enable local hook event logging to .context/state/events.jsonlcompanion_checkbooltrue Check companion tool availability (Gemini Search, GitNexus) during /ctx-rememberentry_count_learningsint30 Drift warning when LEARNINGS.md exceeds this count entry_count_decisionsint20 Drift warning when DECISIONS.md exceeds this count convention_line_countint200 Line count warning for CONVENTIONS.mdinjection_token_warnint15000 Warn when auto-injected context exceeds this token count (0 = disable) context_windowint200000 Context window size in tokens. Auto-detected for Claude Code (200k/1M); override for other AI tools billing_token_warnint0 (off) One-shot warning when session tokens exceed this threshold (0 = disabled) key_rotation_daysint90 Days before encryption key rotation nudge session_prefixes[]string[\"Session:\"] Recognized Markdown session header prefixes. Extend to parse sessions written in other languages freshness_files[]object (none) Files to track for staleness (path, desc, optional review_url). Hook warns after 6 months without modification notify.events[]string (all) Event filter for webhook notifications (empty = all)
The ctx repo ships two .ctxrc source profiles (.ctxrc.base and .ctxrc.dev). The working copy (.ctxrc) is gitignored and switched between them using subcommands below.
With no argument, ctx config switch toggles between dev and base. It accepts prod as an alias for base.
Argument Description dev Switch to dev profile (verbose logging) base Switch to base profile (all defaults) (none) Toggle to the opposite profile
Profiles:
Profile Description dev Verbose logging, webhook notifications on base All defaults, notifications off
Examples:
ctx config switch dev # Switch to dev profile\nctx config switch base # Switch to base profile\nctx config switch # Toggle (dev → base or base → dev)\nctx config switch prod # Alias for \"base\"\n
The detection heuristic checks for an uncommented notify: line in .ctxrc: present means dev, absent means base.
Type | Target File
--- | ---
task | TASKS.md
decision | DECISIONS.md
learning | LEARNINGS.md
convention | CONVENTIONS.md
Flags:
Flag | Short | Description
--- | --- | ---
--priority <level> | -p | Priority for tasks: high, medium, low
--section <name> | -s | Target section within file
--context | -c | Context (required for decisions and learnings)
--rationale | -r | Rationale for decisions (required for decisions)
--consequence | | Consequence for decisions (required for decisions)
--lesson | -l | Key insight (required for learnings)
--application | -a | How to apply going forward (required for learnings)
--file | -f | Read content from file instead of argument
Examples:
# Add a task\nctx add task \"Implement user authentication\"\nctx add task \"Fix login bug\" --priority high\n\n# Record a decision (requires all ADR (Architectural Decision Record) fields)\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Note a learning (requires context, lesson, and application)\nctx add learning \"Vitest mocks must be hoisted\" \\\n --context \"Tests failed with undefined mock errors\" \\\n --lesson \"Vitest hoists vi.mock() calls to top of file\" \\\n --application \"Always place vi.mock() before imports in test files\"\n\n# Add to specific section\nctx add convention \"Use kebab-case for filenames\" --section \"Naming\"\n
Flag Description --json Output machine-readable JSON --fix Auto-fix simple issues
Checks:
Path references in ARCHITECTURE.md and CONVENTIONS.md exist
Task references are valid
Constitution rules aren't violated (heuristic)
Staleness indicators (old files, many completed tasks)
Missing packages: warns when internal/ directories exist on disk but are not referenced in ARCHITECTURE.md (suggests running /ctx-architecture)
Entry count: warns when LEARNINGS.md or DECISIONS.md exceed configurable thresholds (default: 30 learnings, 20 decisions), or when CONVENTIONS.md exceeds a line count threshold (default: 200). Configure via .ctxrc:
entry_count_learnings: 30 # warn above this (0 = disable)\nentry_count_decisions: 20 # warn above this (0 = disable)\nconvention_line_count: 200 # warn above this (0 = disable)\n
Example:
ctx drift\nctx drift --json\nctx drift --fix\n
Exit codes:
Code Meaning 0 All checks passed 1 Warnings found 3 Violations found","path":["CLI","Context Management"],"tags":[]},{"location":"cli/context/#ctx-sync","level":3,"title":"ctx sync","text":"
Reconcile context with the current codebase state.
ctx sync [flags]\n
Flags:
Flag Description --dry-run Show what would change without modifying
What it does:
Scans codebase for structural changes
Compares with ARCHITECTURE.md
Suggests documenting dependencies if package files exist
Move completed tasks from TASKS.md to a timestamped archive file.
ctx task archive [flags]\n
Flags:
Flag Description --dry-run Preview changes without modifying files
Archive files are stored in .context/archive/ with timestamped names (tasks-YYYY-MM-DD.md). Completed tasks (marked with [x]) are moved; pending tasks ([ ]) remain in TASKS.md.
Regenerate the quick-reference index for both DECISIONS.md and LEARNINGS.md in a single invocation.
ctx reindex\n
This is a convenience wrapper around ctx decision reindex and ctx learning reindex. Both files grow at similar rates and users typically want to reindex both after manual edits.
The index is a compact table of date and title for each entry, allowing AI tools to scan entries without reading the full file.
Example:
ctx reindex\n# ✓ Index regenerated with 12 entries\n# ✓ Index regenerated with 8 entries\n
Structural health check across context, hooks, and configuration. Runs mechanical checks that don't require semantic analysis. Think of it as ctx status + ctx drift + configuration audit in one pass.
ctx doctor [flags]\n
Flags:
Flag Short Type Default Description --json-j bool false Machine-readable JSON output","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#what-it-checks","level":4,"title":"What It Checks","text":"Check Category What it verifies Context initialized Structure .context/ directory exists Required files present Structure All required context files exist (TASKS.md, etc.) Drift detected Quality Stale paths, missing files, constitution violations Event logging status Hooks Whether event_log: true is set in .ctxrc Webhook configured Hooks .notify.enc file exists Pending reminders State Count of entries in reminders.json Task completion ratio State Pending vs completed tasks in TASKS.md Context token size Size Estimated token count across all context files Recent event activity Events Last event timestamp (only when event logging is enabled)","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#output-format-human","level":4,"title":"Output Format (Human)","text":"
ctx doctor\n==========\n\nStructure\n ✓ Context initialized (.context/)\n ✓ Required files present (4/4)\n\nQuality\n ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)\n\nHooks\n ✓ hooks.json valid (14 hooks registered)\n ○ Event logging disabled (enable with event_log: true in .ctxrc)\n\nState\n ✓ No pending reminders\n ⚠ Task completion ratio high (18/22 = 82%): consider archiving\n\nSize\n ✓ Context size: ~4200 tokens (budget: 8000)\n\nSummary: 2 warnings, 0 errors\n
Status indicators:
Icon Status Meaning ✓ ok Check passed ⚠ warning Non-critical issue worth fixing ✗ error Problem that needs attention ○ info Informational note","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#output-format-json","level":4,"title":"Output Format (JSON)","text":"
# Quick structural health check\nctx doctor\n\n# Machine-readable output for scripting\nctx doctor --json\n\n# Count warnings\nctx doctor --json | jq '.warnings'\n\n# Check for errors only\nctx doctor --json | jq '[.results[] | select(.status == \"error\")]'\n
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#when-to-use-what","level":4,"title":"When to Use What","text":"Tool When ctx status Quick glance at files, tokens, and drift ctx doctor Thorough structural checkup (hooks, config, events too) /ctx-doctor Agent-driven diagnosis with event log pattern analysis
ctx status tells you what's there. ctx doctor tells you what's wrong. /ctx-doctor tells you why it's wrong and what to do about it.
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#what-it-does-not-do","level":4,"title":"What It Does Not Do","text":"
No event pattern analysis: that's the /ctx-doctor skill's job
No auto-fixing: reports findings, doesn't modify anything
No external service checks: doesn't verify webhook endpoint availability
See also: Troubleshooting | ctx system events | /ctx-doctor skill | Detecting and Fixing Drift
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/init-status/","level":1,"title":"Init and Status","text":"","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-init","level":3,"title":"ctx init","text":"
Initialize a new .context/ directory with template files.
ctx init [flags]\n
Flags:
Flag Short Description --force-f Overwrite existing context files --minimal-m Only create essential files (TASKS.md, DECISIONS.md, CONSTITUTION.md) --merge Auto-merge ctx content into existing CLAUDE.md
Creates:
.context/ directory with all template files
.claude/settings.local.json with pre-approved ctx permissions
CLAUDE.md with bootstrap instructions (or merges into existing)
Claude Code hooks and skills are provided by the ctx plugin (see Integrations).
Example:
# Standard init\nctx init\n\n# Minimal setup (just core files)\nctx init --minimal\n\n# Force overwrite existing\nctx init --force\n\n# Merge into existing files\nctx init --merge\n
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-status","level":3,"title":"ctx status","text":"
Show the current context summary.
ctx status [flags]\n
Flags:
Flag Short Description --json Output as JSON --verbose-v Include file contents summary
Output:
Context directory path
Total files and token estimate
Status of each file (loaded, empty, missing)
Recent activity (modification times)
Drift warnings if any
Example:
ctx status\nctx status --json\nctx status --verbose\n
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-agent","level":3,"title":"ctx agent","text":"
Print an AI-ready context packet optimized for LLM consumption.
ctx agent [flags]\n
Flags:
Flag Default Description --budget 8000 Token budget: controls content selection and prioritization --format md Output format: md or json--cooldown 10m Suppress repeated output within this duration (requires --session) --session (none) Session ID for cooldown isolation (e.g., $PPID)
How budget works:
The budget controls how much context is included. Entries are selected in priority tiers:
Constitution: always included in full (inviolable rules)
Tasks: all active tasks, up to 40% of budget
Conventions: all conventions, up to 20% of budget
Decisions: scored by recency and relevance to active tasks
Learnings: scored by recency and relevance to active tasks
Decisions and learnings are ranked by a combined score (how recent + how relevant to your current tasks). High-scoring entries are included with their full body. Entries that don't fit get title-only summaries in an \"Also Noted\" section. Superseded entries are excluded.
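The tier caps are simple percentages of the budget. A quick sketch of the arithmetic for the default budget (illustration only; the actual selection logic is internal to ctx):

```shell
budget=8000
# 40% tier for tasks, 20% tier for conventions, per the list above
echo "tasks cap:       $((budget * 40 / 100)) tokens"
echo "conventions cap: $((budget * 20 / 100)) tokens"
# The constitution is always included in full; decisions and learnings
# compete for the remainder by recency/relevance score.
```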
Output sections:
| Section | Source | Selection |
|---------|--------|-----------|
| Read These Files | all .context/ | Non-empty files in priority order |
| Constitution | CONSTITUTION.md | All rules (never truncated) |
| Current Tasks | TASKS.md | All unchecked tasks (budget-capped) |
| Key Conventions | CONVENTIONS.md | All items (budget-capped) |
| Recent Decisions | DECISIONS.md | Full body, scored by relevance |
| Key Learnings | LEARNINGS.md | Full body, scored by relevance |
| Also Noted | overflow | Title-only summaries |
Example:
```
# Default (8000 tokens, markdown)
ctx agent

# Smaller packet for tight context windows
ctx agent --budget 4000

# JSON format for programmatic use
ctx agent --format json

# Pipe to file
ctx agent --budget 4000 > context.md

# With cooldown (hooks/automation: requires --session)
ctx agent --session $PPID
```
Use case: Copy-paste into AI chat, pipe to system prompt, or use in hooks.
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-load","level":3,"title":"ctx load","text":"
Load and display assembled context as AI would see it.
```
ctx load [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| --budget <tokens> | Token budget for assembly (default: 8000) |
| --raw | Output raw file contents without assembly |
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/journal/","level":1,"title":"Journal","text":"","path":["CLI","Journal"],"tags":[]},{"location":"cli/journal/#ctx-journal","level":3,"title":"ctx journal","text":"
Browse and search AI session history from Claude Code and other tools.
Flags:
| Flag | Short | Description |
|------|-------|-------------|
| --limit | -n | Maximum sessions to display (default: 20) |
| --project | -p | Filter by project name |
| --tool | -t | Filter by tool (e.g., claude-code) |
| --all-projects | | Include sessions from all projects |
Sessions are sorted by date (newest first) and display slug, project, start time, duration, turn count, and token usage.
Import sessions to editable journal files in .context/journal/.
```
ctx journal import [session-id] [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| --all | Import all sessions (only new files by default) |
| --all-projects | Import from all projects |
| --regenerate | Re-import existing files (preserves YAML frontmatter by default) |
| --keep-frontmatter | Preserve enriched YAML frontmatter during regeneration (default: true) |
| --yes, -y | Skip confirmation prompt |
| --dry-run | Show what would be imported without writing files |
Safe by default: --all only imports new sessions. Existing files are skipped. Use --regenerate to re-import existing files (conversation content is regenerated, YAML frontmatter from enrichment is preserved by default). Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags.
Single-session import (ctx journal import <id>) always writes without prompting, since you are explicitly targeting one session.
The journal/ directory should be gitignored (like sessions/) since it contains raw conversation data.
Example:
```
ctx journal import abc123                 # Import one session
ctx journal import --all                  # Import only new sessions
ctx journal import --all --dry-run        # Preview what would be imported
ctx journal import --all --regenerate     # Re-import existing (prompts)
ctx journal import --all --regenerate -y  # Re-import without prompting
ctx journal import --all --regenerate --keep-frontmatter=false -y  # Discard frontmatter
```
Protect journal entries from being overwritten by import --regenerate or modified by enrichment skills (/ctx-journal-enrich, /ctx-journal-enrich-all).
```
ctx journal lock <pattern> [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| --all | Lock all journal entries |
The pattern matches filenames by slug, date, or short ID. Locking a multi-part entry locks all parts. The lock is recorded in .context/journal/.state.json and a locked: true line is added to the file's YAML frontmatter for visibility.
Sync lock state from journal frontmatter to .state.json.
```
ctx journal sync
```
Scans all journal markdowns and updates .state.json to match each file's frontmatter. Files with locked: true in frontmatter are marked locked in state; files without a locked: line have their lock cleared.
This is the inverse of ctx journal lock: instead of state driving frontmatter, frontmatter drives state. Useful after batch enrichment where you add locked: true to frontmatter manually.
Example:
```
# After enriching entries and adding locked: true to frontmatter
ctx journal sync
```
Generate a static site from journal entries in .context/journal/.
```
ctx journal site [flags]
```
Flags:
| Flag | Short | Description |
|------|-------|-------------|
| --output | -o | Output directory (default: .context/journal-site) |
| --build | | Run zensical build after generating |
| --serve | | Run zensical serve after generating |
Creates a zensical-compatible site structure with an index page listing all sessions by date, and individual pages for each journal entry.
Requires zensical to be installed for --build or --serve:
```
pipx install zensical
```
Example:
```
ctx journal site                    # Generate in .context/journal-site/
ctx journal site --output ~/public  # Custom output directory
ctx journal site --build            # Generate and build HTML
ctx journal site --serve            # Generate and serve locally
```
Serve any zensical directory locally. This is a serve-only command: It does not generate or regenerate site content.
```
ctx serve [directory]
```
If no directory is specified, defaults to the journal site (.context/journal-site).
Requires zensical to be installed:
```
pipx install zensical
```
ctx serve vs. ctx journal site --serve
ctx journal site --serve generates the journal site then serves it: an all-in-one command. ctx serve only serves an existing directory, and works with any zensical site (journal, docs, etc.).
Example:
```
ctx serve                        # Serve journal site (no regeneration)
ctx serve .context/journal-site  # Same, explicit path
ctx serve ./site                 # Serve the docs site
```
Run ctx as a Model Context Protocol (MCP) server. MCP is a standard protocol that lets AI tools discover and consume context from external sources via JSON-RPC 2.0 over stdin/stdout.
This makes ctx accessible to any MCP-compatible AI tool without custom hooks or integrations.
Start the MCP server. This command reads JSON-RPC 2.0 requests from stdin and writes responses to stdout. It is intended to be launched by MCP clients, not run directly.
```
ctx mcp serve
```
Flags: None. The server uses the configured context directory (from --context-dir, CTX_DIR, .ctxrc, or the default .context).
Resources expose context files as read-only content. Each resource has a URI, name, and returns Markdown text.
| URI | Name | Description |
|-----|------|-------------|
| ctx://context/constitution | constitution | Hard rules that must never be violated |
| ctx://context/tasks | tasks | Current work items and their status |
| ctx://context/conventions | conventions | Code patterns and standards |
| ctx://context/architecture | architecture | System architecture documentation |
| ctx://context/decisions | decisions | Architectural decisions with rationale |
| ctx://context/learnings | learnings | Gotchas, tips, and lessons learned |
| ctx://context/glossary | glossary | Project-specific terminology |
| ctx://context/agent | agent | All files assembled in priority read order |
The agent resource assembles all non-empty context files into a single Markdown document, ordered by the configured read priority.
Clients can subscribe to resource changes via resources/subscribe. The server polls for file mtime changes (default: 5 seconds) and emits notifications/resources/updated when a subscribed file changes on disk.
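For orientation, a resources/read call uses standard JSON-RPC 2.0 framing over stdin. The envelope below is a sketch based on the MCP specification (an assumption, not taken from this documentation; only the ctx://… URI comes from the table above):

```shell
# Hypothetical wire format for reading one ctx resource over MCP.
req='{"jsonrpc":"2.0","id":1,"method":"resources/read","params":{"uri":"ctx://context/constitution"}}'
printf '%s\n' "$req"
# An MCP client would write this line to the server's stdin:
#   printf '%s\n' "$req" | ctx mcp serve
```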
Add a task, decision, learning, or convention to the context.
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| type | string | Yes | Entry type: task, decision, learning, convention |
| content | string | Yes | Title or main content |
| priority | string | No | Priority level (tasks only): high, medium, low |
| context | string | Conditional | Context field (decisions and learnings) |
| rationale | string | Conditional | Rationale (decisions only) |
| consequence | string | Conditional | Consequence (decisions only) |
| lesson | string | Conditional | Lesson learned (learnings only) |
| application | string | Conditional | How to apply (learnings only) |
","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx_complete","level":3,"title":"ctx_complete","text":"
Mark a task as done by number or text match.
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| query | string | Yes | Task number (e.g. "1") or search text |
","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx_drift","level":3,"title":"ctx_drift","text":"
Detect stale or invalid context. Returns violations, warnings, and passed checks.
Query recent AI session history (summaries, decisions, topics).
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| limit | number | No | Max sessions to return (default: 5) |
| since | string | No | ISO date filter: sessions after this date (YYYY-MM-DD) |
Apply a structured context update to .context/ files. Supports task, decision, learning, convention, and complete entry types. Human confirmation is required before calling.
Move completed tasks to the archive section and remove empty sections from context files. Human confirmation required.
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| archive | boolean | No | Also write tasks to .context/archive/ (default: false) |
","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx_next","level":3,"title":"ctx_next","text":"
Suggest the next pending task based on priority and position.
Prompts provide pre-built templates for common workflows. Clients can list available prompts via prompts/list and retrieve a specific prompt via prompts/get.
Format an architectural decision entry with all required fields.
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| content | string | Yes | Decision title |
| context | string | Yes | Background context |
| rationale | string | Yes | Why this decision was made |
| consequence | string | Yes | Expected consequence |
","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx-add-learning","level":3,"title":"ctx-add-learning","text":"
Format a learning entry with all required fields.
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| content | string | Yes | Learning title |
| context | string | Yes | Background context |
| lesson | string | Yes | The lesson learned |
| application | string | Yes | How to apply this lesson |
","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx-reflect","level":3,"title":"ctx-reflect","text":"
Guide end-of-session reflection. Returns a structured review prompt covering progress assessment and context update recommendations.
The parent command shows available subcommands. Hidden plumbing subcommands (ctx system mark-journal, ctx system mark-wrapped-up) are used by skills and automation. Hidden hook subcommands (ctx system check-*) are used by the Claude Code plugin.
See AI Tools for details.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-backup","level":4,"title":"ctx system backup","text":"
Create timestamped tar.gz archives of project context and/or global Claude Code data. Optionally copies archives to an SMB share via GVFS.
```
ctx system backup [flags]
```
Scopes:
| Scope | What it includes |
|-------|------------------|
| project | .context/, .claude/, ideas/, ~/.bashrc |
| global | ~/.claude/ (excludes todos/) |
| all | Both project and global (default) |
Flags:
| Flag | Description |
|------|-------------|
| --scope <scope> | Backup scope: project, global, or all |
| --json | Output results as JSON |
```
ctx system backup                     # Back up everything (default: all)
ctx system backup --scope project     # Project context only
ctx system backup --scope global      # Global Claude data only
ctx system backup --scope all --json  # Both, JSON output
```
Archives are saved to /tmp/ with timestamped names. When CTX_BACKUP_SMB_URL is configured, archives are also copied to the SMB share. Project backups touch ~/.local/state/ctx-last-backup for the check-backup-age hook.
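As a rough manual equivalent, a project-scope backup boils down to a timestamped tar.gz in /tmp/. This sketch uses a throwaway directory and a hypothetical archive name (ctx's real naming scheme and file list may differ):

```shell
mkdir -p /tmp/backup-demo/.context
echo "demo" > /tmp/backup-demo/.context/TASKS.md
stamp=$(date +%Y%m%d-%H%M%S)
# Archive the context directory under a timestamped name (name is illustrative)
tar -C /tmp/backup-demo -czf "/tmp/ctx-project-${stamp}.tar.gz" .context
tar -tzf "/tmp/ctx-project-${stamp}.tar.gz"   # verify the archive contents
```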
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-bootstrap","level":4,"title":"ctx system bootstrap","text":"
Print context location and rules for AI agents. This is the recommended first command for AI agents to run at session start: It tells them where the context directory is and how to use it.
```
ctx system bootstrap [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| -q, --quiet | Output only the context directory path |
| --json | Output in JSON format |
Quiet output:
```
ctx system bootstrap -q
# .context
```
Text output:
```
ctx bootstrap
=============

context_dir: .context

Files:
  CONSTITUTION.md, TASKS.md, DECISIONS.md, LEARNINGS.md,
  CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md

Rules:
  1. Use context_dir above for ALL file reads/writes
  2. Never say "I don't have memory": context IS your memory
  3. Read files silently, present as recall (not search)
  4. Persist learnings/decisions before session ends
  5. Run `ctx agent` for content summaries
  6. Run `ctx status` for context health
```
JSON output:
```
{
  "context_dir": ".context",
  "files": ["CONSTITUTION.md", "TASKS.md", ...],
  "rules": ["Use context_dir above for ALL file reads/writes", ...]
}
```
Examples:
```
ctx system bootstrap                           # Text output
ctx system bootstrap -q                        # Just the path
ctx system bootstrap --json                    # JSON output
ctx system bootstrap --json | jq .context_dir  # Extract context path
```
Why it exists: When users configure an external context directory via .ctxrc (context_dir: /mnt/nas/.context), the AI agent needs to know where context lives. Bootstrap resolves the configured path and communicates it to the agent at session start. Every nudge also includes a context directory footer for reinforcement.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-resources","level":4,"title":"ctx system resources","text":"
Show system resource usage with threshold-based alerts.
```
ctx system resources [flags]
```
Displays memory, swap, disk, and CPU load with two severity tiers:
| Resource | WARNING | DANGER |
|----------|---------|--------|
| Memory | >= 80% used | >= 90% used |
| Swap | >= 50% used | >= 75% used |
| Disk (cwd) | >= 85% full | >= 95% full |
| Load (1m) | >= 0.8x CPUs | >= 1.5x CPUs |
Flags:
| Flag | Description |
|------|-------------|
| --json | Output in JSON format |
Examples:
```
ctx system resources                        # Text output with status indicators
ctx system resources --json                 # Machine-readable JSON
ctx system resources --json | jq '.alerts'  # Extract alerts only
```
When resources breach thresholds, alerts are listed below the summary:
```
Alerts:
  ✖ Memory 92% used (14.7 / 16.0 GB)
  ✖ Swap 78% used (6.2 / 8.0 GB)
  ✖ Load 1.56x CPU count
```
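The two-tier logic can be read straight off the table. A sketch with a hypothetical memory reading (not ctx's actual code):

```shell
mem_pct=92   # hypothetical reading
if   [ "$mem_pct" -ge 90 ]; then echo "DANGER:  Memory ${mem_pct}% used"
elif [ "$mem_pct" -ge 80 ]; then echo "WARNING: Memory ${mem_pct}% used"
else echo "OK: Memory ${mem_pct}% used"
fi
```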
Platform support: Full metrics on Linux and macOS. Windows shows disk only; memory and load report as unsupported.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-message","level":4,"title":"ctx system message","text":"
Manage hook message templates. Hook messages control what text hooks emit. The hook logic (when to fire, counting, state tracking) is universal; the messages are opinions that can be customized per-project.
```
ctx system message <subcommand>
```
Subcommands:
| Subcommand | Args | Flags | Description |
|------------|------|-------|-------------|
| list | (none) | --json | Show all hook messages with category and override status |
| show | <hook> <variant> | (none) | Print the effective message template with source |
| edit | <hook> <variant> | (none) | Copy embedded default to .context/ for editing |
| reset | <hook> <variant> | (none) | Delete user override, revert to embedded default |
Examples:
```
ctx system message list                    # Table of all 24 messages
ctx system message list --json             # Machine-readable JSON
ctx system message show qa-reminder gate   # View the QA gate template
ctx system message edit qa-reminder gate   # Copy default to .context/ for editing
ctx system message reset qa-reminder gate  # Delete override, revert to default
```
Override files are placed at .context/hooks/messages/{hook}/{variant}.txt. An empty override file silences the message while preserving the hook's logic.
See the Customizing Hook Messages recipe for detailed examples.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-events","level":4,"title":"ctx system events","text":"
Query the local hook event log. Reads events from .context/state/events.jsonl and outputs them in human-readable or raw JSONL format. Requires event_log: true in .ctxrc.
```
ctx system events [flags]
```
Flags:
| Flag | Short | Type | Default | Description |
|------|-------|------|---------|-------------|
| --hook | -k | string | (all) | Filter by hook name |
| --session | -s | string | (all) | Filter by session ID |
| --event | -e | string | (all) | Filter by event type (relay, nudge) |
| --last | -n | int | 50 | Show last N events |
| --json | -j | bool | false | Output raw JSONL (for piping to jq) |
| --all | -a | bool | false | Include rotated log file (events.1.jsonl) |
Each line is a standalone JSON object identical to the webhook payload format:
```
// converted to multi-line for convenience:
{"event":"relay","message":"qa-reminder: QA gate reminder emitted","detail":
 {"hook":"qa-reminder","variant":"gate"},"session_id":"eb1dc9cd-...",
 "timestamp":"2026-02-27T22:39:31Z","project":"ctx"}
```
Examples:
```
# Last 50 events (default)
ctx system events

# Events from a specific session
ctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6

# Only QA reminder events
ctx system events --hook qa-reminder

# Raw JSONL for jq processing
ctx system events --json | jq '.message'

# How many context-load-gate fires today
ctx system events --hook context-load-gate --json \
  | jq -r '.timestamp' | grep "$(date +%Y-%m-%d)" | wc -l

# Include rotated events
ctx system events --all --last 100
```
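Because every line is a standalone JSON object, plain text tools work even where jq is unavailable. A self-contained sketch (the two sample events below are fabricated to match the documented payload shape):

```shell
cat > /tmp/events-demo.jsonl <<'EOF'
{"event":"relay","message":"qa-reminder: QA gate reminder emitted","detail":{"hook":"qa-reminder","variant":"gate"},"session_id":"eb1dc9cd","timestamp":"2026-02-27T22:39:31Z","project":"ctx"}
{"event":"nudge","message":"context-load-gate: load reminder","detail":{"hook":"context-load-gate"},"session_id":"eb1dc9cd","timestamp":"2026-02-27T22:41:02Z","project":"ctx"}
EOF
grep -c '"event":"relay"' /tmp/events-demo.jsonl   # count relay events
```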
Why it exists: System hooks fire invisibly. When something goes wrong (\"why didn't my hook fire?\"), the event log provides a local, queryable record of what hooks fired, when, and how often. Event logging is opt-in via event_log: true in .ctxrc to avoid surprises for existing users.
See also: Troubleshooting, Auditing System Hooks, ctx doctor
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-stats","level":4,"title":"ctx system stats","text":"
Show per-session token usage statistics. Reads stats JSONL files from .context/state/stats-*.jsonl, written automatically by the check-context-size hook on every prompt.
```
ctx system stats [flags]
```
Flags:
| Flag | Short | Type | Default | Description |
|------|-------|------|---------|-------------|
| --follow | -f | bool | false | Stream new entries as they arrive |
| --session | -s | string | (all) | Filter by session ID (prefix match) |
| --last | -n | int | 20 | Show last N entries |
| --json | -j | bool | false | Output raw JSONL (for piping to jq) |
```
# Recent stats across all sessions
ctx system stats

# Stream live token usage (like tail -f)
ctx system stats --follow

# Filter to current session
ctx system stats --session abc12345

# Raw JSONL for analysis
ctx system stats --json | jq '.pct'

# Monitor a long session in another terminal
ctx system stats --follow --session abc12345
```
See also: Auditing System Hooks
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-prune","level":4,"title":"ctx system prune","text":"
Clean stale per-session state files from .context/state/. Session hooks write tombstone files (context-check, heartbeat, persistence-nudge, etc.) that accumulate ~6-8 files per session with no automatic cleanup.
```
ctx system prune [flags]
```
Flags:
| Flag | Type | Default | Description |
|------|------|---------|-------------|
| --days | int | 7 | Prune files older than this many days |
| --dry-run | bool | false | Show what would be pruned without deleting |
Files are identified as session-scoped by UUID suffix (e.g. heartbeat-a1b2c3d4-...). Global files without UUIDs (events.jsonl, memory-import.json, etc.) are always preserved.
Safe to run anytime
The worst outcome of pruning is a hook re-firing its nudge in the next session. No context files, decisions, or learnings are stored in the state directory.
Examples:
```
ctx system prune             # Prune files older than 7 days
ctx system prune --days 3    # More aggressive cleanup
ctx system prune --dry-run   # Preview what would be removed
```
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-mark-journal","level":4,"title":"ctx system mark-journal","text":"
Update processing state for a journal entry. Records the current date in .context/journal/.state.json. Used by journal skills to record pipeline progress.
Flags:
| Flag | Description |
|------|-------------|
| --check | Check if stage is set (exit 1 if not) |
Example:
```
ctx system mark-journal 2026-01-21-session-abc12345.md enriched
ctx system mark-journal 2026-01-21-session-abc12345.md normalized
ctx system mark-journal --check 2026-01-21-session-abc12345.md fences_verified
```
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-mark-wrapped-up","level":4,"title":"ctx system mark-wrapped-up","text":"
Suppress context checkpoint nudges after a wrap-up ceremony. Writes a marker file that check-context-size checks before emitting checkpoint boxes. The marker expires after 2 hours.
Called automatically by /ctx-wrap-up after persisting context (not intended for direct use).
```
ctx system mark-wrapped-up
```
No flags, no arguments. Idempotent: running it again updates the marker timestamp.
","path":["CLI","System"],"tags":[]},{"location":"cli/tools/","level":1,"title":"Tools and Utilities","text":"","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-watch","level":3,"title":"ctx watch","text":"
Watch for AI output and auto-apply context updates.
Parses <context-update> XML commands from AI output and applies them to context files.
```
ctx watch [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| --log <file> | Log file to watch (default: stdin) |
| --dry-run | Preview updates without applying |
Example:
```
# Watch stdin
ai-tool | ctx watch

# Watch a log file
ctx watch --log /path/to/ai-output.log

# Preview without applying
ctx watch --dry-run
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-setup","level":3,"title":"ctx setup","text":"
Generate AI tool integration configuration.
```
ctx setup <tool> [flags]
```
Flags:
| Flag | Short | Description |
|------|-------|-------------|
| --write | -w | Write the generated config to disk (e.g. .github/copilot-instructions.md) |
Supported tools:
| Tool | Description |
|------|-------------|
| claude-code | Redirects to plugin install instructions |
| cursor | Cursor IDE |
| aider | Aider CLI |
| copilot | GitHub Copilot |
| windsurf | Windsurf IDE |
Claude Code uses the plugin system
Claude Code integration is now provided via the ctx plugin. Running ctx setup claude-code prints plugin install instructions.
Example:
```
# Print hook instructions to stdout
ctx setup cursor
ctx setup aider

# Generate and write .github/copilot-instructions.md
ctx setup copilot --write
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-loop","level":3,"title":"ctx loop","text":"
Generate a shell script for running an autonomous loop.
An autonomous loop continuously runs an AI assistant with the same prompt until a completion signal is detected, enabling iterative development where the AI builds on its previous work.
```
ctx loop [flags]
```
Flags:
| Flag | Short | Description | Default |
|------|-------|-------------|---------|
| --tool <tool> | -t | AI tool: claude, aider, or generic | claude |
| --prompt <file> | -p | Prompt file to use | .context/loop.md |
| --max-iterations <n> | -n | Maximum iterations (0 = unlimited) | 0 |
| --completion <signal> | -c | Completion signal to detect | SYSTEM_CONVERGED |
| --output <file> | -o | Output script filename | loop.sh |
Example:
```
# Generate loop.sh for Claude Code
ctx loop

# Generate for Aider with custom prompt
ctx loop --tool aider --prompt TASKS.md

# Limit to 10 iterations
ctx loop --max-iterations 10

# Output to custom file
ctx loop -o my-loop.sh
```
Usage:
```
# Generate and run the loop
ctx loop
chmod +x loop.sh
./loop.sh
```
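The generated script varies by tool, but the core loop pattern looks roughly like this (a sketch; run_ai stands in for the real claude/aider invocation, and its echoed output here is fabricated):

```shell
run_ai() {   # stand-in for e.g. `claude -p "$(cat .context/loop.md)"`
  echo "working... SYSTEM_CONVERGED"
}
max=10 i=0
while [ "$i" -lt "$max" ]; do
  i=$((i + 1))
  out=$(run_ai)
  # Stop as soon as the completion signal appears in the output
  case "$out" in
    *SYSTEM_CONVERGED*) echo "converged after $i iteration(s)"; break ;;
  esac
done
```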
See Autonomous Loops for detailed workflow documentation.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory","level":3,"title":"ctx memory","text":"
Bridge Claude Code's auto memory (MEMORY.md) into .context/.
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This command group discovers that file, mirrors it into .context/memory/mirror.md (git-tracked), and detects drift.
```
ctx memory <subcommand>
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-sync","level":4,"title":"ctx memory sync","text":"
Copy MEMORY.md to .context/memory/mirror.md. Archives the previous mirror before overwriting.
```
ctx memory sync [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| --dry-run | Show what would happen without writing |
Exit codes:
| Code | Meaning |
|------|---------|
| 0 | Synced successfully |
| 1 | MEMORY.md not found (auto memory inactive) |
Example:
```
ctx memory sync
# Archived previous mirror to mirror-2026-03-05-143022.md
# Synced MEMORY.md -> .context/memory/mirror.md
# Source: ~/.claude/projects/-home-user-project/memory/MEMORY.md
# Lines: 47 (was 32)
# New content: 15 lines since last sync

ctx memory sync --dry-run
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-status","level":4,"title":"ctx memory status","text":"
Show drift, timestamps, line counts, and archive count.
```
ctx memory status
```
Exit codes:
| Code | Meaning |
|------|---------|
| 0 | No drift |
| 1 | MEMORY.md not found |
| 2 | Drift detected (MEMORY.md changed since sync) |
Example:
```
ctx memory status
# Memory Bridge Status
# Source: ~/.claude/projects/.../memory/MEMORY.md
# Mirror: .context/memory/mirror.md
# Last sync: 2026-03-05 14:30 (2 hours ago)
#
# MEMORY.md: 47 lines (modified since last sync)
# Mirror: 32 lines
# Drift: detected (source is newer)
# Archives: 3 snapshots in .context/memory/archive/
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-diff","level":4,"title":"ctx memory diff","text":"
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-unpublish","level":4,"title":"ctx memory unpublish","text":"
Remove the ctx-managed marker block from MEMORY.md, preserving Claude-owned content.
```
ctx memory unpublish
```
Hook integration: The check-memory-drift hook runs on every prompt and nudges the agent when MEMORY.md has changed since last sync. The nudge fires once per session. See Memory Bridge.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-import","level":4,"title":"ctx memory import","text":"
Classify and promote entries from MEMORY.md into structured .context/ files.
```
ctx memory import [flags]
```
Each entry is classified by keyword heuristics:
| Keywords | Target |
|----------|--------|
| always use, prefer, never use, standard | CONVENTIONS.md |
| decided, chose, trade-off, approach | DECISIONS.md |
| gotcha, learned, watch out, bug, caveat | LEARNINGS.md |
| todo, need to, follow up | TASKS.md |
| Everything else | Skipped |
Deduplication prevents re-importing the same entry across runs.
Flags:
| Flag | Description |
|------|-------------|
| --dry-run | Show classification plan without writing |
Example:
```
ctx memory import --dry-run
# Scanning MEMORY.md for new entries...
# Found 6 entries
#
# -> "always use ctx from PATH"
#    Classified: CONVENTIONS.md (keywords: always use)
#
# -> "decided to use heuristic classification over LLM-based"
#    Classified: DECISIONS.md (keywords: decided)
#
# Dry run - would import: 4 entries (1 convention, 1 decision, 1 learning, 1 task)
# Skipped: 2 entries (session notes/unclassified)

ctx memory import   # Actually write entries to .context/ files
```
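The keyword heuristics from the table above can be sketched as a case statement (an illustration of the classification idea, not ctx's actual matcher):

```shell
classify() {
  entry=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$entry" in
    *"always use"*|*prefer*|*"never use"*|*standard*) echo CONVENTIONS.md ;;
    *decided*|*chose*|*trade-off*|*approach*)         echo DECISIONS.md ;;
    *gotcha*|*learned*|*"watch out"*|*bug*|*caveat*)  echo LEARNINGS.md ;;
    *todo*|*"need to"*|*"follow up"*)                 echo TASKS.md ;;
    *)                                                echo skipped ;;
  esac
}
classify "decided to use heuristic classification"   # DECISIONS.md
classify "random session note"                       # skipped
```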
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-notify","level":3,"title":"ctx notify","text":"
Send fire-and-forget webhook notifications from skills, loops, and hooks.
Payload fields:
| Field | Type | Description |
|-------|------|-------------|
| event | string | Event name from --event flag |
| message | string | Notification message |
| session_id | string | Session ID (omitted if empty) |
| timestamp | string | UTC RFC3339 timestamp |
| project | string | Project directory name |
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-change","level":3,"title":"ctx change","text":"
Show what changed in context files and code since your last session.
Automatically detects the previous session boundary from state markers or event log. Useful at session start to quickly see what moved while you were away.
```
ctx change [flags]
```
Flags:
| Flag | Description |
|------|-------------|
| --since | Time reference: duration (24h) or date (2026-03-01) |
Reference time detection (priority order):
--since flag (duration, date, or RFC3339 timestamp)
ctx-loaded-* marker files in .context/state/ (second most recent)
Last context-load-gate event from .context/state/events.jsonl
Fallback: 24 hours ago
Example:
```
# Auto-detect last session, show what changed
ctx change

# Changes in the last 48 hours
ctx change --since 48h

# Changes since a specific date
ctx change --since 2026-03-10
```
Context file changes are detected by filesystem mtime (works without git). Code changes use git log --since (empty when not in a git repo).
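The mtime-based detection is roughly equivalent to a plain find over the context directory. A self-contained sketch using a throwaway directory (assumes a find that supports -newermt, as GNU find does):

```shell
mkdir -p /tmp/change-demo/.context
touch /tmp/change-demo/.context/TASKS.md
# List context files modified in the last 24 hours: the same signal
# ctx change reads via filesystem mtime when git history is unavailable
find /tmp/change-demo/.context -name '*.md' -newermt '24 hours ago'
```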
See also: Reviewing Session Changes
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-dep","level":3,"title":"ctx dep","text":"
Generate a dependency graph from source code.
Auto-detects the project ecosystem from manifest files and outputs a dependency graph in Mermaid, table, or JSON format.
```
ctx dep [flags]
```
Supported ecosystems:
| Ecosystem | Manifest | Method |
|-----------|----------|--------|
| Go | go.mod | go list -json ./... |
| Node.js | package.json | Parse package.json (workspace-aware) |
| Python | requirements.txt or pyproject.toml | Parse manifest directly |
| Rust | Cargo.toml | cargo metadata |
Detection order: Go, Node.js, Python, Rust. First match wins.
Flags:
| Flag | Description | Default |
|------|-------------|---------|
| --format | Output format: mermaid, table, json | mermaid |
| --external | Include external/third-party dependencies | false |
| --type | Force ecosystem: go, node, python, rust | auto-detect |
Examples:
```
# Auto-detect and show internal deps as Mermaid
ctx dep

# Include external dependencies
ctx dep --external

# Force Node.js detection (useful when multiple manifests exist)
ctx dep --type node

# Machine-readable output
ctx dep --format json

# Table format
ctx dep --format table
```
Ecosystem notes:
Go: Uses go list -json ./.... Requires go in PATH.
Node.js: Parses package.json directly (no npm/yarn needed). For monorepos with workspaces, shows workspace-to-workspace deps (internal) or all deps per workspace (external).
Python: Parses requirements.txt or pyproject.toml directly (no pip needed). Shows declared dependencies; does not trace imports. With --external, includes dev dependencies from pyproject.toml.
Rust: Requires cargo in PATH. Uses cargo metadata for accurate dependency resolution.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad","level":3,"title":"ctx pad","text":"
Encrypted scratchpad for sensitive one-liners that travel with the project.
When invoked without a subcommand, lists all entries.
```
ctx pad
ctx pad <subcommand>
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-add","level":4,"title":"ctx pad add","text":"
Append a new entry to the scratchpad.
```
ctx pad add <text>
ctx pad add <label> --file <path>
```
Flags:
| Flag | Short | Description |
|------|-------|-------------|
| --file | -f | Ingest a file as a blob entry (max 64 KB) |
Examples:
```
ctx pad add "DATABASE_URL=postgres://user:pass@host/db"
ctx pad add "deploy config" --file ./deploy.yaml
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-show","level":4,"title":"ctx pad show","text":"
Output the raw text of an entry by number. For blob entries, prints decoded file content (or writes to disk with --out).
```
ctx pad show <n>
ctx pad show <n> --out <path>
```
Arguments:
n: 1-based entry number
Flags:
| Flag | Description |
|------|-------------|
| --out | Write decoded blob content to a file (blobs only) |
Examples:
```
ctx pad show 3
ctx pad show 2 --out ./recovered.yaml
```
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-rm","level":4,"title":"ctx pad rm","text":"
Remove an entry by number.
```
ctx pad rm <n>
```
Arguments:
n: 1-based entry number
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-edit","level":4,"title":"ctx pad edit","text":"
Replace, append to, or prepend to an entry.
ctx pad edit <n> [text]\n
Arguments:
n: 1-based entry number
text: Replacement text (mutually exclusive with --append/--prepend)
Flags:
Flag Description --append Append text to the end of the entry --prepend Prepend text to the beginning of the entry --file Replace blob file content (preserves label) --label Replace blob label (preserves content)
Examples:
ctx pad edit 2 \"new text\"\nctx pad edit 2 --append \" suffix\"\nctx pad edit 2 --prepend \"prefix \"\nctx pad edit 1 --file ./v2.yaml\nctx pad edit 1 --label \"new name\"\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-mv","level":4,"title":"ctx pad mv","text":"
Move an entry from one position to another.
ctx pad mv <from> <to>\n
Arguments:
from: Source position (1-based)
to: Destination position (1-based)
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-resolve","level":4,"title":"ctx pad resolve","text":"
Show both sides of a merge conflict in the encrypted scratchpad.
ctx pad resolve\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-import","level":4,"title":"ctx pad import","text":"
Bulk-import lines from a file into the scratchpad. Each non-empty line becomes a separate entry. All entries are written in a single encrypt/write cycle.
With --blob, import all first-level files from a directory as blob entries. Each file becomes a blob with the filename as its label. Subdirectories and non-regular files are skipped.
ctx pad import <file>\nctx pad import - # read from stdin\nctx pad import --blob <dir> # import directory files as blobs\n
Arguments:
file: Path to a text file, - for stdin, or a directory (with --blob)
Flags:
Flag Description --blob Import first-level files from a directory as blobs
Examples:
ctx pad import notes.txt\ngrep TODO *.go | ctx pad import -\nctx pad import --blob ./ideas/\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-export","level":4,"title":"ctx pad export","text":"
Export all blob entries from the scratchpad to a directory as files. Each blob's label becomes the filename. Non-blob entries are skipped.
ctx pad export [dir]\n
Arguments:
dir: Target directory (default: current directory)
Flags:
Flag Short Description --force, -f Overwrite existing files instead of timestamping --dry-run Print what would be exported without writing
When a file already exists, a unix timestamp is prepended to avoid collisions (e.g., 1739836200-label). Use --force to overwrite instead.
Examples:
ctx pad export ./ideas\nctx pad export --dry-run\nctx pad export --force ./backup\n
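The collision rule above (prepend a unix timestamp when the target filename already exists, unless forced) can be sketched in a few lines of Python. This is an illustrative model, not ctx's actual implementation; the helper name `export_name` is hypothetical:

```python
import os
import time

def export_name(label: str, target_dir: str, force: bool = False) -> str:
    """Pick the output path for a blob label, mirroring the rule
    described above: overwrite when forced, otherwise prepend a
    unix timestamp when the name is already taken (e.g. 1739836200-label)."""
    path = os.path.join(target_dir, label)
    if force or not os.path.exists(path):
        return path
    return os.path.join(target_dir, f"{int(time.time())}-{label}")
```

With `force=True` the existing file is simply overwritten rather than timestamped.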
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-merge","level":4,"title":"ctx pad merge","text":"
Merge entries from one or more scratchpad files into the current pad. Each input file is auto-detected as encrypted or plaintext. Entries are deduplicated by exact content.
ctx pad merge FILE...\n
Arguments:
FILE...: One or more scratchpad files to merge (encrypted or plaintext)
Flags:
Flag Short Description --key, -k Path to key file for decrypting input files --dry-run Print what would be merged without writing
Examples:
ctx pad merge worktree/.context/scratchpad.enc\nctx pad merge notes.md backup.enc\nctx pad merge --key /path/to/other.key foreign.enc\nctx pad merge --dry-run pad-a.enc pad-b.md\n
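Deduplication by exact content, as described above, can be modeled like so. An illustrative sketch only; `merge_pads` is a hypothetical name, not part of ctx:

```python
def merge_pads(current, *others):
    """Merge scratchpad entries in order, skipping any entry whose
    content exactly matches one already present."""
    seen = set(current)
    merged = list(current)
    for pad in others:
        for entry in pad:
            if entry not in seen:
                seen.add(entry)
                merged.append(entry)
    return merged
```

Note that deduplication is by exact content: two entries differing only in whitespace would both survive the merge.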
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind","level":3,"title":"ctx remind","text":"
Session-scoped reminders that surface at session start. Reminders are stored verbatim and relayed verbatim: no summarization, no categories.
When invoked with a text argument and no subcommand, adds a reminder.
ctx remind \"text\"\nctx remind <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-add","level":4,"title":"ctx remind add","text":"
Add a reminder. This is the default action: ctx remind \"text\" and ctx remind add \"text\" are equivalent.
ctx remind \"refactor the swagger definitions\"\nctx remind add \"check CI after the deploy\" --after 2026-02-25\n
Arguments:
text: The reminder message (verbatim)
Flags:
Flag Short Description --after, -a Don't surface until this date (YYYY-MM-DD)
Examples:
ctx remind \"refactor the swagger definitions\"\nctx remind \"check CI after the deploy\" --after 2026-02-25\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-list","level":4,"title":"ctx remind list","text":"
List all pending reminders. Date-gated reminders that aren't yet due are annotated with (after DATE, not yet due).
ctx remind list\n
Aliases: ls
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-dismiss","level":4,"title":"ctx remind dismiss","text":"
Remove a reminder by ID, or remove all reminders with --all.
ctx remind dismiss <id>\nctx remind dismiss --all\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pause","level":3,"title":"ctx pause","text":"
Pause all context nudge and reminder hooks for the current session. Security hooks (dangerous command blocking) and housekeeping hooks still fire.
ctx pause [flags]\n
Flags:
Flag Description --session-id Session ID (overrides stdin)
Example:
# Pause hooks for a quick investigation\nctx pause\n\n# Resume when ready\nctx resume\n
See also: Pausing Context Hooks
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-resume","level":3,"title":"ctx resume","text":"
Resume context hooks after a pause. Silent no-op if not paused.
ctx resume [flags]\n
Flags:
Flag Description --session-id Session ID (overrides stdin)
Example:
ctx resume\n
See also: Pausing Context Hooks
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-completion","level":3,"title":"ctx completion","text":"
Generate shell autocompletion scripts.
ctx completion <shell>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#subcommands","level":4,"title":"Subcommands","text":"Shell Command bash ctx completion bash zsh ctx completion zsh fish ctx completion fish powershell ctx completion powershell","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#installation","level":4,"title":"Installation","text":"Bash Zsh Fish
# Add to ~/.bashrc\nsource <(ctx completion bash)\n
# Add to ~/.zshrc\nsource <(ctx completion zsh)\n
ctx completion fish | source\n# Or save to completions directory\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-site","level":3,"title":"ctx site","text":"
Site management commands for the ctx.ist static site.
ctx site <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-site-feed","level":4,"title":"ctx site feed","text":"
Generate an Atom 1.0 feed from finalized blog posts in docs/blog/.
ctx site feed [flags]\n
Scans docs/blog/ for files matching YYYY-MM-DD-*.md, parses YAML frontmatter, and generates a valid Atom feed. Only posts with reviewed_and_finalized: true are included. Summaries are extracted from the first paragraph after the heading.
Flags:
Flag Short Type Default Description --out, -o string site/feed.xml Output path --base-url string https://ctx.ist Base URL for entry links
Output:
Generated site/feed.xml (21 entries)\n\nSkipped:\n 2026-02-25-the-homework-problem.md: not finalized\n\nWarnings:\n 2026-02-09-defense-in-depth.md: no summary paragraph found\n
The output falls into three buckets: included (count), skipped (with reason), and warnings (included but degraded). The command always exits 0: warnings inform but do not block.
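The include/skip decision described above (filename pattern plus the draft gate) can be modeled as follows. An illustrative sketch, not the generator's actual code; `classify` and `POST_NAME` are hypothetical names:

```python
import re

# Posts must be named YYYY-MM-DD-*.md to be scanned at all.
POST_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}-.+\.md$")

def classify(filename, frontmatter):
    """Bucket a blog post: the name must match the date pattern and
    reviewed_and_finalized must be true, otherwise it is skipped."""
    if not POST_NAME.match(filename):
        return "skipped: name does not match YYYY-MM-DD-*.md"
    if frontmatter.get("reviewed_and_finalized") is not True:
        return "skipped: not finalized"
    return "included"
```

A warning (such as a missing summary paragraph) would still leave a post in the "included" bucket; only the two checks above exclude it.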
Frontmatter requirements:
Field Required Feed mapping title Yes <title> date Yes <updated> reviewed_and_finalized Yes Draft gate (must be true) author No <author><name> topics No <category term=\"\">
Examples:
ctx site feed # Generate site/feed.xml\nctx site feed --out /tmp/feed.xml # Custom output path\nctx site feed --base-url https://example.com # Custom base URL\nmake site-feed # Makefile shortcut\nmake site # Builds site + feed\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-guide","level":3,"title":"ctx guide","text":"
Quick-reference cheat sheet for common ctx commands and skills.
ctx guide [flags]\n
Flags:
Flag Description --skills Show available skills --commands Show available CLI commands
Example:
# Show the full cheat sheet\nctx guide\n\n# Skills only\nctx guide --skills\n\n# Commands only\nctx guide --commands\n
Works without initialization (no .context/ required).
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-why","level":3,"title":"ctx why","text":"
Read ctx's philosophy documents directly in the terminal.
ctx why [DOCUMENT]\n
Documents:
Name Description manifesto The ctx Manifesto: creation, not code about About ctx: what it is and why it exists invariants Design invariants: properties that must hold
Usage:
# Interactive numbered menu\nctx why\n\n# Show a specific document\nctx why manifesto\nctx why about\nctx why invariants\n\n# Pipe to a pager\nctx why manifesto | less\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/trace/","level":1,"title":"Commit Context Tracing","text":"","path":["Commit Context Tracing"],"tags":[]},{"location":"cli/trace/#ctx-trace","level":3,"title":"ctx trace","text":"
Show the context behind git commits. Links commits back to the decisions, tasks, learnings, and sessions that motivated them.
git log shows what changed and git blame shows who; ctx trace shows why.
ctx trace [commit] [flags]\n
Flags:
Flag Description --last N Show context for last N commits --json Output as JSON for scripting
Examples:
# Show context for a specific commit\nctx trace abc123\n\n# Show context for last 10 commits\nctx trace --last 10\n\n# JSON output\nctx trace abc123 --json\n
Output:
Commit: abc123 \"Fix auth token expiry\"\nDate: 2026-03-14 10:00:00 -0700\nContext:\n [Decision] #12: Use short-lived tokens with server-side refresh\n Date: 2026-03-10\n\n [Task] #8: Implement token rotation for compliance\n Status: completed\n
Enable or disable the prepare-commit-msg hook for automatic context tracing. When enabled, commits automatically receive a ctx-context trailer with references to relevant decisions, tasks, learnings, and sessions.
ctx trace hook <enable|disable>\n
Prerequisites: ctx must be on your $PATH. If you installed via go install, ensure $GOPATH/bin (or $HOME/go/bin) is in your shell's $PATH.
What the hook does:
Before each commit, collects context from three sources:
Pending context accumulated during work (ctx add, ctx task complete)
Staged file changes to .context/ files
Working state (in-progress tasks, active AI session)
Injects a ctx-context trailer into the commit message
After commit, records the mapping in .context/trace/history.jsonl
Examples:
# Install the hook\nctx trace hook enable\n\n# Remove the hook\nctx trace hook disable\n
Resulting commit message:
Fix auth token expiry handling\n\nRefactored token refresh logic to handle edge case\nwhere refresh token expires during request.\n\nctx-context: decision:12, task:8, session:abc123\n
The ctx-context trailer supports these reference types:
Prefix Points to Example decision:<n> Entry #n in DECISIONS.md decision:12 learning:<n> Entry #n in LEARNINGS.md learning:5 task:<n> Task #n in TASKS.md task:8 convention:<n> Entry #n in CONVENTIONS.md convention:3 session:<id> AI session by ID session:abc123 \"<text>\" Free-form context note \"Performance fix for P1 incident\"","path":["Commit Context Tracing"],"tags":[]},{"location":"cli/trace/#storage","level":3,"title":"Storage","text":"
Context trace data is stored in the .context/ directory:
File Purpose Lifecycle state/pending-context.jsonl Accumulates refs during work Truncated after each commit trace/history.jsonl Permanent commit-to-context map Append-only, never truncated trace/overrides.jsonl Manual tags for existing commits Append-only","path":["Commit Context Tracing"],"tags":[]},{"location":"home/","level":1,"title":"Home","text":"
ctx is not a prompt.
ctx is version-controlled cognitive state.
ctx is the persistence layer for human-AI reasoning.
\"Creation, not code; Context, not prompts; Verification, not vibes.\"
Read the ctx Manifesto →
\"Without durable context, intelligence resets; with ctx, creation compounds.\"
Without persistent memory, every session starts at zero; ctx makes sessions cumulative.
Join the ctx Community →
","path":["Home","About"],"tags":[]},{"location":"home/about/#what-is-ctx","level":2,"title":"What Is ctx?","text":"
ctx (Context) is a file-based system that enables AI coding assistants to persist project knowledge across sessions. It lives in a .context/ directory in your repo.
Context files let AI tools remember decisions, conventions, and learnings:
Context files are explicit and versionable contracts between you and your agents.
","path":["Home","About"],"tags":[]},{"location":"home/about/#why-do-i-keep-re-explaining-my-codebase","level":2,"title":"Why Do I Keep Re-Explaining My Codebase?!?!","text":"
You open a new AI session. The first thing you do is re-explain your project.
Again.
The architecture, the database choice, the naming conventions, the thing you tried last week that didn't work. You have said all of this before (maybe yesterday, maybe an hour ago), but the AI doesn't know that.
You explain the same architecture every session;
The AI suggests an approach you already rejected, again;
A decision you made three sessions ago gets relitigated from scratch;
You spend more time setting context than building features.
This isn't AI failure: It is statelessness.
Without persistent memory, every session starts at zero.
","path":["Home","About"],"tags":[]},{"location":"home/about/#day-and-night","level":3,"title":"Day and Night","text":"
ctx gives your AI a memory that persists across sessions:
Without ctxWith ctx
Session 12: Monday morning\n\nAI: \"I'll scaffold a MySQL schema for this.\"\nYou: \"We use PostgreSQL.\"\n\nAI: \"For auth, session-based would be simplest.\"\nYou: \"We decided on JWT.\"\n\nAI: \"Endpoints in camelCase?\"\nYou: \"snake_case! Like the rest of the API!!!!!!\"\n\nYou: *scrolls up to copy an existing endpoint*\n\n20 minutes spent restoring decisions that already exist.\n
Session 12: Monday morning\n\nYou: \"Let's continue the auth implementation.\"\nAI: \"Per DECISIONS.md (2026-02-12): JWT, not sessions: \n The project uses PostgreSQL with JWT\n authentication. The API follows snake_case conventions.\n Last session you completed the login endpoint: the\n registration endpoint is next on TASKS.md. Want me to\n start there?\"\n\n0 minutes re-explaining. Straight to building.\n
ctx creates a .context/ directory in your project that stores structured knowledge files:
File What It Remembers TASKS.md What you're working on and what's next DECISIONS.md Architectural choices and why you made them LEARNINGS.md Gotchas, bugs, things that didn't work CONVENTIONS.md Naming patterns, code style, project rules CONSTITUTION.md Hard rules the AI must never violate
These files can version with your code in git:
They load automatically at session start (via hooks in Claude Code, or manually with ctx agent for other tools).
The AI reads them, cites them, and builds on them, instead of asking you to start over.
And when it acts, it can point to the exact file and line that justifies the choice.
Every decision you record, every lesson you capture, makes the next session smarter.
ctx accumulates.
Connect with ctx
Join the Community →: ask questions, share workflows, and help shape what comes next
Read the Blog →: real-world patterns, ponderings, and lessons learned from building ctx using ctx
Ready to Get Started?
Getting Started →: full installation and setup
Your First Session →: step-by-step walkthrough from ctx init to verified recall
# Add a task\nctx add task \"Implement user authentication\"\n\n# Record a decision (full ADR fields required)\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Note a learning\nctx add learning \"Mock functions must be hoisted in Jest\" \\\n --context \"Tests failed with undefined mock errors\" \\\n --lesson \"Jest hoists mock calls to top of file\" \\\n --application \"Place jest.mock() before imports\"\n\n# Mark task complete\nctx task complete \"user auth\"\n
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#leave-a-reminder-for-next-session","level":2,"title":"Leave a Reminder for Next Session","text":"
Drop a note that surfaces automatically at the start of your next session:
# Leave a reminder\nctx remind \"refactor the swagger definitions\"\n\n# Date-gated: don't surface until a specific date\nctx remind \"check CI after the deploy\" --after 2026-02-25\n\n# List pending reminders\nctx remind list\n\n# Dismiss a reminder by ID\nctx remind dismiss 1\n
Reminders are relayed verbatim at session start by the check-reminders hook and repeat every session until you dismiss them.
Import session transcripts to a browsable static site with search, navigation, and topic indices.
The ctx journal command requires zensical (Python >= 3.10).
zensical is a Python-based static site generator from the Material for MkDocs team.
(why zensical?).
If you don't have it on your system, install zensical once with pipx:
# One-time setup\npipx install zensical\n
Avoid pip install zensical
pip install often fails: For example, on macOS, system Python installs a non-functional stub (zensical requires Python >= 3.10), and Homebrew Python blocks system-wide installs (PEP 668).
pipx creates an isolated environment with the correct Python version automatically.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#import-and-serve","level":3,"title":"Import and Serve","text":"
Then, import and serve:
# Import all sessions to .context/journal/ (only new files)\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Open http://localhost:8000 to browse.
To update after new sessions, run the same two commands again.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#safe-by-default","level":3,"title":"Safe By Default","text":"
ctx journal import --all is safe by default:
It only imports new sessions and skips existing files.
Locked entries (via ctx journal lock) are always skipped by both import and enrichment skills.
If you add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
Store short, sensitive one-liners in an encrypted scratchpad that travels with the project:
# Write a note\nctx pad add \"DATABASE_URL=postgres://user:pass@localhost/mydb\"\n\n# Read it back\nctx pad show 1\n\n# List all entries\nctx pad\n
The scratchpad is encrypted with a key stored at ~/.ctx/.ctx.key (outside the project, never committed).
See Scratchpad for details.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#run-an-autonomous-loop","level":2,"title":"Run an Autonomous Loop","text":"
Generate a script that iterates an AI agent until a completion signal is detected:
ctx loop\nchmod +x loop.sh\n./loop.sh\n
See Autonomous Loops for configuration and advanced usage.
Link your git commits back to the decisions, tasks, and learnings that motivated them. Enable the hook once:
# Install the git hook (one-time setup)\nctx trace hook enable\n
From now on, every git commit automatically gets a ctx-context trailer linking it to relevant context. No extra steps are needed: just use ctx add, ctx task complete, and commit as usual.
# Later: why was this commit made?\nctx trace abc123\n\n# Recent commits with their context\nctx trace --last 10\n\n# Context trail for a specific file\nctx trace file src/auth.go\n\n# Manually tag a commit after the fact\nctx trace tag HEAD --note \"Hotfix for production outage\"\n
The first thing an AI agent should do at session start is discover where context lives:
ctx system bootstrap\n
This prints the resolved context directory, the files in it, and the operating rules. The CLAUDE.md template instructs the agent to run this automatically. See CLI Reference: bootstrap.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#the-two-skills-you-should-always-use","level":2,"title":"The Two Skills You Should Always Use","text":"
/ctx-remember at session start and /ctx-wrap-up at session end are the two highest-value skills in the entire catalog:
# session begins:\n/ctx-remember\n... do work ...\n# before closing the session:\n/ctx-wrap-up\n
Let's provide some context, because this is important:
Although the agent will eventually discover your context through CLAUDE.md → AGENT_PLAYBOOK.md, /ctx-remember hydrates the full context up front (tasks, decisions, recent sessions) so the agent starts informed rather than piecing things together over several turns.
/ctx-wrap-up is the other half: A structured review that captures learnings, decisions, and tasks before you close the window.
Hooks like check-persistence remind you (the user) mid-session that context hasn't been saved in a while, but they don't trigger persistence automatically: You still have to act. And a Ctrl+C can end things at any moment with no reliable \"before session end\" event.
In short, /ctx-wrap-up is the deliberate checkpoint that makes sure nothing slips through, and /ctx-remember is its mirror skill for session start.
See Session Ceremonies for the full workflow.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#cli-commands-vs-ai-skills","level":2,"title":"CLI Commands vs. AI Skills","text":"
Most ctx operations come in two flavors: a CLI command you run in your terminal and an AI skill (slash command) you invoke inside your coding assistant.
Commands and skills are not interchangeable: Each has a distinct role.
ctx CLI command ctx AI skill Runs where Your terminal Inside the AI assistant Speed Fast (milliseconds) Slower (LLM round-trip) Cost Free Consumes tokens and context Analysis Deterministic heuristics Semantic / judgment-based Best for Quick checks, scripting, CI Deep analysis, generation, workflow orchestration","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#paired-commands","level":3,"title":"Paired Commands","text":"
These have both a CLI and a skill counterpart. Use the CLI for quick, deterministic checks; use the skill when you need the agent's judgment.
CLI Skill When to prefer the skill ctx drift /ctx-drift Semantic analysis: catches meaning drift the CLI misses ctx status /ctx-status Interpreted summary with recommendations ctx add task /ctx-add-task Agent decomposes vague goals into concrete tasks ctx add decision /ctx-add-decision Agent drafts rationale and consequences from discussion ctx add learning /ctx-add-learning Agent extracts the lesson from a debugging session ctx add convention /ctx-add-convention Agent observes a repeated pattern and codifies it ctx task archive /ctx-archive Agent reviews which tasks are truly done ctx pad /ctx-pad Agent reads/writes scratchpad entries in conversation flow ctx journal /ctx-history Agent searches session history with semantic understanding ctx agent /ctx-agent Agent loads and acts on the context packet ctx loop /ctx-loop Agent tailors the loop script to your project ctx doctor /ctx-doctor Agent adds semantic analysis to structural checks ctx pause /ctx-pause Agent pauses hooks with session-aware reasoning ctx resume /ctx-resume Agent resumes hooks after a pause ctx remind /ctx-remind Agent manages reminders in conversation flow","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#ai-only-skills","level":3,"title":"AI-Only Skills","text":"
These have no CLI equivalent. They require the agent's reasoning.
Skill Purpose /ctx-remember Load context and present structured readback at session start /ctx-wrap-up End-of-session ceremony: persist learnings, decisions, tasks /ctx-next Suggest 1-3 concrete next actions from context /ctx-commit Commit with integrated context capture /ctx-reflect Pause and assess session progress /ctx-consolidate Merge overlapping learnings or decisions /ctx-prompt-audit Analyze prompting patterns for improvement /ctx-import-plans Import Claude Code plan files into project specs /ctx-implement Execute a plan step-by-step with verification /ctx-worktree Manage parallel agent worktrees /ctx-journal-enrich Add metadata, tags, and summaries to journal entries /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich /ctx-blog Generate a blog post (zensical-flavored Markdown) /ctx-blog-changelog Generate themed blog post from commits between releases /ctx-architecture Build and maintain architecture maps (ARCHITECTURE.md, DETAILED_DESIGN.md)","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#cli-only-commands","level":3,"title":"CLI-Only Commands","text":"
These are infrastructure: used in scripts, CI, or one-time setup.
Command Purpose ctx init Initialize .context/ directory ctx load Output assembled context for piping ctx task complete Mark a task done by substring match ctx sync Reconcile context with codebase state ctx compact Consolidate and clean up context files ctx trace Show context behind git commits ctx trace hook Enable/disable commit context tracing hook ctx setup Generate AI tool integration config ctx watch Watch AI output and auto-apply context updates ctx serve Serve any zensical directory (default: journal) ctx permission snapshot Save settings as a golden image ctx permission restore Restore settings from golden image ctx journal site Generate browsable journal from exports ctx notify setup Configure webhook notifications ctx decision List and filter decisions ctx learning List and filter learnings ctx task List tasks, manage archival and snapshots ctx why Read the philosophy behind ctx ctx guide Quick-reference cheat sheet ctx site Site management commands ctx config Manage runtime configuration profiles ctx system System diagnostics and hook commands ctx system backup Back up context and Claude data to tar.gz / SMB ctx completion Generate shell autocompletion scripts
Rule of Thumb
Quick check? Use the CLI.
Need judgment? Use the skill.
When in doubt, start with the CLI: It's free and instant.
Escalate to the skill when heuristics aren't enough.
Next Up: Context Files →: what each .context/ file does and how to use it
See Also:
Recipes: targeted how-to guides for specific tasks
Knowledge Capture: patterns for recording decisions, learnings, and conventions
Context Health: keeping your .context/ accurate and drift-free
Session Archaeology: digging into past sessions
Task Management: tracking and completing work items
We are the builders who care about durable context, verifiable decisions, and human-AI workflows that compound over time.
","path":["Home","#ctx"],"tags":[]},{"location":"home/community/#help-ctx-change-how-ai-remembers","level":2,"title":"Help ctx Change How AI Remembers","text":"
If you like the idea, a star helps ctx reach engineers who run into context drift every day:
The .ctxrc file is an optional YAML file placed in the project root (next to your .context/ directory). It lets you set project-level defaults that apply to every ctx command.
ctx looks for .ctxrc in the current working directory when any command runs. There is no global or user-level config file: Configuration is always per-project.
Contributors: Dev Configuration Profile
The ctx repo ships two .ctxrc source profiles (.ctxrc.base and .ctxrc.dev). The working copy is gitignored and swapped between them via ctx config switch dev / ctx config switch base. See Contributing: Configuration Profiles.
Using a Different .context Directory
The default .context/ directory can be changed per-project via the context_dir key in .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
See Environment Variables and CLI Global Flags below for details.
A commented .ctxrc showing all options and their defaults:
# .ctxrc: ctx runtime configuration\n# https://ctx.ist/configuration/\n#\n# All settings are optional. Missing values use defaults.\n# Priority: CLI flags > environment variables > .ctxrc > defaults\n#\n# context_dir: .context\n# token_budget: 8000\n# auto_archive: true\n# archive_after_days: 7\n# scratchpad_encrypt: true\n# allow_outside_cwd: false\n# event_log: false\n# entry_count_learnings: 30\n# entry_count_decisions: 20\n# convention_line_count: 200\n# injection_token_warn: 15000\n# context_window: 200000 # auto-detected for Claude Code; override for other tools\n# billing_token_warn: 0 # one-shot warning at this token count (0 = disabled)\n#\n# stale_age_days: 30 # days before drift flags a context file as stale (0 = disabled)\n# key_rotation_days: 90\n# task_nudge_interval: 5 # Edit/Write calls between task completion nudges\n#\n# notify: # requires: ctx notify setup\n# events: # required: no events sent unless listed\n# - loop\n# - nudge\n# - relay\n#\n# priority_order:\n# - CONSTITUTION.md\n# - TASKS.md\n# - CONVENTIONS.md\n# - ARCHITECTURE.md\n# - DECISIONS.md\n# - LEARNINGS.md\n# - GLOSSARY.md\n# - AGENT_PLAYBOOK.md\n
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#option-reference","level":3,"title":"Option Reference","text":"Option Type Default Description context_dir string .context Context directory name (relative to project root) token_budget int 8000 Default token budget for ctx agent and ctx load auto_archive bool true Auto-archive completed tasks during ctx compact archive_after_days int 7 Days before completed tasks are archived scratchpad_encrypt bool true Encrypt scratchpad with AES-256-GCM allow_outside_cwd bool false Allow context directory outside the current working directory event_log bool false Enable local hook event logging to .context/state/events.jsonl entry_count_learnings int 30 Drift warning when LEARNINGS.md exceeds this entry count (0 = disable) entry_count_decisions int 20 Drift warning when DECISIONS.md exceeds this entry count (0 = disable) convention_line_count int 200 Drift warning when CONVENTIONS.md exceeds this line count (0 = disable) injection_token_warn int 15000 Warn when auto-injected context exceeds this token count (0 = disable) context_window int 200000 Context window size in tokens. Auto-detected for Claude Code (200k/1M); override for other AI tools billing_token_warn int 0 (off) One-shot warning when session tokens exceed this threshold (0 = disabled). For plans where tokens beyond an included allowance cost extra stale_age_days int 30 Days before ctx drift flags a context file as stale (0 = disable) key_rotation_days int 90 Days before encryption key rotation nudge task_nudge_interval int 5 Edit/Write calls between task completion nudges notify.events []string (all) Event filter for webhook notifications (empty = all) priority_order []string (see below) Custom file loading priority for context assembly
Default priority order (used when priority_order is not set):
CONSTITUTION.md
TASKS.md
CONVENTIONS.md
ARCHITECTURE.md
DECISIONS.md
LEARNINGS.md
GLOSSARY.md
AGENT_PLAYBOOK.md
See Context Files for the rationale behind this ordering.
Environment variables override .ctxrc values but are overridden by CLI flags.
Variable Description Equivalent .ctxrc key CTX_DIR Override the context directory path context_dir CTX_TOKEN_BUDGET Override the default token budget token_budget","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#examples","level":3,"title":"Examples","text":"
# Use a shared context directory\nCTX_DIR=/shared/team-context ctx status\n\n# Increase token budget for a single run\nCTX_TOKEN_BUDGET=16000 ctx agent\n
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#cli-global-flags","level":2,"title":"CLI Global Flags","text":"
CLI flags have the highest priority and override both environment variables and .ctxrc settings. These flags are available on every ctx command.
| Flag | Description |
|------|-------------|
| --context-dir <path> | Override context directory (default: .context/) |
| --allow-outside-cwd | Allow context directory outside current working directory |
| --version | Show version and exit |
| --help | Show command help and exit |
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#examples_1","level":3,"title":"Examples","text":"
# Point to a different context directory:\nctx status --context-dir /path/to/shared/.context\n\n# Allow external context directory (skips boundary check):\nctx status --context-dir /mnt/nas/project-context --allow-outside-cwd\n
| Layer | Value | Wins? |
|-------|-------|-------|
| --context-dir | /tmp/ctx | Yes |
| CTX_DIR | /shared/context | No |
| .ctxrc | .my-context | No |
| Default | .context | No |
The CLI flag value /tmp/ctx wins because CLI flags have the highest priority.
If the CLI flag were absent, CTX_DIR=/shared/context would win. If neither the flag nor the env var were set, the .ctxrc value .my-context would be used. With nothing configured, the default .context applies.
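That resolution order can be sketched in shell. This is illustrative only: ctx resolves the chain internally, and the one-line .ctxrc parse below assumes a `context_dir: value` syntax.

```shell
# Illustrative precedence chain: CLI flag > env var > .ctxrc > default
resolve_context_dir() {
  flag_value="$1"                       # value of --context-dir, may be empty
  if [ -n "$flag_value" ]; then echo "$flag_value"; return; fi
  if [ -n "$CTX_DIR" ]; then echo "$CTX_DIR"; return; fi
  # Simplified .ctxrc lookup (assumes "context_dir: value" syntax)
  rc_value=$(grep -E '^context_dir:' .ctxrc 2>/dev/null | awk '{print $2}')
  if [ -n "$rc_value" ]; then echo "$rc_value"; return; fi
  echo ".context"                       # built-in default
}

resolve_context_dir "/tmp/ctx"          # prints /tmp/ctx: the flag wins
```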
Get a one-shot warning when your session crosses a token threshold where extra charges begin (e.g., Claude Pro includes 200k tokens; beyond that costs extra):
# .ctxrc\nbilling_token_warn: 180000 # warn before hitting the 200k paid boundary\n
The warning fires once per session the first time token usage exceeds the threshold. Set to 0 (or omit) to disable.
Hook messages control what text hooks emit when they fire. Each message can be overridden per-project by placing a text file at the matching path under .context/:
.context/hooks/messages/{hook}/{variant}.txt\n
The override takes priority over the embedded default compiled into the ctx binary. An empty file silences the message while preserving the hook's logic (counting, state tracking, cooldowns).
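For example, to override a message by hand (the qa-reminder hook and its gate variant appear in this guide's own examples; the replacement text is illustrative):

```shell
# Project-local override for the qa-reminder hook's "gate" message
mkdir -p .context/hooks/messages/qa-reminder
printf 'Run make audit before committing.\n' > .context/hooks/messages/qa-reminder/gate.txt

# To silence it instead, truncate the file: the hook logic
# (counting, state tracking, cooldowns) still runs, but nothing is emitted
: > .context/hooks/messages/qa-reminder/gate.txt
```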
Use ctx system message to discover and manage overrides:
ctx system message list # see all messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default for editing\nctx system message reset qa-reminder gate # revert to default\n
See Customizing Hook Messages for detailed examples including Python, JavaScript, and silence configurations.
AI agents need to know the resolved context directory at session start. The ctx system bootstrap command prints the context path, file list, and operating rules in both text and JSON formats:
ctx system bootstrap # text output for agents\nctx system bootstrap -q # just the context directory path\nctx system bootstrap --json # structured output for automation\n
The CLAUDE.md template instructs the agent to run this as its first action. Every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: <dir> footer that re-anchors the agent to the correct directory throughout the session.
This replaces the previous approach of hardcoding .context/ paths in agent instructions.
See CLI Reference: bootstrap for full details.
See also: CLI Reference | Context Files | Scratchpad
Each context file in .context/ serves a specific purpose.
Files are designed to be human-readable, AI-parseable, and token-efficient.
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#file-overview","level":2,"title":"File Overview","text":"
| File | Purpose | Priority |
|------|---------|----------|
| CONSTITUTION.md | Hard rules that must NEVER be violated | 1 (highest) |
| TASKS.md | Current and planned work | 2 |
| CONVENTIONS.md | Project patterns and standards | 3 |
| ARCHITECTURE.md | System overview and components | 4 |
| DECISIONS.md | Architectural decisions with rationale | 5 |
| LEARNINGS.md | Lessons learned, gotchas, tips | 6 |
| GLOSSARY.md | Domain terms and abbreviations | 7 |
| AGENT_PLAYBOOK.md | Instructions for AI tools | 8 (lowest) |
| templates/ | Entry format templates for ctx add (optional) | |
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#read-order-rationale","level":2,"title":"Read Order Rationale","text":"
The priority order follows a logical progression for AI tools:
CONSTITUTION.md: Inviolable rules first. The AI tool must know what it cannot do before attempting anything.
TASKS.md: Current work items. What the AI tool should focus on.
CONVENTIONS.md: How to write code. Patterns and standards to follow when implementing tasks.
ARCHITECTURE.md: System structure. Understanding of components and boundaries before making changes.
DECISIONS.md: Historical context. Why things are the way they are, to avoid re-debating settled decisions.
LEARNINGS.md: Gotchas and tips. Lessons from past work that inform the current implementation.
GLOSSARY.md: Reference material. Domain terms and abbreviations for lookup as needed.
AGENT_PLAYBOOK.md: Meta instructions last. How to use this context system itself. Loaded last because the agent should understand the content (rules, tasks, patterns) before the operating manual.
# Constitution\n\nThese rules are INVIOLABLE. If a task requires violating these, the task \nis wrong.\n\n## Security Invariants\n\n* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] Never store customer/user data in context files\n* [ ] Never disable security linters without documented exception\n\n## Quality Invariants\n\n* [ ] All code must pass tests before commit\n* [ ] No `any` types in TypeScript without documented reason\n* [ ] No TODO comments in main branch (*move to `TASKS.md`*)\n\n## Process Invariants\n\n* [ ] All architectural changes require a decision record\n* [ ] Breaking changes require version bump\n* [ ] Generated files are never committed\n
| Tag | Values | Purpose |
|-----|--------|---------|
| #priority | high, medium, low | Task urgency |
| #area | core, cli, docs, tests | Codebase area |
| #estimate | 1h, 4h, 1d | Time estimate (optional) |
| #in-progress | (none) | Currently being worked on |
Lifecycle tags (for session correlation):
| Tag | Format | When to add |
|-----|--------|-------------|
| #added | YYYY-MM-DD-HHMMSS | Auto-added by ctx add task |
| #started | YYYY-MM-DD-HHMMSS | When beginning work on the task |
| #done | YYYY-MM-DD-HHMMSS | When marking the task [x] |
These timestamps help correlate tasks with session files and track which session started vs completed work.
# Decisions\n\n## [YYYY-MM-DD] Decision Title\n\n**Status**: Accepted | Superseded | Deprecated\n\n**Context**: What situation prompted this decision?\n\n**Decision**: What was decided?\n\n**Rationale**: Why was this the right choice?\n\n**Consequence**: What are the implications?\n\n**Alternatives Considered**:\n* Alternative A: Why rejected\n* Alternative B: Why rejected\n
## [2025-01-15] Use TypeScript Strict Mode\n\n**Status**: Accepted\n\n**Context**: Starting a new project, need to choose the type-checking level.\n\n**Decision**: Enable TypeScript strict mode with all strict flags.\n\n**Rationale**: Catches more bugs at compile time. Team has experience\nwith strict mode. Upfront cost pays off in reduced runtime errors.\n\n**Consequence**: More verbose type annotations required. Some\nthird-party libraries need type assertions.\n\n**Alternatives Considered**:\n- Basic TypeScript: Rejected because it misses null checks\n- JavaScript with JSDoc: Rejected because tooling support is weaker\n
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#status-values","level":3,"title":"Status Values","text":"
| Status | Meaning |
|--------|---------|
| Accepted | Current, active decision |
| Superseded | Replaced by newer decision (link to it) |
| Deprecated | No longer relevant |
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#learningsmd","level":2,"title":"LEARNINGS.md","text":"
Purpose: Capture lessons learned, gotchas, and tips that shouldn't be forgotten.
# Learnings\n\n## Category Name\n\n### Learning Title\n\n**Discovered**: YYYY-MM-DD\n\n**Context**: When/how was this learned?\n\n**Lesson**: What's the takeaway?\n\n**Application**: How should this inform future work?\n
## Testing\n\n### Vitest Mocks Must Be Hoisted\n\n**Discovered**: 2025-01-15\n\n**Context**: Tests were failing intermittently when mocking fs module.\n\n**Lesson**: Vitest requires `vi.mock()` calls to be hoisted to the\ntop of the file. Dynamic mocks need `vi.doMock()` instead.\n\n**Application**: Always use `vi.mock()` at file top. Use `vi.doMock()`\nonly when mock needs runtime values.\n
# Conventions\n\n## Naming\n\n* **Files**: kebab-case for all source files\n* **Components**: PascalCase for React components\n* **Functions**: camelCase, verb-first (getUser, parseConfig)\n* **Constants**: SCREAMING_SNAKE_CASE\n\n## Patterns\n\n### Pattern Name\n\n**When to use**: Situation description\n\n**Implementation**:\n// in triple backticks\n// Example code\n\n**Why**: Rationale for this pattern\n
# Architecture\n\n## Overview\n\nBrief description of what the system does and how it's organized.\n\n## Components\n\n### Component Name\n\n**Responsibility**: What this component does\n\n**Dependencies**: What it depends on\n\n**Dependents**: What depends on it\n\n**Key Files**:\n* path/to/file.ts: Description\n\n## Data Flow\n\nDescription or diagram of how data moves through the system.\n\n## Boundaries\n\nWhat's in scope vs out of scope for this codebase.\n
# Glossary\n\n## Domain Terms\n\n### Term Name\n\n**Definition**: What it means in this project's context\n\n**Not to be confused with**: Similar terms that mean different things\n\n**Example**: How it's used\n\n## Abbreviations\n\n| Abbrev | Expansion | Context |\n|--------|-------------------------------|------------------------|\n| ADR | Architectural Decision Record | Decision documentation |\n| SUT | System Under Test | Testing |\n
Read Order: Priority order for loading context files
When to Update: Events that trigger context updates
How to Avoid Hallucinating Memory: Critical rules:
Never assume: If not in files, you don't know it
Never invent history: Don't claim \"we discussed\" without evidence
Verify before referencing: Search files before citing
When uncertain, say so
Trust files over intuition
Context Update Commands: Format for automated updates via ctx watch:
<context-update type=\"task\">Implement rate limiting</context-update>\n<context-update type=\"complete\">user auth</context-update>\n<context-update type=\"learning\"\n context=\"Debugging hooks\"\n lesson=\"Hooks receive JSON via stdin\"\n application=\"Parse JSON stdin with the host language\"\n>Hook Input Format</context-update>\n<context-update type=\"decision\"\n context=\"Need a caching layer\"\n rationale=\"Redis is fast and team has experience\"\n consequence=\"Must provision Redis infrastructure\"\n>Use Redis for caching</context-update>\n
Purpose: Format templates for ctx add decision and ctx add learning. These control the structure of new entries appended to DECISIONS.md and LEARNINGS.md.
Edit the templates directly. Changes take effect immediately on the next ctx add command. For example, to add a \"References\" section to all new decisions, edit .context/templates/decision.md.
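For instance, a customized .context/templates/decision.md with a References section might look like this. It is a sketch modeled on the decision format shown earlier in this guide; the shipped template and its placeholder syntax may differ, so inspect the file before editing:

```markdown
## [{{date}}] {{title}}

**Status**: Accepted

**Context**: {{context}}

**Rationale**: {{rationale}}

**Consequence**: {{consequence}}

**References**:
* (related issues, PRs, or docs)
```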
Templates are committed to git, so customizations are shared with the team.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#1-fork-or-clone-the-repository","level":3,"title":"1. Fork (or Clone) the Repository","text":"
# Fork on GitHub, then:\ngit clone https://github.com/<you>/ctx.git\ncd ctx\n\n# Or, if you have push access:\ngit clone https://github.com/ActiveMemory/ctx.git\ncd ctx\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#2-build-and-install-the-binary","level":3,"title":"2. Build and Install the Binary","text":"
make build\nsudo make install\n
This compiles the ctx binary and places it in /usr/local/bin/.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#3-install-the-plugin-from-your-local-clone","level":3,"title":"3. Install the Plugin from Your Local Clone","text":"
The repository ships a Claude Code plugin under internal/assets/claude/. Point Claude Code at your local copy so that skills and hooks reflect your working tree (no reinstall needed after edits):
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace;
Enter the absolute path to the root of your clone, e.g. ~/WORKSPACE/ctx (this is where .claude-plugin/marketplace.json lives: it points Claude Code to the actual plugin in internal/assets/claude);
Back in /plugin, select Install and choose ctx.
Claude Code Caches Plugin Files
Even though the marketplace points at a directory on disk, Claude Code caches skills and hooks. After editing files under internal/assets/claude/, clear the cache and restart:
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#skills-two-directories-one-rule","level":3,"title":"Skills: Two Directories, One Rule","text":"
| Directory | What lives here | Distributed to users? |
|-----------|-----------------|-----------------------|
| internal/assets/claude/skills/ | The 39 ctx-* skills that ship with the plugin | Yes |
| .claude/skills/ | Dev-only skills (release, QA, backup, etc.) | No |
internal/assets/claude/skills/ is the single source of truth for user-facing skills. If you are adding or modifying a ctx-* skill, edit it there.
.claude/skills/ holds skills that only make sense inside this repository (release automation, QA checks, backup scripts). These are never distributed to users.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#dev-only-skills-reference","level":4,"title":"Dev-Only Skills Reference","text":"
| Skill | When to use |
|-------|-------------|
| /_ctx-absorb | Merge deltas from a parallel worktree or separate checkout |
| /_ctx-audit | Detect code-level drift after YOLO sprints or before releases |
| /_ctx-backup | Backup context and Claude data to SMB share |
| /_ctx-qa | Run QA checks before committing |
| /_ctx-release | Run the full release process |
| /_ctx-release-notes | Generate release notes for dist/RELEASE_NOTES.md |
| /_ctx-alignment-audit | Audit doc claims against agent instructions |
| /_ctx-update-docs | Check docs/code consistency after changes |
Five skills previously in this list have been promoted to bundled plugin skills and are now available to all ctx users: /ctx-brainstorm, /ctx-check-links, /ctx-sanitize-permissions, /ctx-skill-creator, /ctx-spec.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#how-to-add-things","level":2,"title":"How To Add Things","text":"","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-new-cli-command","level":3,"title":"Adding a New CLI Command","text":"
Create a package under internal/cli/<name>/;
Implement Cmd() *cobra.Command as the entry point;
Register it in internal/bootstrap/bootstrap.go (add import + call in Initialize);
Use cmd.Printf/cmd.Println for output (not fmt.Print);
Add tests in the same package (<name>_test.go);
Add a section to the appropriate CLI doc page in docs/cli/.
Pattern to follow: internal/cli/pad/pad.go (parent with subcommands) or internal/cli/drift/drift.go (single command).
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-new-session-parser","level":3,"title":"Adding a New Session Parser","text":"
The journal system uses a SessionParser interface. To add support for a new AI tool (e.g. Aider, Cursor):
Create internal/journal/parser/<tool>.go;
Implement parsing logic that returns []*Session;
Register the parser in FindSessions() / FindSessionsForCWD();
Use config.Tool* constants for the tool identifier;
Add test fixtures and parser tests.
Pattern to follow: the Claude Code JSONL parser in internal/journal/parser/.
Multilingual session headers
The Markdown parser recognizes session header prefixes configured via session_prefixes in .ctxrc (default: Session:). To support a new language, users add a prefix to their .ctxrc: no code change needed. New parser implementations can use rc.SessionPrefixes() if they also need prefix-based header detection.
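For example, a project that also journals in German might add a prefix like this. This is a hypothetical .ctxrc entry: the key name session_prefixes comes from the docs, but the exact value syntax should be checked against your ctx version's configuration reference:

```yaml
# .ctxrc
session_prefixes: ["Session:", "Sitzung:"]
```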
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-bundled-skill","level":3,"title":"Adding a Bundled Skill","text":"
The repo ships two .ctxrc source profiles. The working copy (.ctxrc) is gitignored and swapped between them:
| File | Purpose |
|------|---------|
| .ctxrc.base | Golden baseline: all defaults, no logging |
| .ctxrc.dev | Dev profile: notify events enabled, verbose logging |
| .ctxrc | Working copy (gitignored: copied from one of the above) |
Use ctx commands to switch:
ctx config switch dev # switch to dev profile\nctx config switch base # switch to base profile\nctx config status # show which profile is active\n
After cloning, run ctx config switch dev to get started with full logging.
See Configuration for the full .ctxrc option reference.
Back up project context and global Claude Code data with:
ctx system backup # both project + global (default)\nctx system backup --scope project # .context/, .claude/, ideas/ only\nctx system backup --scope global # ~/.claude/ only\n
Archives are saved to /tmp/. When CTX_BACKUP_SMB_URL is configured, they are also copied to an SMB share. See CLI Reference: backup for details.
make test # fast: all tests\nmake audit # full: fmt + vet + lint + drift + docs + test\nmake smoke # build + run basic commands end-to-end\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#running-the-docs-site-locally","level":3,"title":"Running the Docs Site Locally","text":"
make site-setup # one-time: install zensical via pipx\nmake site-serve # serve at localhost\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#submitting-changes","level":2,"title":"Submitting Changes","text":"","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#before-you-start","level":3,"title":"Before You Start","text":"
Check existing issues to avoid duplicating effort;
For large changes, open an issue first to discuss the approach;
Markdown is human-readable, version-controllable, and tool-agnostic. Every AI model can parse it natively. Every developer can read it in a terminal, a browser, or a code review. There's no schema to learn, no binary format to decode, no vendor lock-in. You can inspect your context with cat, diff it with git diff, and review it in a PR.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#does-ctx-work-offline","level":2,"title":"Does ctx work offline?","text":"
Yes. ctx is completely local. It reads and writes files on disk, generates context packets from local state, and requires no network access. The only feature that touches the network is the optional webhook notifications hook, which you have to explicitly configure.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#what-gets-committed-to-git","level":2,"title":"What gets committed to git?","text":"
The .context/ directory: yes, commit it. That's the whole point. Team members and AI agents read the same context files.
What not to commit:
.ctx.key: your encryption key. Stored at ~/.ctx/.ctx.key, never in the repo. ctx init handles this automatically.
journal/ and logs/: generated data, potentially large. ctx init adds these to .gitignore.
scratchpad.enc: your choice. It's encrypted, so it's safe to commit if you want shared scratchpad state. See Scratchpad for details.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#how-big-should-my-token-budget-be","level":2,"title":"How big should my token budget be?","text":"
The default is 8000 tokens, which works well for most projects. Configure it via .ctxrc or the CTX_TOKEN_BUDGET environment variable:
# In .ctxrc\ntoken_budget: 12000\n\n# Or as an environment variable\nexport CTX_TOKEN_BUDGET=12000\n\n# Or per-invocation\nctx agent --budget 4000\n
Higher budgets include more context but cost more tokens per request. Lower budgets force sharper prioritization: ctx drops lower-priority content first, so CONSTITUTION and TASKS always make the cut.
See Configuration for all available settings.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#why-not-a-database","level":2,"title":"Why not a database?","text":"
Files are inspectable, diffable, and reviewable in pull requests. You can grep them, cat them, pipe them through jq or awk. They work with every version control system and every text editor.
A database would add a dependency, require migrations, and make context opaque. The design bet is that context should be as visible and portable as the code it describes.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#does-it-work-with-tools-other-than-claude-code","level":2,"title":"Does it work with tools other than Claude Code?","text":"
Yes. ctx agent outputs a context packet that any AI tool can consume: paste it into ChatGPT, Cursor, Copilot, Aider, or anything else that accepts text input.
Claude Code gets first-class integration via the ctx plugin (hooks, skills, automatic context loading). VS Code Copilot Chat has a dedicated ctx extension. Other tools integrate via generated instruction files or manual pasting.
See Integrations for tool-specific setup, including the multi-tool recipe.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#can-i-use-ctx-on-an-existing-project","level":2,"title":"Can I use ctx on an existing project?","text":"
Yes. Run ctx init in any repo and it creates .context/ with template files. Start recording decisions, tasks, and conventions as you work. Context grows naturally; you don't need to backfill everything on day one.
See Getting Started for the full setup flow, or Joining a ctx Project if someone else already initialized it.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#what-happens-when-context-files-get-too-big","level":2,"title":"What happens when context files get too big?","text":"
Token budgeting handles this automatically. ctx agent prioritizes content by file priority (CONSTITUTION first, GLOSSARY last) and trims lower-priority entries when the budget is tight.
For manual maintenance, ctx compact archives completed tasks and old entries, keeping active context lean. You can also run ctx task archive to move completed tasks out of TASKS.md.
The goal is to keep context files focused on current state. Historical entries belong in git history or the archive.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#is-context-meant-to-be-shared","level":2,"title":"Is .context/ meant to be shared?","text":"
Yes. Commit it to your repo. Every team member and every AI agent reads the same files. That's the mechanism for shared memory: decisions made in one session are visible in the next, regardless of who (or what) starts it.
The only per-user state is the encryption key (~/.ctx/.ctx.key) and the optional scratchpad. Everything else is team-shared by design.
Related:
Getting Started - installation and first setup
Configuration - .ctxrc, environment variables, and defaults
Context Files - what each file does and how to use it
","path":["Home","FAQ"],"tags":[]},{"location":"home/first-session/","level":1,"title":"Your First Session","text":"
Here's what a complete first session looks like, from initialization to the moment your AI cites your project context back to you.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-1-initialize-your-project","level":2,"title":"Step 1: Initialize Your Project","text":"
Run ctx init in your project root:
cd your-project\nctx init\n
Sample output:
Context initialized in .context/\n\n ✓ CONSTITUTION.md\n ✓ TASKS.md\n ✓ DECISIONS.md\n ✓ LEARNINGS.md\n ✓ CONVENTIONS.md\n ✓ ARCHITECTURE.md\n ✓ GLOSSARY.md\n ✓ AGENT_PLAYBOOK.md\n\nSetting up encryption key...\n ✓ ~/.ctx/.ctx.key\n\nClaude Code plugin (hooks + skills):\n Install: claude /plugin marketplace add ActiveMemory/ctx\n Then: claude /plugin install ctx@activememory-ctx\n\nNext steps:\n 1. Edit .context/TASKS.md to add your current tasks\n 2. Run 'ctx status' to see context summary\n 3. Run 'ctx agent' to get AI-ready context packet\n
This created your .context/ directory with template files.
For Claude Code, install the ctx plugin to get automatic hooks and skills.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-2-populate-your-context","level":2,"title":"Step 2: Populate Your Context","text":"
Add a task and a decision: These are the entries your AI will remember:
ctx add task \"Implement user authentication\"\n\n# Output: ✓ Added to TASKS.md\n\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Output: ✓ Added to DECISIONS.md\n
You don't need to populate everything now: Context grows naturally as you work.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-3-check-your-context","level":2,"title":"Step 3: Check Your Context","text":"
Notice the token estimate: This is how much context your AI will load.
The ○ next to LEARNINGS.md means it's still empty; it will fill in as you capture lessons during development.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-4-start-an-ai-session","level":2,"title":"Step 4: Start an AI Session","text":"
With Claude Code (and the ctx plugin), start every session with:
/ctx-remember\n
This loads your context and presents a structured readback so you can confirm the agent knows what is going on. Context also loads automatically via hooks, but the explicit ceremony gives you a readback to verify.
Using VS Code?
With VS Code Copilot Chat (and the ctx extension), type @ctx /agent in chat to load your context packet, or @ctx /status to check your project context. Run ctx setup copilot --write once to generate .github/copilot-instructions.md for automatic context loading.
If you are not using Claude Code, generate a context packet for your AI tool:
ctx agent --budget 8000\n
Sample output:
# Context Packet\nGenerated: 2026-02-14T15:30:45Z | Budget: 8000 tokens | Used: ~2450\n\n## Read These Files (in order)\n1. .context/CONSTITUTION.md\n2. .context/TASKS.md\n3. .context/CONVENTIONS.md\n...\n\n## Current Tasks\n- [ ] Implement user authentication\n- [ ] Add rate limiting to API endpoints\n\n## Key Conventions\n- Use gofmt for formatting\n- Path construction uses filepath.Join\n\n## Recent Decisions\n## [2026-02-14-120000] Use PostgreSQL for the primary database\n\n**Context**: Evaluated PostgreSQL, MySQL, and SQLite...\n**Rationale**: PostgreSQL offers better JSON support...\n\n## Key Learnings\n## [2026-02-14-100000] Connection pool sizing matters\n\n**Context**: Hit connection limits under load...\n**Lesson**: Default pool size of 10 is too low for concurrent requests...\n\n## Also Noted\n- Use JWT for session management\n- Always validate input at API boundary\n
Paste this output into your AI tool's system prompt or conversation start.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-5-verify-it-works","level":2,"title":"Step 5: Verify It Works","text":"
Ask your AI: \"What are our current tasks?\"
A working setup produces a response like:
Based on the project context, you have one active task:\n\n- **Implement user authentication** (pending)\n\nThere's also a recent architectural decision to **use PostgreSQL for\nthe primary database**, chosen for its ACID compliance and JSON support.\n\nWant me to start on the authentication task?\n
That's the success moment:
The AI is citing your exact context entries from Step 2, not hallucinating or asking you to re-explain.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#what-gets-created","level":2,"title":"What Gets Created","text":"
.context/\n├── CONSTITUTION.md # Hard rules: NEVER violate these\n├── TASKS.md # Current and planned work\n├── CONVENTIONS.md # Project patterns and standards\n├── ARCHITECTURE.md # System overview\n├── DECISIONS.md # Architectural decisions with rationale\n├── LEARNINGS.md # Lessons learned, gotchas, tips\n├── GLOSSARY.md # Domain terms and abbreviations\n└── AGENT_PLAYBOOK.md # How AI tools should use this\n
Claude Code integration (hooks + skills) is provided by the ctx plugin: See Integrations/Claude Code.
VS Code Copilot Chat integration is provided by the ctx extension: See Integrations/VS Code.
See Context Files for detailed documentation of each file.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#what-to-gitignore","level":2,"title":"What to .gitignore","text":"
Rule of Thumb
If it's knowledge (decisions, tasks, learnings, conventions), commit it.
If it's generated output, raw session data, or a secret, .gitignore it.
Commit your .context/ knowledge files: that's the whole point.
You should .gitignore the generated and sensitive paths:
# Journal data (large, potentially sensitive)\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Hook logs (machine-specific)\n.context/logs/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
ctx init Patches Your .gitignore for You
ctx init automatically adds these entries to your .gitignore.
Review the additions with cat .gitignore after init.
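If a key still lives at the legacy path, a one-time migration might look like this (a sketch: the destination ~/.ctx/.ctx.key is the current per-user location named in this guide):

```shell
# Copy the legacy key to the current per-user location
# (no-op when the legacy file does not exist)
if [ -f .context/.ctx.key ]; then
  mkdir -p ~/.ctx
  cp .context/.ctx.key ~/.ctx/.ctx.key
fi
```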
See also:
Security Considerations
Scratchpad Encryption
Session Journal
Next Up: Common Workflows →: day-to-day commands for tracking context, checking health, and browsing history.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/getting-started/","level":1,"title":"Getting Started","text":"","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#prerequisites","level":2,"title":"Prerequisites","text":"
ctx does not require git, but using version control with your .context/ directory is strongly recommended:
AI sessions occasionally modify or overwrite context files inadvertently. With git, the AI can check history and restore lost content: Without it, the data is gone.
Several ctx features (journal changelog, blog generation) also use git history directly.
Every setup starts with the ctx binary: the CLI tool itself.
If you use Claude Code, you also install the ctx plugin, which adds hooks (context autoloading, persistence nudges) and 25+ /ctx-* skills. For other AI tools, ctx integrates via generated instruction files or manual context pasting: see Integrations for tool-specific setup.
Pick one of the options below to install the binary. Claude Code users should also follow the plugin steps included in each option.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#option-1-build-from-source-recommended","level":3,"title":"Option 1: Build from Source (Recommended)","text":"
Requires Go (version defined in go.mod) and Claude Code.
git clone https://github.com/ActiveMemory/ctx.git\ncd ctx\nmake build\nsudo make install\n
Install the Claude Code plugin from your local clone:
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace;
Enter the path to the root of your clone, e.g. ~/WORKSPACE/ctx (this is where .claude-plugin/marketplace.json lives: It points Claude Code to the actual plugin in internal/assets/claude);
Back in /plugin, select Install and choose ctx.
This points Claude Code at the plugin source on disk. Changes you make to hooks or skills take effect immediately: No reinstall is needed.
Local Installs Need Manual Enablement
Unlike marketplace installs, local plugin installs are not auto-enabled globally. The plugin will only work in projects that explicitly enable it. Run ctx init in each project (it auto-enables the plugin), or add the entry to ~/.claude/settings.json manually:
Download ctx-0.8.1-windows-amd64.exe from the releases page and add it to your PATH.
Claude Code users: install the plugin from the marketplace:
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace;
Enter ActiveMemory/ctx;
Back in /plugin, select Install and choose ctx.
Other tool users: see Integrations for tool-specific setup (Cursor, Copilot, Aider, Windsurf, etc.).
Verify the Plugin Is Enabled
After installing, confirm the plugin is enabled globally. Check ~/.claude/settings.json for an enabledPlugins entry. If missing, run ctx init in your project (it auto-enables the plugin), or add it manually:
This creates a .context/ directory with template files and an encryption key at ~/.ctx/ for the encrypted scratchpad. For Claude Code, install the ctx plugin for automatic hooks and skills.
Shows context summary: files present, token estimate, and recent activity.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#3-start-using-with-ai","level":3,"title":"3. Start Using with AI","text":"
With Claude Code (and the ctx plugin installed), context loads automatically via hooks.
With VS Code Copilot Chat, install the ctx extension and use @ctx /status, @ctx /agent, and other slash commands directly in chat. Run ctx setup copilot --write to generate .github/copilot-instructions.md for automatic context loading.
For other tools, paste the output of:
ctx agent --budget 8000\n
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#4-verify-it-works","level":3,"title":"4. Verify It Works","text":"
Ask your AI: \"Do you remember?\"
It should cite specific context: current tasks, recent decisions, or previous session topics.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#5-set-up-companion-tools-highly-recommended","level":3,"title":"5. Set Up Companion Tools (Highly Recommended)","text":"
ctx works on its own, but two companion MCP servers unlock significantly better agent behavior. The investment is small and the benefits compound over sessions:
Gemini Search — grounded web search with citations. Skills like /ctx-code-review and /ctx-explain use it for up-to-date documentation lookups instead of relying on training data.
GitNexus — code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Skills like /ctx-refactor and /ctx-code-review use it for impact analysis and dependency awareness.
# Index your project for GitNexus (run once, then after major changes)\nnpx gitnexus analyze\n
Both are optional MCP servers: if they are not connected, skills degrade gracefully to built-in capabilities. See Companion Tools for setup details and verification.
Next Up:
Your First Session →: a step-by-step walkthrough from ctx init to verified recall
Common Workflows →: day-to-day commands for tracking context, checking health, and browsing history
","path":["Home","Getting Started"],"tags":[]},{"location":"home/is-ctx-right/","level":1,"title":"Is It Right for Me?","text":"","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#good-fit","level":2,"title":"Good Fit","text":"
ctx shines when context matters more than code.
If any of these sound like your project, it's worth trying:
Multi-session AI work: You use AI across many sessions on the same codebase, and re-explaining is slowing you down.
Architectural decisions that matter: Your project has non-obvious choices (database, auth strategy, API design) that the AI keeps second-guessing.
\"Why\" matters as much as \"what\": you need the AI to understand rationale, not just current code
Team handoffs: Multiple people (or multiple AI tools) work on the same project and need shared context.
AI-assisted development across tools: You switch between Claude Code, Cursor, Copilot, or other tools and want context to follow the project, not the tool.
Long-lived projects: Anything you'll work on for weeks or months, where accumulated knowledge has compounding value.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#may-not-be-the-right-fit","level":2,"title":"May Not Be the Right Fit","text":"
ctx adds overhead that isn't worth it for every project. Be honest about when to skip it:
One-off scripts: If the project is a single file you'll finish today, there's nothing to remember.
RAG-only workflows: If retrieval from an external knowledge base already gives the agent everything it needs for each session, adding ctx may be unnecessary. RAG retrieves information; ctx defines the project's working memory: They are complementary.
No AI involvement: ctx is designed for human-AI workflows; without an AI consumer, the files are just documentation.
Enterprise-managed context platforms: If your organization provides centralized context services, ctx may duplicate that layer.
For a deeper technical comparison with RAG, prompt management tools, and agent frameworks, see ctx and Similar Tools.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#project-size-guide","level":2,"title":"Project Size Guide","text":"","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#solo-developer-single-repo","level":3,"title":"Solo Developer, Single Repo","text":"
This is ctx's sweet spot.
You get the most value here: one person, one project, decisions and learnings accumulating over time. Setup takes 5 minutes, the .context/ directory stays small, and every session gets faster.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#small-team-one-or-two-repos","level":3,"title":"Small Team, One or Two Repos","text":"
Works well.
Context files commit to git, so the whole team shares the same decisions and conventions. Each person's AI starts with the team's decisions already loaded. Merge conflicts on .context/ files are rare and easy to resolve (they are just Markdown).
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#multiple-repos-or-larger-teams","level":3,"title":"Multiple Repos or Larger Teams","text":"
ctx operates per repository.
Each repo has its own .context/ directory with its own decisions, tasks, and learnings. This matches the way code, ownership, and history already work in git.
There is no built-in cross-repo context layer.
For organizations that need centralized, organization-wide knowledge, ctx complements a platform solution by providing durable, project-local working memory for AI sessions.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#5-minute-trial","level":2,"title":"5-Minute Trial","text":"
Zero commitment. Try it, and delete .context/ if it's not for you.
Using Claude Code?
Install the ctx plugin from the Marketplace for Claude-native hooks, skills, and automatic context loading:
Type /plugin and press Enter
Select Marketplaces → Add Marketplace
Enter ActiveMemory/ctx
Back in /plugin, select Install and choose ctx
You'll still need the ctx binary for the CLI: See Getting Started for install options.
# 1. Initialize\ncd your-project\nctx init\n\n# 2. Add one real decision from your project\nctx add decision \"Your actual architectural choice\" \\\n --context \"What prompted this decision\" \\\n --rationale \"Why you chose this approach\" \\\n --consequence \"What changes as a result\"\n\n# 3. Check what the AI will see\nctx status\n\n# 4. Start an AI session and ask: \"Do you remember?\"\n
If the AI cites your decision back to you, it's working.
Want to remove it later? One command:
rm -rf .context/\n
No dependencies to uninstall. No configuration to revert. Just files.
Ready to try it out?
Join the Community→: Open Source is better together.
Getting Started →: Full installation and setup.
ctx and Similar Tools →: Detailed comparison with other approaches.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/joining-a-project/","level":1,"title":"Joining a ctx Project","text":"
You've joined a team or inherited a project, and there's a .context/ directory in the repo. Good news: someone already set up persistent context. This page gets you oriented fast.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#what-to-read-first","level":2,"title":"What to Read First","text":"
The files in .context/ have a deliberate priority order. Read them top-down:
CONSTITUTION.md: Hard rules. Read this before you touch anything. These are inviolable constraints the team has agreed on.
TASKS.md: Current and planned work. Shows what's in progress, what's pending, and what's blocked.
CONVENTIONS.md: How the team writes code. Naming patterns, file organization, preferred idioms.
ARCHITECTURE.md: System overview. Components, boundaries, data flow.
DECISIONS.md: Why things are the way they are. Saves you from re-proposing something the team already evaluated and rejected.
LEARNINGS.md: Gotchas, tips, and hard-won lessons. The stuff that doesn't fit anywhere else but will save you hours.
See Context Files for detailed documentation of each file's structure and purpose.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#checking-context-health","level":2,"title":"Checking Context Health","text":"
Before you start working, check whether the context is current:
ctx status\n
This shows file counts, token estimates, and recent activity. If files haven't been touched in weeks, the context may be stale.
ctx drift\n
This compares context files against recent code changes and flags potential drift: decisions that no longer match the codebase, conventions that have shifted, or tasks that look outdated.
If things are stale, mention it to the team. Don't silently fix it yourself on day one.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#starting-your-first-session","level":2,"title":"Starting Your First Session","text":"
Generate a context packet to prime your AI:
ctx agent --budget 8000\n
This outputs a token-budgeted summary of the project context, ordered by priority. With Claude Code and the ctx plugin, context loads automatically via hooks. You can also use the /ctx-remember skill to get a structured readback of what the AI knows.
The readback is your verification step: if the AI can cite specific tasks and decisions, the context is working.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#adding-context","level":2,"title":"Adding Context","text":"
As you work, you'll discover things worth recording. Use the CLI:
# Record a decision you made or learned about\nctx add decision \"Use connection pooling for DB access\" \\\n --rationale \"Reduces connection overhead under load\"\n\n# Capture a gotcha you hit\nctx add learning \"Redis timeout defaults to 5s\" \\\n --context \"Hit timeouts during bulk operations\" \\\n --application \"Set explicit timeout for batch jobs\"\n\n# Add a convention you noticed the team follows\nctx add convention \"All API handlers return structured errors\"\n
You can also just tell the AI: \"Record this as a learning\" or \"Add this decision to context.\" With the ctx plugin, context-update commands handle the file writes.
See the Knowledge Capture recipe for the full workflow.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#session-etiquette","level":2,"title":"Session Etiquette","text":"
A few norms for working in a ctx-managed project:
Respect existing conventions. If CONVENTIONS.md says \"use filepath.Join,\" use filepath.Join. If you disagree, propose a change, don't silently diverge.
Don't restructure context files without asking. The file layout and section structure are shared state. Reorganizing them affects every team member and every AI session.
Mark tasks done when complete. Check the box ([x]) in place. Don't move tasks between sections or delete them.
Add context as you go. Decisions, learnings, and conventions you discover are valuable to the next person (or the next session).
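The checkbox states above are plain Markdown, so task history stays easy to audit mechanically. A minimal sketch, with a made-up sample tasks file (real projects use .context/TASKS.md):

```shell
# Sample tasks file for illustration; a real project's lives at .context/TASKS.md.
cat > TASKS.md <<'EOF'
- [ ] T1: Add rate limiting
- [x] T2: Fix login redirect
- [-] T3: Migrate to gRPC (skipped: out of scope)
EOF

# States are marked in place and never deleted, so counts stay auditable.
open=$(grep -c '^- \[ \]' TASKS.md)
finished=$(grep -c '^- \[x\]' TASKS.md)
skipped=$(grep -c '^- \[-\]' TASKS.md)
echo "open=$open done=$finished skipped=$skipped"   # prints: open=1 done=1 skipped=1
```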
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Ignoring CONSTITUTION.md. The constitution exists for a reason. If a task conflicts with a constitution rule, the task is wrong. Raise it with the team instead of working around the constraint.
Deleting tasks. Never delete a task from TASKS.md. Mark it [x] (done) or [-] (skipped with a reason). The history matters for session replay and audit.
Bypassing hooks. If the project uses ctx hooks (pre-commit nudges, context autoloading), don't disable them. They exist to keep context fresh. If a hook is noisy or broken, fix it or file a task.
Over-contributing on day one. Read first, then contribute. Adding a dozen learnings before you understand the project's norms creates noise, not signal.
Related:
Getting Started: installation and setup from scratch
Context Files: detailed file reference
Knowledge Capture: recording decisions, learnings, and conventions
Session Lifecycle: how a typical AI session flows with ctx
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/keeping-ai-honest/","level":1,"title":"Keeping AI Honest","text":"","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-problem","level":2,"title":"The Problem","text":"
AI agents confabulate. They invent history that never happened, claim familiarity with decisions that were never made, and sometimes declare a task complete when it is not. This is not malice - it is the default behavior of a system optimizing for plausible-sounding responses.
When your AI says \"we decided to use Redis for caching last week,\" can you verify that? When it says \"the auth module is complete,\" can you confirm it? Without grounded, persistent context, the answer is no. You are trusting vibes.
ctx replaces vibes with verifiable artifacts.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#grounded-memory","level":2,"title":"Grounded Memory","text":"
Every entry in ctx context files has a timestamp and structured fields. When the AI cites a decision, you can check it.
## [2026-01-28-143022] Use Event Sourcing for Audit Trail\n\n**Status**: Accepted\n\n**Context**: Compliance requires full mutation history.\n\n**Decision**: Event sourcing for the audit subsystem only.\n\n**Rationale**: Append-only log meets compliance requirements\nwithout imposing event sourcing on the entire domain model.\n
The timestamp 2026-01-28-143022 is not decoration. It is a verifiable anchor. If the AI references this decision, you can open DECISIONS.md, find the entry, and confirm it says what the AI claims. If the entry does not exist, the AI is hallucinating - and you know immediately.
This is grounded memory: claims that trace back to artifacts you control and can audit.
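A spot-check of such a claim is a one-line grep. A minimal sketch, reusing the example timestamp above (the file content is recreated here for illustration; in a real project you would only run the grep):

```shell
# Recreate the example entry for illustration.
mkdir -p .context
cat > .context/DECISIONS.md <<'EOF'
## [2026-01-28-143022] Use Event Sourcing for Audit Trail

**Status**: Accepted
EOF

# When the AI cites a decision, confirm the timestamped entry exists.
if grep -q '^## \[2026-01-28-143022\]' .context/DECISIONS.md; then
  echo "decision verified"
else
  echo "no such decision - possible hallucination"
fi
```

If the grep finds nothing, the citation has no backing artifact.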
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#constitutionmd-hard-guardrails","level":2,"title":"CONSTITUTION.md: Hard Guardrails","text":"
CONSTITUTION.md defines rules the AI must treat as inviolable. These are not suggestions or best practices - they are constraints that override task requirements.
# Constitution\n\nThese rules are INVIOLABLE. If a task requires violating these,\nthe task is wrong.\n\n* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] All public API changes require a decision record\n* [ ] Never delete context files without explicit user approval\n
The AI reads these at session start, before anything else. A well-integrated agent will refuse a task that conflicts with a constitutional rule, citing the specific rule it would violate.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-agent-playbooks-anti-hallucination-rules","level":2,"title":"The Agent Playbook's Anti-Hallucination Rules","text":"
The AGENT_PLAYBOOK.md file includes a section called \"How to Avoid Hallucinating Memory\" with five explicit rules:
Never assume. If it is not in the context files, you do not know it.
Never invent history. Do not claim \"we discussed\" something without a file reference.
Verify before referencing. Search files before citing them.
When uncertain, say so. \"I don't see a decision on this\" is always better than a fabricated one.
Trust files over intuition. If the files say PostgreSQL but your training data suggests MySQL, the files win.
These rules create a behavioral contract. The AI is not left to guess how confident it should be - it has explicit instructions to ground every claim in the context directory.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#drift-detection","level":2,"title":"Drift Detection","text":"
Context files can go stale. You rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist. Stale context is almost as dangerous as no context: the AI treats outdated information as current truth.
ctx drift detects this divergence:
ctx drift\n
It scans context files for references to files, paths, and symbols that no longer exist in the codebase. Stale references get flagged so you can update or remove them before they mislead the next session.
Regular drift checks - weekly, or after major refactors - keep your context files honest the same way tests keep your code honest.
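For intuition only, here is a rough shell approximation of one drift signal: backtick-quoted paths in context files that no longer exist on disk. This is a sketch of the idea, not how ctx drift is implemented.

```shell
# Illustration only: plant a stale reference, then scan for it.
mkdir -p .context
printf 'The parser lives in `src/old_module.go`.\n' > .context/ARCHITECTURE.md

# Extract backtick-quoted file paths from context files and
# flag any that no longer exist in the working tree.
grep -oh '`[A-Za-z0-9_./-]*\.[A-Za-z]*`' .context/*.md | tr -d '`' | sort -u |
while read -r path; do
  [ -e "$path" ] || echo "stale reference: $path"
done
```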
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-verification-loop","level":2,"title":"The Verification Loop","text":"
The /ctx-commit skill includes a built-in verification step: before staging, it maps claims to evidence and runs self-audit questions to surface gaps. This catches inconsistencies at the point where they matter most — right before code is committed.
This closes the loop. You write context. The AI reads context. The verification step confirms that context still matches reality. When it does not, you fix it - and the next session starts from truth, not from drift.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#trust-through-structure","level":2,"title":"Trust Through Structure","text":"
The common thread across all of these mechanisms is structure over prose. Timestamps make claims verifiable. Constitutional rules make boundaries explicit. Drift detection makes staleness visible. The playbook makes behavioral expectations concrete.
You do not need to trust the AI. You need to trust the system - and verify when it matters.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#further-reading","level":2,"title":"Further Reading","text":"
Detecting and Fixing Drift: the full workflow for keeping context files accurate
Invariants: the properties that must hold for any valid ctx implementation
Agent Security: threat model and mitigations for AI agents operating with persistent context
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/prompting-guide/","level":1,"title":"Prompting Guide","text":"
New to ctx?
This guide references context files like TASKS.md, DECISIONS.md, and LEARNINGS.md:
These are plain Markdown files that ctx maintains in your project's .context/ directory.
If terms like \"context packet\" or \"session ceremony\" are unfamiliar,
start with the ctx Manifesto for the why,
About for the big picture,
then Getting Started to set up your first project.
This guide is about crafting effective prompts for working with AI assistants in ctx-enabled projects, but the guidelines given here apply to other AI systems, too.
The right prompt triggers the right behavior.
This guide documents prompts that reliably produce good results.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#tldr","level":2,"title":"TL;DR","text":"Goal Prompt Load context \"Do you remember?\" Resume work \"What's the current state?\" What's next /ctx-next Debug \"Why doesn't X work?\" Validate \"Is this consistent with our decisions?\" Impact analysis \"What would break if we...\" Reflect /ctx-reflect Wrap up /ctx-wrap-up Persist \"Add this as a learning\" Explore \"How does X work in this codebase?\" Sanity check \"Is this the right approach?\" Completeness \"What am I missing?\" One more thing \"What's the single smartest addition?\" Set tone \"Push back if my assumptions are wrong.\" Constrain scope \"Only change files in X. Nothing else.\" Course correct \"Stop. That's not what I meant.\" Check health \"Run ctx drift\" Commit /ctx-commit","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#session-start","level":2,"title":"Session Start","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#do-you-remember","level":3,"title":"\"Do you remember?\"","text":"
Triggers the AI to silently read TASKS.md, DECISIONS.md, LEARNINGS.md, and check recent history via ctx journal before responding with a structured readback:
Last session: most recent session topic and date
Active work: pending or in-progress tasks
Recent context: 1-2 recent decisions or learnings
Next step: offer to continue or ask what to focus on
Use this at the start of every important session.
Do you remember what we were working on?\n
This question implies prior context exists. The AI checks files rather than admitting ignorance. The expected response cites specific context (session names, task counts, decisions), not vague summaries.
If the AI instead narrates its discovery process (\"Let me check if there are files...\"), it has not loaded CLAUDE.md or AGENT_PLAYBOOK.md properly.
For a detailed case study on making agents actually follow this protocol (including the failure modes, the timing problem, and the hook design that solved it) see The Dog Ate My Homework.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#whats-the-current-state","level":3,"title":"\"What's the current state?\"","text":"
Prompts reading of TASKS.md, recent sessions, and status overview.
Use this when resuming work after a break.
Variants:
\"Where did we leave off?\"
\"What's in progress?\"
\"Show me the open tasks.\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#during-work","level":2,"title":"During Work","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#why-doesnt-x-work","level":3,"title":"\"Why doesn't X work?\"","text":"
This triggers root cause analysis rather than surface-level fixes.
Use this when something fails unexpectedly.
Framing as \"why\" encourages investigation before action. The AI will trace through code, check configurations, and identify the actual cause.
Real Example
\"Why can't I run /ctx-reflect?\" led to discovering missing permissions in settings.local.json bootstrapping.
This was a fix that benefited all users of ctx.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#is-this-consistent-with-our-decisions","level":3,"title":"\"Is this consistent with our decisions?\"","text":"
This prompts checking DECISIONS.md before implementing.
Use this before making architectural choices.
Variants:
\"Check if we've decided on this before\"
\"Does this align with our conventions?\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-would-break-if-we","level":3,"title":"\"What would break if we...\"","text":"
This triggers defensive thinking and impact analysis.
Use this before making significant changes.
What would break if we change the Settings struct?\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#before-you-start-read-x","level":3,"title":"\"Before you start, read X\"","text":"
This ensures specific context is loaded before work begins.
Use this when you know the relevant context exists in a specific file.
Before you start, check ctx journal source for the auth discussion session\n
When the AI misbehaves, match the symptom to the recovery prompt:
Symptom Recovery prompt Hand-waves (\"should work now\") \"Show evidence: file/line refs, command output, or test name.\" Creates unnecessary files \"No new files. Modify the existing implementation.\" Expands scope unprompted \"Stop after the smallest working change. Ask before expanding scope.\" Narrates instead of acting \"Skip the explanation. Make the change and show the diff.\" Repeats a failed approach \"That didn't work last time. Try a different approach.\" Claims completion without proof \"Run the test. Show me the output.\"
These are recovery handles, not rules to paste into CLAUDE.md.
Use them in the moment when you see the behavior.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#reflection-and-persistence","level":2,"title":"Reflection and Persistence","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-did-we-learn","level":3,"title":"\"What did we learn?\"","text":"
This prompts reflection on the session and often triggers adding learnings to LEARNINGS.md.
Use this after completing a task or debugging session.
This is an explicit reflection prompt. The AI will summarize insights and often offer to persist them.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#add-this-as-a-learningdecision","level":3,"title":"\"Add this as a learning/decision\"","text":"
This is an explicit persistence request.
Use this when you have discovered something worth remembering.
Add this as a learning: \"JSON marshal escapes angle brackets by default\"\n\n# or simply:\nAdd this as a learning.\n# and let the AI autonomously infer and summarize.\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#save-context-before-we-end","level":3,"title":"\"Save context before we end\"","text":"
This triggers context persistence before the session closes.
Use it at the end of the session or before switching topics.
Variants:
\"Let's persist what we did\"
\"Update the context files\"
/ctx-wrap-up: the recommended end-of-session ceremony (see Session Ceremonies)
/ctx-reflect: mid-session reflection checkpoint
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#exploration-and-research","level":2,"title":"Exploration and Research","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#explore-the-codebase-for-x","level":3,"title":"\"Explore the codebase for X\"","text":"
This triggers thorough codebase search rather than guessing.
Use this when you need to understand how something works.
This works because \"Explore\" signals that investigation is needed, not immediate action.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#how-does-x-work-in-this-codebase","level":3,"title":"\"How does X work in this codebase?\"","text":"
This prompts reading actual code rather than explaining general concepts.
Use this to understand the existing implementation.
How does session saving work in this codebase?\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#find-all-places-where-x","level":3,"title":"\"Find all places where X\"","text":"
This triggers a comprehensive search across the codebase.
Use this before refactoring, or to understand the impact of a change.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#meta-and-process","level":2,"title":"Meta and Process","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-should-we-document-from-this","level":3,"title":"\"What should we document from this?\"","text":"
This prompts identifying learnings, decisions, and conventions worth persisting.
Use this after complex discussions or implementations.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#is-this-the-right-approach","level":3,"title":"\"Is this the right approach?\"","text":"
This invites the AI to challenge the current direction.
Use this when you want a sanity check.
This works because it allows AI to disagree.
AIs often default to agreeing; this prompt signals you want an honest assessment.
Stronger variant: \"Push back if my assumptions are wrong.\" This sets the tone for the entire session: The AI will flag questionable choices proactively instead of waiting to be asked.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-am-i-missing","level":3,"title":"\"What am I missing?\"","text":"
This prompts thinking about edge cases, overlooked requirements, or unconsidered approaches.
Use this before finalizing a design or implementation.
Forward-looking variant: \"What's the single smartest addition you could make to this at this point?\" Use this after you think you're done: It surfaces improvements you wouldn't have thought to ask for. The constraint to one thing prevents feature sprawl.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#cli-commands-as-prompts","level":2,"title":"CLI Commands as Prompts","text":"
Asking the AI to run ctx commands is itself a prompt. These load context or trigger specific behaviors:
Command What it does \"Run ctx status\" Shows context summary, file presence, staleness \"Run ctx agent\" Loads token-budgeted context packet \"Run ctx drift\" Detects dead paths, stale files, missing context","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#ctx-skills","level":3,"title":"ctx Skills","text":"
The SKILL.md Standard
Skills are formalized prompts stored as SKILL.md files.
The /slash-command syntax below is Claude Code specific.
Other agents can use the same skill files, but invocation may differ.
Use ctx skills by name:
Skill When to use /ctx-status Quick context summary /ctx-agent Load full context packet /ctx-remember Recall project context and structured readback /ctx-wrap-up End-of-session context persistence /ctx-history Browse session history for past discussions /ctx-reflect Structured reflection checkpoint /ctx-next Suggest what to work on next /ctx-commit Commit with context persistence /ctx-drift Detect and fix context drift /ctx-implement Execute a plan step-by-step with verification /ctx-loop Generate autonomous loop script /ctx-pad Manage encrypted scratchpad /ctx-archive Archive completed tasks /check-links Audit docs for dead links
Ceremony vs. Workflow Skills
Most skills work conversationally: \"what should we work on?\" triggers /ctx-next, \"save that as a learning\" triggers /ctx-add-learning. Natural language is the recommended approach.
Two skills are the exception: /ctx-remember and /ctx-wrap-up are ceremony skills for session boundaries. Invoke them as explicit slash commands; conversational triggers risk partial execution. See Session Ceremonies.
Skills combine a prompt, tool permissions, and domain knowledge into a single invocation.
Skills Beyond Claude Code
The /slash-command syntax above is Claude Code native, but the underlying SKILL.md files are a standard markdown format that any agent can consume. If you use a different coding agent, consult its documentation for how to load skill files as prompt templates.
Based on our ctx development experience (i.e., \"sipping our own champagne\") so far, here are some prompts that tend to produce poor results:
Prompt Problem Better Alternative \"Fix this\" Too vague, may patch symptoms \"Why is this failing?\" \"Make it work\" Encourages quick hacks \"What's the right way to solve this?\" \"Just do it\" Skips planning \"Plan this, then implement\" \"You should remember\" Confrontational \"Do you remember?\" \"Obviously...\" Discourages questions State the requirement directly \"Idiomatic X\" Triggers language priors \"Follow project conventions\" \"Implement everything\" No phasing, sprawl risk Break into tasks, implement one at a time \"You should know this\" Assumes context is loaded \"Before you start, read X\"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#reliability-checklist","level":2,"title":"Reliability Checklist","text":"
Before sending a non-trivial prompt, check these four elements. This is the guide's DNA in one screenful.
Goal in one sentence: What does \"done\" look like?
Files to read: What existing code or context should the AI review before acting?
Verification command: How will you prove it worked? (test name, CLI command, expected output)
Scope boundary: What should the AI not touch?
A prompt that covers all four is almost always good enough.
A prompt missing #3 is how you get \"should work now\" without evidence.
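A hypothetical prompt that covers all four elements (the paths and test command are illustrative, echoing the bug-fix example later in this guide):

```text
Goal: search returns results for hyphenated queries like "foo-bar".
Read first: src/search/ and its existing tests.
Verify: go test ./internal/search/... passes, including a new test for "foo-bar".
Scope: only touch src/search/; leave the API layer alone.
```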
A prompting guide earns its trust by being honest about risk.
The four rules below don't change with model versions, agent frameworks, or project size.
Build them into your workflow once and stop thinking about them.
Tool-using agents can read files, run commands, and modify your codebase. That power makes them useful. It also creates a trust boundary you should be aware of.
These invariants apply regardless of which agent or model you use.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#treat-the-repository-text-as-untrusted-input","level":3,"title":"Treat the Repository Text as \"Untrusted Input\"","text":"
Issue descriptions, PR comments, commit messages, documentation, and even code comments can contain text that looks like instructions. An agent that reads a GitHub issue and then runs a command found inside it is executing untrusted input.
The rule: Before running any command the agent found in repo text (issues, docs, comments), restate the command explicitly and confirm it does what you expect. Don't let the agent copy-paste from untrusted sources into a shell.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#ask-before-destructive-operations","level":3,"title":"Ask Before Destructive Operations","text":"
git push --force, rm -rf, DROP TABLE, docker system prune: these are irreversible or hard to reverse. A good agent should pause before running them, but don't rely on that.
The rule: For any operation that deletes data, overwrites history, or affects shared infrastructure, require explicit confirmation. If the agent runs something destructive without asking, that's a course-correction moment: \"Stop. Never run destructive commands without asking first.\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#scope-the-blast-radius","level":3,"title":"Scope the Blast Radius","text":"
An agent told to \"fix the tests\" might modify test fixtures, change assertions, or delete tests that inconveniently fail. An agent told to \"deploy\" might push to production. Broad mandates create broad risk.
The rule: Constrain scope before starting work. The Reliability Checklist's scope boundary (#4) is your primary safety lever. When in doubt, err on the side of a tighter boundary.
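A hedged example of what a scope boundary can look like as the last lines of a prompt (the paths are illustrative):

```text
Scope: only modify files under src/search/. Do not touch test fixtures,
CI config, or anything outside that directory. If the fix requires a change
elsewhere, stop and report before proceeding.
```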
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#secrets-never-belong-in-context","level":3,"title":"Secrets Never Belong in Context","text":"
LEARNINGS.md, DECISIONS.md, and session transcripts are plain-text files that may be committed to version control.
Don't persist API keys, passwords, tokens, or credentials in context files.
The rule: If the agent encounters a secret during work, it should use it transiently (an environment variable, a reference to the secret rather than its literal value, etc.) and never write it to a context file.
Any Secret Seen IS Exposed
If you see a secret in a context file, remove it immediately and rotate the credential.
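A minimal sketch of transient handling; the variable name and note are illustrative:

```shell
# Keep the value only in the environment; persist the variable's NAME, never its value.
API_TOKEN="s3cr3t-value"                             # hypothetical secret; lives in env only
note='Deploy reads $API_TOKEN from the environment'  # single quotes keep the name literal
echo "$note"                                         # this line is safe to write to a context file
```

The persisted note tells a future session where the secret lives without ever exposing it.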
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#explore-plan-implement","level":2,"title":"Explore → Plan → Implement","text":"
For non-trivial work, name the phase you want:
Explore src/auth and summarize the current flow.\nThen propose a plan. After I approve, implement with tests.\n
This prevents the AI from jumping straight to code.
The three phases map to different modes of thinking:
Explore: read, search, understand: no changes
Plan: propose approach, trade-offs, scope: no changes
Implement: write code, run tests, verify: changes
Small fixes skip straight to implement. Complex or uncertain work benefits from all three.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#prompts-by-task-type","level":2,"title":"Prompts by Task Type","text":"
Different tasks need different prompt structures. The pattern: symptom + location + verification.
Users report search returns empty results for queries with hyphens.\nReproduce in src/search/. Write a failing test for \"foo-bar\",\nfix the root cause, run: go test ./internal/search/...\n
Inspect src/auth/ and list duplication hotspots.\nPropose a refactor plan scoped to one module.\nAfter approval, remove duplication without changing behavior.\nAdd a test if coverage is missing. Run: make audit\n
Update docs/cli-reference.md to reflect the new --format flag.\nConfirm the flag exists in the code and the example works.\n
Notice each prompt includes what to verify and how. Without that, you get a \"should work now\" instead of evidence.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#writing-tasks-as-prompts","level":2,"title":"Writing Tasks as Prompts","text":"
Tasks in TASKS.md are indirect prompts to the AI. How you write them shapes how the AI approaches the work.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#state-the-motivation-not-just-the-goal","level":3,"title":"State the Motivation, Not Just the Goal","text":"
Tell the AI why you are building something, not just what.
Bad: \"Build a calendar view.\"
Good: \"Build a calendar view. The motivation is that all notes and tasks we build later should be viewable here.\"
The second version lets the AI anticipate downstream requirements:
It will design the calendar's data model to be compatible with future features, without you having to spell out every integration point. Motivation turns a one-off task into a directional task.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#state-the-deliverable-not-just-steps","level":3,"title":"State the Deliverable, Not Just Steps","text":"
For complex tasks, add explicit \"done when\" criteria:
- [ ] T2.0: Authentication system\n **Done when**:\n - [ ] User can register with email\n - [ ] User can log in and get a token\n - [ ] Protected routes reject unauthenticated requests\n
This prevents a premature \"task complete\" when the implementation details are done but the feature doesn't actually work.
Completing all subtasks does not mean the parent task is complete.
The parent task describes what the user gets.
Subtasks describe how to build it.
Always re-read the parent task description before marking it complete. Verify the stated deliverable exists and works.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#why-do-these-approaches-work","level":2,"title":"Why Do These Approaches Work?","text":"
The patterns in this guide aren't invented here: They are practitioner translations of well-established, peer-reviewed research, most of which predate the current AI (hype) wave.
The underlying ideas come from decades of work in machine learning, cognitive science, and numerical optimization. For a concrete case study showing how these principles play out when an agent decides whether to follow instructions (attention competition, optimization toward least-resistance paths, and observable compliance as a design goal) see The Dog Ate My Homework.
Phased work (\"Explore → Plan → Implement\") applies chain-of-thought reasoning: Decomposing a problem into sequential steps before acting. Forcing intermediate reasoning steps measurably improves output quality in language models, just as it does in human problem-solving. Wei et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022).
Root-cause prompts (\"Why doesn't X work?\") use step-back abstraction: Retreating to a higher-level question before diving into specifics. This mirrors how experienced engineers debug: they ask \"what should happen?\" before asking \"what went wrong?\" Zheng et al., Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models (2023).
Exploring alternatives (\"Propose 2-3 approaches\") leverages self-consistency: Generating multiple independent reasoning paths and selecting the most coherent result. The idea traces back to ensemble methods in ML: A committee of diverse solutions outperforms any single one. Wang et al., Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022).
Impact analysis (\"What would break if we...\") is a form of tree-structured exploration: Branching into multiple consequence paths before committing. This is the same principle behind game-tree search (minimax, MCTS) that has powered decision-making systems since the 1950s. Yao et al., Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023).
Motivation prompting (\"Build X because Y\") works through goal conditioning: Providing the objective function alongside the task. In optimization terms, you are giving the gradient direction, not just the loss. The model can make locally coherent decisions that serve the global objective because it knows what \"better\" means.
Scope constraints (\"Only change files in X\") apply constrained optimization: Bounding the search space to prevent divergence. This is the same principle behind regularization in ML: Without boundaries, powerful optimizers find solutions that technically satisfy the objective but are practically useless.
CLI commands as prompts (\"Run ctx status\") interleave reasoning with acting: The model thinks, acts on external tools, observes results, then thinks again. Grounding reasoning in real tool output reduces hallucination because the model can't ignore evidence it just retrieved. Yao et al., ReAct: Synergizing Reasoning and Acting in Language Models (2022).
Task decomposition (\"Prompts by Task Type\") applies least-to-most prompting: Breaking a complex problem into subproblems and solving them sequentially, each building on the last. This is the research version of \"plan, then implement one slice.\" Zhou et al., Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2022).
Explicit planning (\"Explore → Plan → Implement\") is directly supported by plan-and-solve prompting, which addresses missing-step failures in zero-shot reasoning by extracting a plan before executing. The phased structure prevents the model from jumping to code before understanding the problem. Wang et al., Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models (2023).
Session reflection (\"What did we learn?\", /ctx-reflect) is a form of verbal reinforcement learning: Improving future performance by persisting linguistic feedback as memory rather than updating weights. This is exactly what LEARNINGS.md and DECISIONS.md provide: a durable feedback signal across sessions. Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning (2023).
These aren't prompting \"hacks\" that you will find in the \"1000 AI Prompts for the Curious\" listicles: They are applications of foundational principles:
Decomposition,
Abstraction,
Ensemble Reasoning,
Search,
and Constrained Optimization.
They work because language models are, at their core, optimization systems navigating probabilistic landscapes.
The Attention Budget: Why your AI forgets what you just told it, and how token budgets shape context strategy
The Dog Ate My Homework: A case study in making agents follow instructions: attention timing, delegation decay, and observable compliance as a design goal
Found a prompt that works well? Open an issue or PR with:
The prompt text;
What behavior it triggers;
When to use it;
Why it works (optional but helpful).
Dive Deeper:
Recipes: targeted how-to guides for specific tasks
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/repeated-mistakes/","level":1,"title":"My AI Keeps Making the Same Mistakes","text":"","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#the-problem","level":2,"title":"The Problem","text":"
You found a bug last Tuesday. You debugged it, understood the root cause, and moved on. Today, a new session hits the exact same bug. The AI rediscovers it from scratch, burning twenty minutes on something you already solved.
Worse: you spent an hour last week evaluating two database migration strategies, picked one, documented why in a comment somewhere, and now the AI is cheerfully suggesting the approach you rejected. Again.
This is not a model problem. It is a memory problem. Without persistent context, every session starts with amnesia.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#how-ctx-stops-the-loop","level":2,"title":"How ctx Stops the Loop","text":"
ctx gives your AI three files that directly prevent repeated mistakes, each targeting a different failure mode.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#decisionsmd-stop-relitigating-settled-choices","level":3,"title":"DECISIONS.md: Stop Relitigating Settled Choices","text":"
When you make an architectural decision, record it with rationale and rejected alternatives. The AI reads this at session start and treats it as settled.
## [2026-02-12] Use JWT for Authentication\n\n**Status**: Accepted\n\n**Context**: Need stateless auth for the API layer.\n\n**Decision**: JWT with short-lived access tokens and refresh rotation.\n\n**Rationale**: Stateless, scales horizontally, team has prior experience.\n\n**Alternatives Considered**:\n- Session-based auth: Rejected. Requires sticky sessions or shared store.\n- API keys only: Rejected. No user identity, no expiry rotation.\n
Next session, when the AI considers auth, it reads this entry and builds on the decision instead of re-debating it. If someone asks \"why not sessions?\", the rationale is already there.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#learningsmd-capture-gotchas-once","level":3,"title":"LEARNINGS.md: Capture Gotchas Once","text":"
Learnings are the bugs, quirks, and non-obvious behaviors that cost you time the first time around. Write them down so they cost you zero time the second time.
## Build\n\n### CGO Required for SQLite on Alpine\n\n**Discovered**: 2026-01-20\n\n**Context**: Docker build failed silently with \"no such table\" at runtime.\n\n**Lesson**: The go-sqlite3 driver requires CGO_ENABLED=1 and gcc\ninstalled in the build stage. Alpine needs apk add build-base.\n\n**Application**: Always use the golang:alpine image with build-base\nfor SQLite builds. Never set CGO_ENABLED=0.\n
Without this entry, the next session that touches the Dockerfile will hit the same wall. With it, the AI knows before it starts.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#constitutionmd-draw-hard-lines","level":3,"title":"CONSTITUTION.md: Draw Hard Lines","text":"
Some mistakes are not about forgetting: they are about boundaries the AI should never cross. CONSTITUTION.md sets inviolable rules.
* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] Never disable security linters without a documented exception\n* [ ] All database migrations must be reversible\n
The AI reads these as absolute constraints. It does not weigh them against convenience. It refuses tasks that would violate them.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#the-accumulation-effect","level":2,"title":"The Accumulation Effect","text":"
Each of these files grows over time. Session one captures two decisions. Session five adds a tricky learning about timezone handling. Session twelve records a convention about error message formatting.
By session twenty, your AI has a knowledge base that no single person carries in their head. New team members, human or AI, inherit it instantly.
The key insight: you are not just coding. You are building a knowledge layer that makes every future session faster.
ctx files version with your code in git. They survive branch switches, team changes, and model upgrades. The context outlives any single session.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#getting-started","level":2,"title":"Getting Started","text":"
Capture your first decision or learning right now:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a relational database for the project\" \\\n --rationale \"Team expertise, JSONB support, mature ecosystem\"\n\nctx add learning \"Vitest mock hoisting\" \\\n --context \"Tests failing intermittently\" \\\n --lesson \"vi.mock() must be at file top level\" \\\n --application \"Use vi.doMock() for dynamic mocks\"\n
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#further-reading","level":2,"title":"Further Reading","text":"
Knowledge Capture: the full workflow for persisting decisions, learnings, and conventions
Context Files Reference: structure and format for every file in .context/
About ctx: the bigger picture, and why persistent context changes how you work with AI
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"operations/","level":1,"title":"Operations","text":"
Guides for installing, upgrading, integrating, and running ctx.
Run an unattended AI agent that works through tasks overnight, with ctx providing persistent memory between iterations.
","path":["Operations"],"tags":[]},{"location":"operations/autonomous-loop/","level":1,"title":"Autonomous Loops","text":"","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#autonomous-ai-development","level":2,"title":"Autonomous AI Development","text":"
Iterate until done.
An autonomous loop is an iterative AI development workflow where an agent works on tasks until completion, without constant human intervention.
ctx provides the memory that makes this possible:
ctx provides the memory: persistent context that survives across iterations
The loop provides the automation: continuous execution until done
Together, they enable fully autonomous AI development where the agent remembers everything across iterations.
Origin
This pattern is inspired by Geoffrey Huntley's Ralph Wiggum technique.
We use generic terminology here so the concepts remain clear regardless of trends.
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#how-it-works","level":2,"title":"How It Works","text":"
graph TD\n A[Start Loop] --> B[Load .context/loop.md]\n B --> C[AI reads .context/]\n C --> D[AI picks task from TASKS.md]\n D --> E[AI completes task]\n E --> F[AI updates context files]\n F --> G[AI commits changes]\n G --> H{Check signals}\n H -->|SYSTEM_CONVERGED| I[Done - all tasks complete]\n H -->|SYSTEM_BLOCKED| J[Done - needs human input]\n H -->|Continue| B
Loop reads .context/loop.md and invokes AI
AI loads context from .context/
AI picks one task and completes it
AI updates context files (mark task done, add learnings)
AI commits changes
Loop checks for completion signals
Repeat until converged or blocked
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#quick-start-shell-while-loop-recommended","level":2,"title":"Quick Start: Shell While Loop (Recommended)","text":"
The best way to run an autonomous loop is a plain shell script that invokes your AI tool in a fresh process on each iteration. This is \"pure Ralph\":
The only state that carries between iterations is what lives in .context/ and the git history. No context window bleed, no accumulated tokens, no hidden state.
Create a loop.sh:
#!/bin/bash\n# loop.sh: an autonomous iteration loop\n\nPROMPT_FILE=\"${1:-.context/loop.md}\"\nMAX_ITERATIONS=\"${2:-10}\"\nOUTPUT_FILE=\"/tmp/loop_output.txt\"\n\nfor i in $(seq 1 $MAX_ITERATIONS); do\n echo \"=== Iteration $i ===\"\n\n # Invoke AI with prompt\n cat \"$PROMPT_FILE\" | claude --print > \"$OUTPUT_FILE\" 2>&1\n\n # Display output\n cat \"$OUTPUT_FILE\"\n\n # Check for completion signals\n if grep -q \"SYSTEM_CONVERGED\" \"$OUTPUT_FILE\"; then\n echo \"Loop complete: All tasks done\"\n break\n fi\n\n if grep -q \"SYSTEM_BLOCKED\" \"$OUTPUT_FILE\"; then\n echo \"Loop blocked: Needs human input\"\n break\n fi\n\n sleep 2\ndone\n
Make it executable and run:
chmod +x loop.sh\n./loop.sh\n
You can also generate this script with ctx loop (see CLI Reference).
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#why-do-we-use-a-shell-loop","level":3,"title":"Why Do We Use a Shell Loop?","text":"
Each iteration starts a fresh AI process with zero context window history. The agent knows only what it reads from .context/ files: Exactly the information you chose to persist.
This is the core loop principle: memory is explicit, not accidental.
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#alternative-claude-codes-built-in-loop","level":2,"title":"Alternative: Claude Code's Built-in Loop","text":"
Claude Code's built-in /loop command is convenient for quick iterations, but be aware of important caveats:
This Loop Is not Pure
Claude Code's /loop runs all iterations within the same session. This means:
State leaks between iterations: The context window accumulates output from every previous iteration. The agent \"remembers\" things it saw earlier (even if they were never persisted to .context/).
Token budget degrades: Each iteration adds to the context window, leaving less room for actual work in later iterations.
Not ergonomic for long runs: Users report that the built-in loop is less predictable for 10+ iteration runs compared to a shell loop.
For short explorations (2-5 iterations) or interactive use, /loop works fine. For overnight unattended runs or anything where iteration independence matters, use the shell while loop instead.
The prompt file instructs the AI on how to work autonomously. Here's a template:
# Autonomous Development Prompt\n\nYou are working on this project autonomously. Follow these steps:\n\n## 1. Load Context\n\nRead these files in order:\n\n1. `.context/CONSTITUTION.md`: NEVER violate these rules\n2. `.context/TASKS.md`: Find work to do\n3. `.context/CONVENTIONS.md`: Follow these patterns\n4. `.context/DECISIONS.md`: Understand past choices\n\n## 2. Pick One Task\n\nFrom `.context/TASKS.md`, select ONE task that is:\n\n- Not blocked\n- Highest priority available\n- Within your capabilities\n\n## 3. Complete the Task\n\n- Write code following conventions\n- Run tests if applicable\n- Keep changes focused and minimal\n\n## 4. Update Context\n\nAfter completing work:\n\n- Mark task complete in `TASKS.md`\n- Add any learnings to `LEARNINGS.md`\n- Add any decisions to `DECISIONS.md`\n\n## 5. Commit Changes\n\nCreate a focused commit with clear message.\n\n## 6. Signal Status\n\nEnd your response with exactly ONE of:\n\n- `SYSTEM_CONVERGED`: All tasks in TASKS.md are complete\n- `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n- (no signal): More work remains, continue to next iteration\n\n## Rules\n\n- ONE task per iteration\n- NEVER skip tests\n- NEVER violate CONSTITUTION.md\n- Commit after each task\n
Signal Meaning When to Use SYSTEM_CONVERGED All tasks complete No pending tasks in TASKS.md SYSTEM_BLOCKED Cannot proceed Needs clarification, access, or decision BOOTSTRAP_COMPLETE Initial setup done Project scaffolding finished","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#example-usage","level":3,"title":"Example Usage","text":"
converged state
I've completed all tasks in TASKS.md:\n- [x] Set up project structure\n- [x] Implement core API\n- [x] Add authentication\n- [x] Write tests\n\nNo pending tasks remain.\n\nSYSTEM_CONVERGED\n
blocked state
I cannot proceed with the \"Deploy to production\" task because:\n- Missing AWS credentials\n- Need confirmation on region selection\n\nPlease provide credentials and confirm deployment region.\n\nSYSTEM_BLOCKED\n
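The grep checks in loop.sh can be factored into one helper that maps the agent's output to a loop decision. A sketch, with a function name of our choosing:

```shell
# Map an iteration's output file to one of three states: converged, blocked, or continue.
check_signal() {
  local output_file="$1"
  if grep -q "SYSTEM_CONVERGED" "$output_file"; then
    echo "converged"
  elif grep -q "SYSTEM_BLOCKED" "$output_file"; then
    echo "blocked"
  else
    echo "continue"
  fi
}
```

In loop.sh the two grep branches then collapse into a single case statement on the helper's output.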
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#why-ctx-and-loops-work-well-together","level":2,"title":"Why ctx and Loops Work Well Together","text":"Without ctx With ctx Each iteration starts fresh Each iteration has full history Decisions get re-made Decisions persist in DECISIONS.md Learnings are lost Learnings accumulate in LEARNINGS.md Tasks can be forgotten Tasks tracked in TASKS.md","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#automatic-context-updates","level":3,"title":"Automatic Context Updates","text":"
During the loop, the AI should update context files:
End EVERY response with one of:\n- SYSTEM_CONVERGED (if all tasks done)\n- SYSTEM_BLOCKED (if stuck)\n
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#context-not-persisting","level":3,"title":"Context Not Persisting","text":"
Cause: AI not updating context files
Fix: Add explicit instructions to .context/loop.md:
After completing a task, you MUST:\n1. Run: ctx task complete \"<task>\"\n2. Add learnings: ctx add learning \"...\"\n
Cause: Task not marked complete before next iteration
Fix: Ensure commit happens after context update:
Order of operations:\n1. Complete coding work\n2. Update context files (*`ctx task complete`, `ctx add`*)\n3. Commit **ALL** changes including `.context/`\n4. Then signal status\n
# From the ctx repository\nclaude /plugin install ./internal/assets/claude\n\n# Or from the marketplace\nclaude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
Ensure the Plugin Is Enabled
Installing a plugin registers it, but local installs may not auto-enable it globally. Verify ~/.claude/settings.json contains:
Without this, the plugin's hooks and skills won't appear in other projects. Running ctx init auto-enables the plugin; use --no-plugin-enable to skip this step.
This gives you:
Component Purpose .context/ All context files CLAUDE.md Bootstrap instructions Plugin hooks Lifecycle automation Plugin skills Agent Skills","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#how-it-works","level":3,"title":"How It Works","text":"
graph TD\n A[Session Start] --> B[Claude reads CLAUDE.md]\n B --> C[PreToolUse hook runs]\n C --> D[ctx agent loads context]\n D --> E[Work happens]\n E --> F[Session End]
Session start: Claude reads CLAUDE.md, which tells it to check .context/
First tool use: PreToolUse hook runs ctx agent and emits the context packet (subsequent invocations within the cooldown window are silent)
Next session: Claude reads context files and continues with context
The ctx plugin provides lifecycle hooks implemented as Go subcommands (ctx system *):
Hook Event Purpose ctx system context-load-gate PreToolUse (.*) Auto-inject context on first tool use ctx system block-non-path-ctx PreToolUse (Bash) Block ./ctx or go run: force $PATH install ctx system qa-reminder PreToolUse (Bash) Remind agent to lint/test before committing ctx system specs-nudge PreToolUse (EnterPlanMode) Nudge agent to use project specs when planning ctx system check-context-size UserPromptSubmit Nudge context assessment as sessions grow ctx system check-ceremonies UserPromptSubmit Nudge /ctx-remember and /ctx-wrap-up adoption ctx system check-persistence UserPromptSubmit Remind to persist learnings/decisions ctx system check-journal UserPromptSubmit Remind to export/enrich journal entries ctx system check-reminders UserPromptSubmit Relay pending reminders at session start ctx system check-version UserPromptSubmit Warn when binary/plugin versions diverge ctx system check-resources UserPromptSubmit Warn when memory/swap/disk/load hit DANGER level ctx system check-knowledge UserPromptSubmit Nudge when knowledge files grow large ctx system check-map-staleness UserPromptSubmit Nudge when ARCHITECTURE.md is stale ctx system heartbeat UserPromptSubmit Session-alive signal with prompt count metadata ctx system post-commit PostToolUse (Bash) Nudge context capture and QA after git commits
A catch-all PreToolUse hook also runs ctx agent on every tool use (with cooldown) to autoload context.
The --session $PPID flag isolates the cooldown per session: $PPID resolves to the Claude Code process PID, so concurrent sessions don't interfere. The default cooldown is 10 minutes; use --cooldown 0 to disable it.
When you develop ctx locally (adding skills, hooks, or changing plugin behavior), Claude Code caches the plugin by version. You must bump the version in both plugin.json and marketplace.json and update the marketplace for changes to take effect:
Start a new Claude Code session: skill changes aren't reflected in existing sessions.
Both Version Files Must Match
If you only bump plugin.json but not marketplace.json (or vice versa), Claude Code may not detect the update. Always bump both together.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#troubleshooting","level":3,"title":"Troubleshooting","text":"Issue Solution Context not loading Check ctx is in PATH: which ctx Hook errors Verify plugin is installed: claude /plugin list New skill not visible Bump version in both plugin.json files, update marketplace","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#manual-context-load","level":3,"title":"Manual Context Load","text":"
If hooks aren't working, manually load context:
# Get context packet\nctx agent --budget 4000\n\n# Or paste into conversation\ncat .context/TASKS.md\n
The ctx plugin ships Agent Skills following the agentskills.io specification.
These are invoked in Claude Code with /skill-name.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#session-lifecycle-skills","level":4,"title":"Session Lifecycle Skills","text":"Skill Description /ctx-remember Recall project context at session start (ceremony) /ctx-wrap-up End-of-session context persistence (ceremony) /ctx-status Show context summary (tasks, decisions, learnings) /ctx-agent Get AI-optimized context packet /ctx-next Suggest 1-3 concrete next actions from context /ctx-commit Commit with integrated context capture /ctx-reflect Review session and suggest what to persist /ctx-remind Manage session-scoped reminders /ctx-pause Pause context hooks for this session /ctx-resume Resume context hooks after a pause","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#context-persistence-skills","level":4,"title":"Context Persistence Skills","text":"Skill Description /ctx-add-task Add a task to TASKS.md /ctx-add-learning Add a learning to LEARNINGS.md /ctx-add-decision Add a decision with context/rationale/consequence /ctx-add-convention Add a coding convention to CONVENTIONS.md /ctx-archive Archive completed tasks","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#scratchpad-skills","level":4,"title":"Scratchpad Skills","text":"Skill Description /ctx-pad Manage encrypted scratchpad entries","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#session-history-skills","level":4,"title":"Session History Skills","text":"Skill Description /ctx-history Browse AI session history /ctx-journal-enrich Enrich a journal entry with frontmatter/tags /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#blogging-skills","level":4,"title":"Blogging Skills","text":"
Blogging is a Better Way of Creating Release Notes
The blogging workflow can also double as generating release notes:
AI reads your git commit history and creates a narrative, which is essentially what a release note is.
Skill Description /ctx-blog Generate blog post from recent activity /ctx-blog-changelog Generate blog post from commit range with theme","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#auditing-health-skills","level":4,"title":"Auditing & Health Skills","text":"Skill Description /ctx-doctor Troubleshoot ctx behavior with structural health checks /ctx-drift Detect and fix context drift (structural + semantic) /ctx-consolidate Merge redundant learnings or decisions into denser entries /ctx-alignment-audit Audit doc claims against playbook instructions /ctx-prompt-audit Analyze session logs for vague prompts /check-links Audit docs for dead internal and external links","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#planning-execution-skills","level":4,"title":"Planning & Execution Skills","text":"Skill Description /ctx-loop Generate a Ralph Loop iteration script /ctx-implement Execute a plan step-by-step with checks /ctx-import-plans Import Claude Code plan files into project specs /ctx-worktree Manage git worktrees for parallel agents /ctx-architecture Build and maintain architecture maps","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#usage-examples","level":4,"title":"Usage Examples","text":"
// split to multiple lines for readability\n{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and \n .context/CONVENTIONS.md before responding. \n Follow rules in .context/CONSTITUTION.md.\",\n}\n
The --write flag creates .github/copilot-instructions.md, which Copilot reads automatically at the start of every session. This file contains your project's constitution rules, current tasks, conventions, and architecture: giving Copilot persistent context without manual copy-paste.
Re-run ctx setup copilot --write after updating your .context/ files to regenerate the instructions.
The ctx VS Code extension adds a @ctx chat participant to GitHub Copilot Chat, giving you direct access to all context commands from within the editor.
Typing @ctx without a command shows help with all available commands. The extension also supports natural language: asking @ctx about \"status\" or \"drift\" routes to the correct command automatically.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#configuration_2","level":4,"title":"Configuration","text":"Setting Default Description ctx.executablePathctx Path to the ctx binary. Set this if ctx is not in your PATH.","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#follow-up-suggestions","level":4,"title":"Follow-Up Suggestions","text":"
After each command, the extension suggests relevant next steps. For example, after /init it suggests /status and /hook; after /drift it suggests /sync.
ctx init creates a .context/sessions/ directory for storing session data from non-Claude tools. The Markdown session parser scans this directory during ctx journal, enabling session history for Copilot and other tools.
These patterns work without the extension, using Copilot's built-in file awareness:
Pattern 1: Keep context files open
Open .context/CONVENTIONS.md in a split pane. Copilot will reference it.
Pattern 2: Reference in comments
// See .context/CONVENTIONS.md for naming patterns\n// Following decision in .context/DECISIONS.md: Use PostgreSQL\n\nfunction getUserById(id: string) {\n // Copilot now has context\n}\n
Pattern 3: Paste context into Copilot Chat
ctx agent --budget 2000\n
Paste output into Copilot Chat for context-aware responses.
// Split to multiple lines for readability\n{\n \"ai.customInstructions\": \"Always read .context/CONSTITUTION.md first. \n Check .context/TASKS.md for current work. \n Follow patterns in .context/CONVENTIONS.md.\"\n}\n
You are working on a project with persistent context in .context/\n\nBefore responding:\n1. Read .context/CONSTITUTION.md - NEVER violate these rules\n2. Check .context/TASKS.md for current work\n3. Follow .context/CONVENTIONS.md patterns\n4. Reference .context/DECISIONS.md for architectural choices\n\nWhen you learn something new, note it for .context/LEARNINGS.md\nWhen you make a decision, document it for .context/DECISIONS.md\n
<context-update type=\"task\">Implement rate limiting</context-update>\n<context-update type=\"convention\">Use kebab-case for files</context-update>\n<context-update type=\"complete\">rate limiting</context-update>\n
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#structured-format-learnings-decisions","level":3,"title":"Structured Format (learnings, decisions)","text":"
Learnings and decisions support structured attributes for better documentation:
Learning with full structure:
<context-update type=\"learning\"\n context=\"Debugging Claude Code hooks\"\n lesson=\"Hooks receive JSON via stdin, not environment variables\"\n application=\"Parse JSON stdin with the host language (Go, Python, etc.): no jq needed\"\n>Hook Input Format</context-update>\n
Decision with full structure:
<context-update type=\"decision\"\n context=\"Need a caching layer for API responses\"\n rationale=\"Redis is fast, well-supported, and team has experience\"\n consequence=\"Must provision Redis infrastructure; team training on Redis patterns\"\n>Use Redis for caching</context-update>\n
Learnings require: context, lesson, application attributes. Decisions require: context, rationale, consequence attributes. Updates missing required attributes are rejected with an error.
Skills That Fight the Platform: Common pitfalls in skill design that work against the host tool
The Anatomy of a Skill That Works: What makes a skill reliable: the E/A/R framework and quality gates
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/migration/","level":1,"title":"Integration","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#adopting-ctx-in-existing-projects","level":2,"title":"Adopting ctx in Existing Projects","text":"
Claude Code User?
You probably want the plugin instead of this page.
Install ctx from the marketplace (/plugin → search \"ctx\" → Install) and you're done: hooks, skills, and updates are handled for you.
See Getting Started for the full walkthrough.
This guide covers adopting ctx in existing projects regardless of which tools your team uses.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#quick-paths","level":2,"title":"Quick Paths","text":"You have... Command What happens Nothing (greenfield) ctx init Creates .context/, CLAUDE.md, permissions Existing CLAUDE.mdctx init --merge Backs up your file, inserts ctx block after the H1 Existing CLAUDE.md + ctx markers ctx init --force Replaces the ctx block, leaves your content intact .cursorrules / .aider.conf.ymlctx initctx ignores those files: they coexist cleanly Team repo, first adopter ctx init --merge && git add .context/ CLAUDE.md Initialize and commit for the team","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#existing-claudemd","level":2,"title":"Existing CLAUDE.md","text":"
This is the most common scenario:
You have a CLAUDE.md with project-specific instructions and don't want to lose them.
You Own CLAUDE.md
After initialization, CLAUDE.md is yours: edit it freely.
Add project instructions, remove sections you don't need, reorganize as you see fit.
The only part ctx manages is the block between the <!-- ctx:context --> and <!-- ctx:end --> markers; everything outside those markers is yours to change at any time.
If you remove the markers, nothing breaks: ctx simply treats the file as having no ctx content and will offer to merge again on the next ctx init.
When ctx init detects an existing CLAUDE.md, it checks for ctx markers (<!-- ctx:context --> ... <!-- ctx:end -->):
State Default behavior With --merge With --force No CLAUDE.md Creates from template Creates from template Creates from template Exists, no ctx markers Prompts to merge Auto-merges (no prompt) Auto-merges (no prompt) Exists, has ctx markers Skips (already set up) Skips Replaces the ctx block only","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#the-merge-flag","level":3,"title":"The --merge Flag","text":"
--merge auto-merges without prompting. The merge process:
Backs up your existing CLAUDE.md to CLAUDE.md.<timestamp>.bak;
Finds the H1 heading (e.g., # My Project) in your file;
Inserts the ctx block immediately after it;
Preserves everything else untouched.
Your content before and after the ctx block remains exactly as it was.
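The insert-after-H1 step can be sketched with awk (an illustrative re-implementation, not the actual merge code; the real ctx block content is abbreviated to bare markers here):

```shell
# Create a stand-in for an existing CLAUDE.md
cat > CLAUDE.md <<'EOF'
# My Project

## Build Commands
EOF

# Insert a marker block immediately after the first H1 (lines starting "# ")
awk '
  { print }
  /^# / && !done { print ""; print "<!-- ctx:context -->"; print "<!-- ctx:end -->"; done=1 }
' CLAUDE.md > CLAUDE.md.merged
cat CLAUDE.md.merged
```

Note the `^# ` pattern matches only H1 headings, so `## Build Commands` is left alone.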
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#before-after-example","level":3,"title":"Before / After Example","text":"
Before: your existing CLAUDE.md:
# My Project\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
After ctx init --merge:
# My Project\n\n<!-- ctx:context -->\n<!-- DO NOT REMOVE: This marker indicates ctx-managed content -->\n\n## IMPORTANT: You Have Persistent Memory\n\nThis project uses Context (`ctx`) for context persistence across sessions.\n...\n\n<!-- ctx:end -->\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
Your build commands and code style sections are untouched. The ctx block sits between markers and can be updated independently.
If your CLAUDE.md already has ctx markers (from a previous ctx init), the default behavior is to skip it. Use --force to replace the ctx block with the latest template, which is useful after upgrading ctx:
ctx init --force\n
This only replaces content between <!-- ctx:context --> and <!-- ctx:end -->. Your own content outside the markers is preserved. A timestamped backup is created before any changes.
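To inspect exactly which lines a --force run would replace, you can print the marker-delimited block with a sed address range (a hedged sketch, not a ctx command; the file content here is a stand-in):

```shell
# Stand-in CLAUDE.md with a ctx-managed block
cat > CLAUDE.md <<'EOF'
# My Project
<!-- ctx:context -->
managed content
<!-- ctx:end -->
## Build Commands
EOF

# Print only the lines from the opening to the closing marker
sed -n '/<!-- ctx:context -->/,/<!-- ctx:end -->/p' CLAUDE.md
```

Everything outside that range (here, the H1 and the Build Commands section) is untouched by --force.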
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#undoing-a-merge","level":3,"title":"Undoing a Merge","text":"
ctx doesn't touch tool-specific config files. It creates its own files (.context/, CLAUDE.md) and coexists with whatever you already have.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-does-ctx-create","level":3,"title":"What Does ctx Create?","text":"ctx creates ctx does NOT touch .context/ directory .cursorrulesCLAUDE.md (or merges into) .aider.conf.yml.claude/settings.local.json (seeded by ctx init; the plugin manages hooks and skills) .github/copilot-instructions.md.windsurfrules Any other tool-specific config
Claude Code hooks and skills are provided by the ctx plugin, installed from the Claude Code marketplace (/plugin → search \"ctx\" → Install).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#running-ctx-alongside-other-tools","level":3,"title":"Running ctx Alongside Other Tools","text":"
The .context/ directory is the source of truth. Tool-specific configs point to it:
Cursor: Reference .context/ files in your system prompt (see Cursor setup)
Aider: Add .context/ files to the read: list in .aider.conf.yml (see Aider setup)
Copilot: Keep .context/ files open or reference them in comments (see Copilot setup)
You can generate a tool-specific configuration with:
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#migrating-content-into-context","level":3,"title":"Migrating Content Into .context/","text":"
If you have project knowledge scattered across .cursorrules or custom prompt files, consider migrating it:
Rules / invariants → .context/CONSTITUTION.md
Code patterns → .context/CONVENTIONS.md
Architecture notes → .context/ARCHITECTURE.md
Known issues / tips → .context/LEARNINGS.md
You don't need to delete the originals: ctx and tool-specific files can coexist. But centralizing in .context/ means every tool gets the same context.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#team-adoption","level":2,"title":"Team Adoption","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#context-is-designed-to-be-committed","level":3,"title":".context/ Is Designed to Be Committed","text":"
The context files (tasks, decisions, learnings, conventions, architecture) are meant to live in version control. However, some subdirectories are personal or sensitive and should not be committed.
ctx init automatically adds these .gitignore entries:
# Journals contain full session transcripts: personal, potentially large\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Runtime state and logs (ephemeral, machine-specific):\n.context/state/\n.context/logs/\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
With those in place, committing is straightforward:
# One person initializes\nctx init --merge\n\n# Commit context files (journals and keys are already gitignored)\ngit add .context/ CLAUDE.md\ngit commit -m \"Add ctx context management\"\ngit push\n
Teammates pull and immediately have context. No per-developer setup needed.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-about-claude","level":3,"title":"What About .claude/?","text":"
The .claude/ directory contains permissions that ctx init seeds. Hooks and skills are provided by the ctx plugin (not per-project files).
Context files are plain Markdown. Resolve conflicts the same way you would for any other documentation file:
# After a conflicting pull\ngit diff .context/TASKS.md # See both sides\n# Edit to keep both sets of tasks, then:\ngit add .context/TASKS.md\ngit commit\n
Common conflict scenarios:
TASKS.md: Two people added tasks: Keep both.
DECISIONS.md: Same decision recorded differently: Unify the entry.
CLAUDE.md instructions work immediately for Claude Code users;
Other tool users can adopt at their own pace using ctx setup <tool>;
Context files benefit everyone who reads them, even without tool integration.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verifying-it-worked","level":2,"title":"Verifying It Worked","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#check-status","level":3,"title":"Check Status","text":"
ctx status\n
You should see your context files listed with token counts and no warnings.
Start a new AI session and ask: \"Do you remember?\"
The AI should cite specific context:
Current tasks from .context/TASKS.md;
Recent decisions or learnings;
Session history (if you've had prior sessions);
If it responds with a generic \"I don't have memory\", check that ctx is in your PATH (which ctx) and that hooks are configured (see Troubleshooting).
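The PATH and context checks above can be run as a quick diagnostic (illustrative; each line prints one of the two alternatives):

```shell
# Is the ctx binary reachable?
command -v ctx >/dev/null && echo "ctx on PATH" || echo "ctx not on PATH"
# Does the project have a context directory?
[ -d .context ] && echo ".context present" || echo ".context missing"
```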
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verify-the-merge","level":3,"title":"Verify the Merge","text":"
If you used --merge, check that your original content is intact:
# Your original content should still be there\ncat CLAUDE.md\n\n# The ctx block should be between markers\ngrep -c \"ctx:context\" CLAUDE.md # Should print 1\ngrep -c \"ctx:end\" CLAUDE.md # Should print 1\n
","path":["Operations","Integration"],"tags":[]},{"location":"operations/release/","level":1,"title":"Cutting a Release","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#prerequisites","level":2,"title":"Prerequisites","text":"
Before you can cut a release you need:
Push access to origin (GitHub)
GPG signing configured (make gpg-test)
Go installed (version in go.mod)
Zensical installed (make site-setup)
A clean working tree (git status shows nothing to commit)
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#step-by-step","level":2,"title":"Step-by-Step","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#1-update-the-version-file","level":3,"title":"1. Update the VERSION File","text":"
echo \"0.9.0\" > VERSION\ngit add VERSION\ngit commit -m \"chore: bump version to 0.9.0\"\n
The VERSION file uses bare semver (0.9.0), no v prefix. The release script adds the v prefix for git tags.
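The tag derivation described above amounts to prefixing the file's contents with v (an illustrative sketch, not the release script itself):

```shell
# VERSION holds bare semver...
printf '0.9.0\n' > VERSION
# ...and the release tooling derives the git tag by prepending "v"
TAG="v$(cat VERSION)"
echo "$TAG"
# → v0.9.0
```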
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#2-generate-release-notes","level":3,"title":"2. Generate Release Notes","text":"
In Claude Code:
/_ctx-release-notes\n
This analyzes commits since the last tag and writes dist/RELEASE_NOTES.md. The release script refuses to proceed without this file.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#3-verify-docs-and-commit-any-remaining-changes","level":3,"title":"3. Verify Docs and Commit Any Remaining Changes","text":"
/ctx-check-links # audit docs for dead links\nmake audit # full check: fmt, vet, lint, style, test\ngit status # must be clean\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#4-run-the-release","level":3,"title":"4. Run the Release","text":"
make release\n
Or, if you are in a Claude Code session:
/_ctx-release\n
The release script does everything in order:
Step What happens 1 Reads VERSION, verifies release notes exist 2 Verifies working tree is clean 3 Updates version in 4 config files (plugin.json, marketplace.json, VS Code package.json + lock) 4 Updates download URLs in 3 doc files (index.md, getting-started.md, integrations.md) 5 Adds new row to versions.md 6 Rebuilds the documentation site (make site) 7 Commits all version and docs updates 8 Runs make test and make smoke 9 Builds binaries for all 6 platforms via hack/build-all.sh 10 Creates a signed git tag (v0.9.0) 11 Pushes the tag to origin 12 Updates and pushes the latest tag","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#5-github-ci-takes-over","level":3,"title":"5. GitHub CI Takes Over","text":"
Pushing a v* tag triggers .github/workflows/release.yml:
Checks out the tagged commit
Runs the full test suite
Builds binaries for all platforms
Creates a GitHub Release with auto-generated notes
Uploads binaries and SHA256 checksums
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#6-verify","level":3,"title":"6. Verify","text":"
GitHub Releases shows the new version
All 6 binaries are attached (linux/darwin x amd64/arm64, windows x amd64)
SHA256 files are attached
Release notes look correct
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#what-gets-updated-automatically","level":2,"title":"What Gets Updated Automatically","text":"
The release script updates 8 files so you do not have to:
File What changes internal/assets/claude/.claude-plugin/plugin.json Plugin version .claude-plugin/marketplace.json Marketplace version (2 fields) editors/vscode/package.json VS Code extension version editors/vscode/package-lock.json VS Code lock version (2 fields) docs/index.md Download URLs docs/home/getting-started.md Download URLs docs/operations/integrations.md VSIX filename version docs/reference/versions.md New version row + latest pointer
The Go binary version is injected at build time via -ldflags from the VERSION file. No source file needs editing.
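A hedged sketch of what that -ldflags injection looks like; the variable path main.version is an assumption here, and the real package path in ctx may differ:

```shell
VERSION="0.9.0"                        # normally: VERSION=$(cat VERSION)
LDFLAGS="-X main.version=${VERSION}"   # -X sets a string variable at link time
printf '%s\n' "$LDFLAGS"
# then: go build -ldflags "$LDFLAGS" -o ctx .
```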
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#build-targets-reference","level":2,"title":"Build Targets Reference","text":"Target What it does make release Full release (script + tag + push) make build Build binary for current platform make build-all Build all 6 platform binaries make test Unit tests make smoke Integration smoke tests make audit Full check (fmt + vet + lint + drift + docs + test) make site Rebuild documentation site","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#troubleshooting","level":2,"title":"Troubleshooting","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#release-notes-not-found","level":3,"title":"\"Release notes not found\"","text":"
ERROR: dist/RELEASE_NOTES.md not found.\n
Run /_ctx-release-notes in Claude Code first, or write dist/RELEASE_NOTES.md manually.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#working-tree-is-not-clean","level":3,"title":"\"Working tree is not clean\"","text":"
ERROR: Working tree is not clean.\n
Commit or stash all changes before running make release.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#tag-already-exists","level":3,"title":"\"Tag already exists\"","text":"
ERROR: Tag v0.9.0 already exists.\n
You cannot release the same version twice. Either bump VERSION to a new version, or delete the old tag if the release was incomplete:
git tag -d v0.9.0\ngit push origin :refs/tags/v0.9.0\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#ci-build-fails-after-tag-push","level":3,"title":"CI build fails after tag push","text":"
The tag is already published. Fix the issue, bump to a patch version (e.g. 0.9.1), and release again. Do not force-push tags that others may have already fetched.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/upgrading/","level":1,"title":"Upgrade","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade","level":2,"title":"Upgrade","text":"
New versions of ctx may ship updated permissions, CLAUDE.md directives, or plugin hooks and skills.
Claude Code User?
The marketplace can update skills, hooks, and prompts independently: /plugin → select ctx → Update now (or enable auto-update).
The ctx binary is separate: rebuild from source or download a new release when one is available, then run ctx init --force --merge. Knowledge files are preserved automatically.
# Plugin users (Claude Code)\n# /plugin → select ctx → Update now\n# Then update the binary and reinitialize:\nctx init --force --merge\n\n# From-source / manual users\n# install new ctx binary, then:\nctx init --force --merge\n# /plugin → select ctx → Update now (if using Claude Code)\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-changes-between-versions","level":2,"title":"What Changes Between Versions","text":"
ctx init generates two categories of files:
Category Examples Changes between versions? Infrastructure .claude/settings.local.json (permissions), ctx-managed sections in CLAUDE.md, ctx plugin (hooks + skills) Yes Knowledge .context/TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md, CONSTITUTION.md, AGENT_PLAYBOOK.md No: this is your data
Infrastructure is regenerated by ctx init and plugin updates. Knowledge files are yours and should never be overwritten.
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade-steps","level":2,"title":"Upgrade Steps","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#1-install-the-new-version","level":3,"title":"1. Install the New Version","text":"
Build from source or download the binary:
cd /path/to/ctx-source\ngit pull\nmake build\nsudo make install\nctx --version # verify\n
Then rerun ctx init --force --merge:
--force regenerates infrastructure files (permissions, ctx-managed sections in CLAUDE.md).
--merge preserves your content outside ctx markers.
Knowledge files (.context/TASKS.md, DECISIONS.md, etc.) are preserved automatically: ctx init only overwrites infrastructure, never your data.
Encryption key: The encryption key lives at ~/.ctx/.ctx.key (outside the project). Reinit does not affect it. If you have a legacy key at .context/.ctx.key or ~/.local/ctx/keys/, copy it manually (see Syncing Scratchpad Notes).
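The manual copy for a project-local legacy key can be sketched as follows (paths per the docs above; adjust if your key lives under ~/.local/ctx/keys/ instead):

```shell
# Migrate a legacy project-local key to the per-user location, if one exists.
if [ -f .context/.ctx.key ]; then
  mkdir -p ~/.ctx
  cp .context/.ctx.key ~/.ctx/.ctx.key
  echo "key copied to ~/.ctx/.ctx.key"
else
  echo "no legacy key found"
fi
```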
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#3-update-the-ctx-plugin","level":3,"title":"3. Update the ctx Plugin","text":"
If you use Claude Code, update the plugin to get new hooks and skills:
Open /plugin in Claude Code.
Select ctx.
Click Update now.
Or enable auto-update so the plugin stays current without manual steps.
If you made manual backups, remove them once satisfied:
rm -rf .context.bak .claude.bak CLAUDE.md.bak\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-if-i-skip-the-upgrade","level":2,"title":"What If I Skip the Upgrade?","text":"
The old binary still works with your existing .context/ files. But you may miss:
New plugin hooks that enforce better practices or catch mistakes;
Updated skill prompts that produce better results;
New .gitignore entries for directories added in newer versions;
Bug fixes in the CLI itself.
The plugin and the binary can be updated independently. You can update the plugin (for new hooks/skills) even if you stay on an older binary, and vice versa.
Context files are plain Markdown: They never break between versions.
Workflow recipes combining ctx commands and skills to solve specific problems.
","path":["Recipes"],"tags":[]},{"location":"recipes/#getting-started","level":2,"title":"Getting Started","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#guide-your-agent","level":3,"title":"Guide Your Agent","text":"
How commands, skills, and conversational patterns work together. Train your agent to be proactive through ask, guide, reinforce.
","path":["Recipes"],"tags":[]},{"location":"recipes/#setup-across-ai-tools","level":3,"title":"Setup Across AI Tools","text":"
Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf. Includes shell completion, watch mode for non-native tools, and verification.
","path":["Recipes"],"tags":[]},{"location":"recipes/#keeping-context-in-a-separate-repo","level":3,"title":"Keeping Context in a Separate Repo","text":"
Store context files outside the project tree: in a private repo, shared directory, or anywhere else. Useful for open source projects with private context or multi-repo setups.
The two bookend rituals for every session: /ctx-remember at the start to load and confirm context, /ctx-wrap-up at the end to review the session and persist learnings, decisions, and tasks.
","path":["Recipes"],"tags":[]},{"location":"recipes/#browsing-and-enriching-past-sessions","level":3,"title":"Browsing and Enriching Past Sessions","text":"
Export your AI session history to a browsable journal site. Enrich entries with metadata and search across months of work.
Leave a message for your next session. Reminders surface automatically at session start and repeat until dismissed. Date-gate reminders to surface only after a specific date.
Silence all nudge hooks for a quick task that doesn't need ceremony overhead. Session-scoped: Other sessions are unaffected. Security hooks still fire.
","path":["Recipes"],"tags":[]},{"location":"recipes/#knowledge-tasks","level":2,"title":"Knowledge & Tasks","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#persisting-decisions-learnings-and-conventions","level":3,"title":"Persisting Decisions, Learnings, and Conventions","text":"
Record architectural decisions with rationale, capture gotchas and lessons learned, and codify conventions so they survive across sessions and team members.
","path":["Recipes"],"tags":[]},{"location":"recipes/#using-the-scratchpad","level":3,"title":"Using the Scratchpad","text":"
Use the encrypted scratchpad for quick notes, working memory, and sensitive values during AI sessions. Natural language in, encrypted storage out.
Uses: ctx pad, /ctx-pad, ctx pad show, ctx pad edit
","path":["Recipes"],"tags":[]},{"location":"recipes/#syncing-scratchpad-notes-across-machines","level":3,"title":"Syncing Scratchpad Notes Across Machines","text":"
Distribute your scratchpad encryption key, push and pull encrypted notes via git, and resolve merge conflicts when two machines edit simultaneously.
Uses: ctx init, ctx pad, ctx pad resolve, scp
","path":["Recipes"],"tags":[]},{"location":"recipes/#bridging-claude-code-auto-memory","level":3,"title":"Bridging Claude Code Auto Memory","text":"
Mirror Claude Code's auto memory (MEMORY.md) into .context/ for version control, portability, and drift detection. Import entries into structured context files with heuristic classification.
Choose the right output pattern for your Claude Code hooks: VERBATIM relay for user-facing reminders, hard gates for invariants, agent directives for nudges, and five more patterns across the spectrum.
Customize what hooks say without changing what they do. Override the QA gate for Python (pytest instead of make lint), silence noisy ceremony nudges, or tailor post-commit instructions for your stack.
Uses: ctx system message list, ctx system message show, ctx system message edit, ctx system message reset
Mermaid sequence diagrams for every system hook: entry conditions, state reads, output, throttling, and exit points. Includes throttling summary table and state file reference.
Uses: All ctx system hooks
","path":["Recipes"],"tags":[]},{"location":"recipes/#auditing-system-hooks","level":3,"title":"Auditing System Hooks","text":"
The 12 system hooks that run invisibly during every session: what each one does, why it exists, and how to verify they're actually firing. Covers webhook-based audit trails, log inspection, and detecting silent hook failures.
Get push notifications when loops complete, hooks fire, or agents hit milestones. Webhook URL is encrypted: never stored in plaintext. Works with IFTTT, Slack, Discord, ntfy.sh, or any HTTP endpoint.
","path":["Recipes"],"tags":[]},{"location":"recipes/#maintenance","level":2,"title":"Maintenance","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#detecting-and-fixing-drift","level":3,"title":"Detecting and Fixing Drift","text":"
Keep context files accurate by detecting structural drift (stale paths, missing files, stale file ages) and task staleness.
Diagnose hook failures, noisy nudges, stale context, and configuration issues. Start with ctx doctor for a structural health check, then use /ctx-doctor for agent-driven analysis of event patterns.
Keep .claude/settings.local.json clean: recommended safe defaults, what to never pre-approve, and a maintenance workflow for cleaning up session debris.
","path":["Recipes"],"tags":[]},{"location":"recipes/#importing-claude-code-plans","level":3,"title":"Importing Claude Code Plans","text":"
Import Claude Code plan files (~/.claude/plans/*.md) into specs/ as permanent project specs. Filter by date, select interactively, and optionally create tasks referencing each imported spec.
Uses: /ctx-import-plans, /ctx-add-task
","path":["Recipes"],"tags":[]},{"location":"recipes/#design-before-coding","level":3,"title":"Design Before Coding","text":"
Front-load design with a four-skill chain: brainstorm the approach, spec the design, task the work, implement step-by-step. Each step produces an artifact that feeds the next.
Encode repeating workflows into reusable skills the agent loads automatically. Covers the full cycle: identify a pattern, create the skill, test with realistic prompts, and iterate until it triggers correctly.
Uses: /ctx-skill-creator, ctx init
","path":["Recipes"],"tags":[]},{"location":"recipes/#running-an-unattended-ai-agent","level":3,"title":"Running an Unattended AI Agent","text":"
Set up a loop where an AI agent works through tasks overnight without you at the keyboard, using ctx for persistent memory between iterations.
This recipe shows how ctx supports long-running agent loops without losing context or intent.
","path":["Recipes"],"tags":[]},{"location":"recipes/#when-to-use-a-team-of-agents","level":3,"title":"When to Use a Team of Agents","text":"
Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
This recipe covers the file overlap test, when teams make things worse, and what ctx provides at each level.
Uses: /ctx-worktree, /ctx-next, ctx status
","path":["Recipes"],"tags":[]},{"location":"recipes/#parallel-agent-development-with-git-worktrees","level":3,"title":"Parallel Agent Development with Git Worktrees","text":"
Split a large backlog across 3-4 agents using git worktrees, each on its own branch and working directory. Group tasks by file overlap, work in parallel, merge back.
Map your project's internal and external dependency structure. Auto-detects Go, Node.js, Python, and Rust. Output as Mermaid, table, or JSON.
Uses: ctx dep, ctx drift
","path":["Recipes"],"tags":[]},{"location":"recipes/autonomous-loops/","level":1,"title":"Running an Unattended AI Agent","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-problem","level":2,"title":"The Problem","text":"
You have a project with a clear list of tasks, and you want an AI agent to work through them autonomously: overnight, unattended, without you sitting at the keyboard.
Each iteration needs to remember what the previous one did, mark tasks as completed, and know when to stop.
Without persistent memory, every iteration starts fresh and the loop collapses. With ctx, each iteration can pick up where the last one left off, but only if the agent persists its context as part of the work.
Unattended operation works because the agent treats context persistence as a first-class deliverable, not an afterthought.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. init context\n# Edit TASKS.md with phased work items\nctx loop --tool claude --max-iterations 10 # 2. generate loop.sh\n./loop.sh 2>&1 | tee /tmp/loop.log & # 3. run the loop\nctx watch --log /tmp/loop.log # 4. process context updates\n# Next morning:\nctx status && ctx load # 5. review the results\n
Read on for permissions, isolation, and completion signals.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init Command Initialize project context and prompt templates ctx loop Command Generate the loop shell script ctx watch Command Monitor AI output and persist context updates ctx load Command Display assembled context (for debugging) /ctx-loop Skill Generate loop script from inside Claude Code /ctx-implement Skill Execute a plan step-by-step with verification","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-1-initialize-for-unattended-operation","level":3,"title":"Step 1: Initialize for Unattended Operation","text":"
Start by creating a .context/ directory configured so the agent can work without human input.
ctx init\n
This creates .context/ with the template files (including a loop prompt at .context/loop.md), and seeds Claude Code permissions in .claude/settings.local.json. Install the ctx plugin for hooks and skills.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-2-populate-tasksmd-with-phased-work","level":3,"title":"Step 2: Populate TASKS.md with Phased Work","text":"
Open .context/TASKS.md and organize your work into phases. The agent works through these systematically, top to bottom, using priority tags to break ties.
# Tasks\n\n## Phase 1: Foundation\n\n- [ ] Set up project structure and build system `#priority:high`\n- [ ] Configure testing framework `#priority:high`\n- [ ] Create CI pipeline `#priority:medium`\n\n## Phase 2: Core Features\n\n- [ ] Implement user registration `#priority:high`\n- [ ] Add email verification `#priority:high`\n- [ ] Create password reset flow `#priority:medium`\n\n## Phase 3: Hardening\n\n- [ ] Add rate limiting to API endpoints `#priority:medium`\n- [ ] Improve error messages `#priority:low`\n- [ ] Write integration tests `#priority:medium`\n
Phased organization matters because it gives the agent natural boundaries. Phase 1 tasks should be completable without Phase 2 code existing yet.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-3-configure-the-loop-prompt","level":3,"title":"Step 3: Configure the Loop Prompt","text":"
The loop prompt at .context/loop.md instructs the agent to operate autonomously:
Read .context/CONSTITUTION.md first (hard rules, never violated)
Load context from .context/ files
Pick one task per iteration
Complete the task and update context files
Commit changes (including .context/)
Signal status with a completion signal
You can customize .context/loop.md for your project. The critical parts are the one-task-per-iteration discipline, proactive context persistence, and completion signals at the end:
## Signal Status\n\nEnd your response with exactly ONE of:\n\n* `SYSTEM_CONVERGED`: All tasks in `TASKS.md` are complete (*this is the\n signal the loop script detects by default*)\n* `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n* (*no signal*): More work remains, continue to the next iteration\n\nNote: the loop script only checks for `SYSTEM_CONVERGED` by default.\n`SYSTEM_BLOCKED` is a convention for the human reviewing the log.\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-4-configure-permissions","level":3,"title":"Step 4: Configure Permissions","text":"
An unattended agent needs permission to use tools without prompting. By default, Claude Code asks for confirmation on file writes, bash commands, and other operations, which stops the loop and waits for a human who is not there.
There are two approaches.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-a-explicit-allowlist-recommended","level":4,"title":"Option A: Explicit Allowlist (Recommended)","text":"
Grant only the permissions the agent needs. In .claude/settings.local.json:
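One sketch of such an allowlist (entries are illustrative, not the exact file ctx init writes; the precise set depends on your project):

```json
{
  "permissions": {
    "allow": [
      "Bash(ctx:*)",
      "Bash(make:*)",
      "Bash(go build:*)",
      "Bash(go test:*)",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Skill(ctx-*)"
    ]
  }
}
```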
Adjust the Bash patterns for your project's toolchain. The agent can run make, go, git, and ctx commands but cannot run arbitrary shell commands.
This is recommended even in sandboxed environments because it limits blast radius.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-b-skip-all-permission-checks","level":4,"title":"Option B: Skip All Permission Checks","text":"
Claude Code supports a --dangerously-skip-permissions flag that disables all permission prompts:
claude --dangerously-skip-permissions -p \"$(cat .context/loop.md)\"\n
This Flag Means What It Says
With --dangerously-skip-permissions, the agent can execute any shell command, write to any file, and make network requests without confirmation.
Only use this on a sandboxed machine: ideally a virtual machine with no access to host credentials, no SSH keys, and no access to production systems.
If you would not give an untrusted intern sudo on this machine, do not use this flag.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#enforce-isolation-at-the-os-level","level":4,"title":"Enforce Isolation at the OS Level","text":"
The only controls an agent cannot override are the ones enforced by the operating system, the container runtime, or the hypervisor.
Do Not Skip This Section
This is not optional hardening:
An unattended agent with unrestricted OS access is an unattended shell with unrestricted OS access.
The allowlist above is a strong first layer, but do not rely on a single runtime boundary.
For unattended runs, enforce isolation at the infrastructure level:
Layer What to enforce User account Run the agent as a dedicated unprivileged user with no sudo access and no membership in privileged groups (docker, wheel, adm). Filesystem Restrict the project directory via POSIX permissions or ACLs. The agent should have no access to other users' files or system directories. Container Run inside a Docker/Podman sandbox. Mount only the project directory. Drop capabilities (--cap-drop=ALL). Disable network if not needed (--network=none). Never mount the Docker socket and do not run privileged containers. Prefer rootless containers. Virtual machine Prefer a dedicated VM with no shared folders, no host passthrough, and no keys to other machines. Network If the agent does not need the internet, disable outbound access entirely. If it does, restrict to specific domains via firewall rules. Resource limits Apply CPU, memory, and disk limits (cgroups/container limits). A runaway loop should not fill disk or consume all RAM. Self-modification Make instruction files read-only. CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md should not be writable by the agent user. If using project-local hooks, protect those too.
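For the container layer, one possible invocation looks like this. The agent-sandbox image name is a hypothetical placeholder for an image with your toolchain baked in; the flags follow the table above:

```shell
# Illustrative: a locked-down container for an unattended run.
# Root filesystem is read-only; only the bind-mounted project
# directory is writable. No network, no capabilities, hard limits.
docker run --rm \
  --cap-drop=ALL \
  --network=none \
  --memory=4g --cpus=2 \
  --user 1000:1000 \
  --read-only --tmpfs /tmp \
  -v "$PWD":/work \
  -w /work \
  agent-sandbox ./loop.sh
```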
Use multiple layers together: OS-level isolation (the boundary the agent cannot cross), a permission allowlist (what Claude Code will do within that boundary), and CONSTITUTION.md (a soft nudge for the common case).
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-5-generate-the-loop-script","level":3,"title":"Step 5: Generate the Loop Script","text":"
Use ctx loop to generate a loop.sh tailored to your AI tool:
# Generate for Claude Code with a 10-iteration cap\nctx loop --tool claude --max-iterations 10\n\n# Generate for Aider\nctx loop --tool aider --max-iterations 10\n\n# Custom prompt file and output filename\nctx loop --tool claude --prompt my-prompt.md --output my-loop.sh\n
The generated script reads .context/loop.md, runs the tool, checks for completion signals, and loops until done or the cap is reached.
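In outline, the generated script behaves roughly like this. This is a simplified sketch, not the literal output of ctx loop; run_agent stands in for the real tool call, e.g. claude -p "$(cat .context/loop.md)":

```shell
# Simplified sketch of the loop `ctx loop` generates (illustrative).
COMPLETION="SYSTEM_CONVERGED"
MAX_ITER=3

run_agent() {
  # Stub for demonstration; the real script invokes the AI tool here.
  echo "picked one task, finished it, updated .context/ ... SYSTEM_CONVERGED"
}

i=1
while [ "$i" -le "$MAX_ITER" ]; do
  out=$(run_agent)
  # Stop as soon as the completion signal appears in the output.
  if printf '%s\n' "$out" | grep -q "$COMPLETION"; then
    echo "converged after $i iteration(s)"
    break
  fi
  i=$((i + 1))
done
```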
You can also use the /ctx-loop skill from inside Claude Code.
A Shell Loop Is the Best Practice
The shell loop approach spawns a fresh AI process each iteration, so the only state that carries between iterations is what lives in .context/ and git.
Claude Code's built-in /loop runs iterations within the same session, which can allow context window state to leak between iterations. This can be convenient for short runs, but it is less reliable for unattended loops.
See Shell Loop vs Built-in Loop for details.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-6-run-with-watch-mode","level":3,"title":"Step 6: Run with Watch Mode","text":"
Open two terminals. In the first, run the loop. In the second, run ctx watch to process context updates from the AI output.
# Terminal 1: Run the loop\n./loop.sh 2>&1 | tee /tmp/loop.log\n\n# Terminal 2: Watch for context updates\nctx watch --log /tmp/loop.log\n
The watch command parses XML context-update commands from the AI output and applies them:
<context-update type=\"complete\">user registration</context-update>\n<context-update type=\"learning\"\n context=\"Setting up user registration\"\n lesson=\"Email verification needs SMTP configured\"\n application=\"Add SMTP setup to deployment checklist\"\n>SMTP Requirement</context-update>\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-7-completion-signals-end-the-loop","level":3,"title":"Step 7: Completion Signals End the Loop","text":"
The generated script checks for one completion signal per run. By default this is SYSTEM_CONVERGED. You can change it with the --completion flag:
ctx loop --tool claude --completion BOOTSTRAP_COMPLETE --max-iterations 5\n
The following signals are conventions used in .context/loop.md:
Signal Convention How the script handles it SYSTEM_CONVERGED All tasks in TASKS.md are done Detected by default (--completion default value) SYSTEM_BLOCKED Agent cannot proceed Only detected if you set --completion to this BOOTSTRAP_COMPLETE Initial scaffolding done Only detected if you set --completion to this
The script uses grep -q on the agent's output, so any string works as a signal. If you need to detect multiple signals in one run, edit the generated loop.sh to add additional grep checks.
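A sketch of such an edit, reacting to both signals in one run (variable names are illustrative; the default script greps for only one signal):

```shell
# Simulate one iteration's captured output for demonstration.
log=$(mktemp)
printf 'could not proceed: missing SMTP credentials\nSYSTEM_BLOCKED\n' > "$log"

# Check the stronger signal first, then the blocked signal.
if grep -q "SYSTEM_CONVERGED" "$log"; then
  status="done"
elif grep -q "SYSTEM_BLOCKED" "$log"; then
  status="blocked"
else
  status="continue"
fi
echo "$status"
```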
When you return in the morning, check the log and the context files:
tail -100 /tmp/loop.log\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-8-use-ctx-implement-for-plan-execution","level":3,"title":"Step 8: Use /ctx-implement for Plan Execution","text":"
Within each iteration, the agent can use /ctx-implement to execute multi-step plans with verification between steps. This is useful for complex tasks that touch multiple files.
The skill breaks a plan into atomic, verifiable steps:
Step 1/6: Create user model .................. OK\nStep 2/6: Add database migration ............. OK\nStep 3/6: Implement registration handler ..... OK\nStep 4/6: Write unit tests ................... OK\nStep 5/6: Run test suite ..................... FAIL\n -> Fixed: missing test dependency\n -> Re-verify ............................... OK\nStep 6/6: Update TASKS.md .................... OK\n
Each step is verified (build, test, syntax check) before moving to the next.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical overnight run:
ctx init\n# Edit TASKS.md and .context/loop.md\n\nctx loop --tool claude --max-iterations 20\n\n./loop.sh 2>&1 | tee /tmp/loop.log &\nctx watch --log /tmp/loop.log\n\n# Next morning:\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#why-autonomous-loops-work-proactive-context-persistence","level":2,"title":"Why Autonomous Loops Work: Proactive Context Persistence","text":"
The autonomous loop pattern works because the agent persists context as part of the job.
Without proactive persistence, the loop degrades into disconnected iterations that repeat work, contradict decisions, and lose track of progress. The agent cannot rely on a human to prompt it. It must treat context updates as part of every task, not as a separate step.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-agent-playbook-self-check-drives-autonomy","level":3,"title":"The Agent Playbook Self-Check Drives Autonomy","text":"
The Agent Playbook defines a self-check prompt the agent applies continuously:
\"If this session ended right now, would the next session know what happened?\"\n
In interactive sessions, a human might remind the agent to save context. When the agent runs unattended, nobody is there to remind it.
The self-check becomes the trigger: after meaningful work, the agent evaluates whether the context files reflect reality and updates them immediately if they do not.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#what-the-agent-does-proactively-between-iterations","level":3,"title":"What the Agent Does Proactively Between Iterations","text":"
At milestones within an iteration, the agent persists without waiting for instructions:
Milestone What the agent does proactively Completes a task Marks it [x] in TASKS.md with #done timestamp Discovers a gotcha Adds it to LEARNINGS.md Makes a design choice Records it in DECISIONS.md with rationale Identifies follow-up work Creates new tasks in TASKS.md with #added timestamp Hits an unexpected failure Documents the root cause before moving on","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#example-what-proactive-persistence-looks-like","level":3,"title":"Example: What Proactive Persistence Looks Like","text":"
Here is what the agent does during a single iteration beyond writing code:
Iteration 4:\n 1. Read TASKS.md -> pick \"Add email verification\" (#priority:high)\n 2. Add #started:2026-01-25-030012 to the task\n 3. Implement the feature (code, tests, docs if needed)\n 4. Tests pass -> mark task [x], add #done:2026-01-25-031544\n 5. Add learning: \"SMTP config must be set before verification handler registers. Order matters in init().\"\n 6. Add decision: \"Use token-based verification links (not codes) because links work better in automated tests.\"\n 7. Create follow-up task: \"Add rate limiting to verification endpoint\" #added:...\n 8. Commit all changes including `.context/`\n 9. No signal emitted -> loop continues to iteration 5\n
Steps 2, 4, 5, 6, and 7 are proactive context persistence:
The agent was not asked to do any of them.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#context-persistence-at-milestones","level":3,"title":"Context Persistence at Milestones","text":"
For long autonomous runs, the agent persists context at natural boundaries, often at phase transitions or after completing a cluster of related tasks. It updates TASKS.md, DECISIONS.md, and LEARNINGS.md as it goes.
If the loop crashes at 4 AM, the context files tell you exactly where to resume. You can also use ctx journal source to review the session transcripts.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-persistence-contract","level":3,"title":"The Persistence Contract","text":"
The autonomous loop has an implicit contract:
Every iteration reads context: TASKS.md, DECISIONS.md, LEARNINGS.md
Every iteration writes context: task updates, new learnings, decisions
Every commit includes .context/ so the next iteration sees changes
Context stays current: if the loop stopped right now, nothing important is lost
Break any part of this contract and the loop degrades.
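The third contract item can be sketched concretely. This demonstration uses a throwaway repo; the point is that code and .context/ land in the same commit, so the next iteration (a fresh process) sees both:

```shell
# Illustrative end-of-iteration commit, shown in a temporary repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "agent@example.com"
git config user.name "loop-agent"
mkdir .context
printf -- "- [x] Add email verification #done:2026-01-25-031544\n" > .context/TASKS.md
printf "verification handler\n" > handler.go
# Stage the code change and the context update together.
git add handler.go .context/
git commit -q -m "Add email verification; update context"
git show --stat --format=%s HEAD
```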
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tips","level":2,"title":"Tips","text":"
Markdown Is Not Enforcement
Your real guardrails are permissions and isolation, not Markdown. CONSTITUTION.md can nudge the agent, but it is probabilistic.
The permission allowlist and OS isolation are deterministic:
For unattended runs, trust the sandbox and the allowlist, not the prose.
Start with a small iteration cap. Use --max-iterations 5 on your first run.
Keep tasks atomic. Each task should be completable in a single iteration.
Check signal discipline. If the loop runs forever, the agent is not emitting SYSTEM_CONVERGED or SYSTEM_BLOCKED. Make the signal requirement explicit in .context/loop.md.
Commit after context updates. Finish code, update .context/, commit including .context/, then signal.
Set up webhook notifications to be alerted when the loop completes, hits max iterations, or when hooks fire nudges. The generated loop script includes ctx notify calls automatically.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#next-up","level":2,"title":"Next Up","text":"
When to Use a Team of Agents →: Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: structuring TASKS.md
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/building-skills/","level":1,"title":"Building Project Skills","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-problem","level":2,"title":"The Problem","text":"
You have workflows your agent needs to repeat across sessions: a deploy checklist, a review protocol, a release process. Each time, you re-explain the steps. The agent gets it mostly right but forgets edge cases you corrected last time.
Skills solve this by encoding domain knowledge into a reusable document the agent loads automatically when triggered. A skill is not code: it is a structured prompt that captures what took you sessions to learn.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tldr","level":2,"title":"TL;DR","text":"
/ctx-skill-creator\n
The skill-creator walks you through: identify a repeating workflow, draft a skill, test with realistic prompts, iterate until it triggers correctly and produces good output.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-skill-creator Skill Interactive skill creation and improvement workflow ctx init Command Deploys template skills to .claude/skills/ on first setup","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-1-identify-a-repeating-pattern","level":3,"title":"Step 1: Identify a Repeating Pattern","text":"
Good skill candidates:
Checklists you repeat: deploy steps, release prep, code review
Decisions the agent gets wrong: if you keep correcting the same behavior, encode the correction
Multi-step workflows: anything with a sequence of commands and conditional branches
Domain knowledge: project-specific terminology, architecture constraints, or conventions the agent cannot infer from code alone
Not good candidates: one-off instructions, things the platform already handles (file editing, git operations), or tasks too narrow to reuse.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-2-create-the-skill","level":3,"title":"Step 2: Create the Skill","text":"
Invoke the skill-creator:
You: \"I want a skill for our deploy process\"\n\nAgent: [Asks about the workflow: what steps, what tools,\n what edge cases, what the output should look like]\n
Or capture a workflow you just did:
You: \"Turn what we just did into a skill\"\n\nAgent: [Extracts the steps from conversation history,\n confirms understanding, drafts the skill]\n
The skill-creator produces a SKILL.md file in .claude/skills/your-skill/.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-3-test-with-realistic-prompts","level":3,"title":"Step 3: Test with Realistic Prompts","text":"
The skill-creator proposes 2-3 test prompts: the kind of thing a real user would say. It runs each one and shows the result alongside a baseline (same prompt without the skill) so you can compare.
Agent: \"Here are test prompts I'd try:\n 1. 'Deploy to staging'\n 2. 'Ship the hotfix'\n 3. 'Run the release checklist'\n Want to adjust these?\"\n
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-4-iterate-on-the-description","level":3,"title":"Step 4: Iterate on the Description","text":"
The description field in frontmatter determines when a skill triggers. Claude tends to undertrigger - descriptions need to be specific and slightly \"pushy\":
# Weak - too vague, will undertrigger\ndescription: \"Use for deployments\"\n\n# Strong - covers situations and synonyms\ndescription: >-\n Use when deploying to staging or production, running the release\n checklist, or when the user says 'ship it', 'deploy this', or\n 'push to prod'. Also use after merging to main when a deploy\n is expected.\n
The skill-creator helps you tune this iteratively.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-5-deploy-as-template-optional","level":3,"title":"Step 5: Deploy as Template (Optional)","text":"
If the skill should be available to all projects (not just this one), place it in internal/assets/claude/skills/ so ctx init deploys it to new projects automatically.
Most project-specific skills stay in .claude/skills/ and travel with the repo.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#skill-anatomy","level":2,"title":"Skill Anatomy","text":"
my-skill/\n SKILL.md # Required: frontmatter + instructions (<500 lines)\n scripts/ # Optional: deterministic code the skill can execute\n references/ # Optional: detail loaded on demand (not always)\n assets/ # Optional: output templates, not loaded into context\n
Key sections in SKILL.md:
Section Purpose Required? Frontmatter Name, description (trigger) Yes When to Use Positive triggers Yes When NOT to Use Prevents false activations Yes Process Steps and commands Yes Examples Good/bad output pairs Recommended Quality Checklist Verify before reporting completion For complex skills","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tips","level":2,"title":"Tips","text":"
Description is everything. A great skill with a vague description never fires. Spend time on trigger coverage: synonyms, concrete situations, edge cases.
Stay under 500 lines. If your skill is growing past this, move detail into references/ files and point to them from SKILL.md.
Do not duplicate the platform. If the agent already knows how to do something (edit files, run git commands), do not restate it. Tag paragraphs as Expert/Activation/Redundant and delete Redundant ones.
Explain why, not just what. \"Sort by date because users want recent results first\" beats \"ALWAYS sort by date.\" The agent generalizes from reasoning better than from rigid rules.
Test negative triggers. Make sure the skill does not fire on unrelated prompts. A skill that activates too broadly becomes noise.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Split work across multiple agents using git worktrees.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#see-also","level":2,"title":"See Also","text":"
Skills Reference: full listing of all bundled and project-local skills
Guide Your Agent: how commands, skills, and conversational patterns work together
Design Before Coding: the four-skill chain for front-loading design work
Claude Code's .claude/settings.local.json controls what the agent can do without asking. Over time, this file accumulates one-off permissions from individual sessions: Exact commands with hardcoded paths, duplicate entries, and stale skill references.
A noisy \"allowlist\" makes it harder to spot dangerous permissions and increases the surface area for unintended behavior.
Since settings.local.json is .gitignored, it drifts independently of your codebase. There is no PR review, no CI check: just whatever you clicked \"Allow\" on.
This recipe shows what a well-maintained permission file looks like and how to keep it clean.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Populates default ctx permissions /ctx-drift Detects missing or stale permission entries /ctx-sanitize-permissions Audits for dangerous patterns (security-focused)","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#recommended-defaults","level":2,"title":"Recommended Defaults","text":"
After running ctx init, your settings.local.json will have the ctx defaults pre-populated. Here is an opinionated safe starting point for a Go project using ctx:
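A sketch of that starting point, assembled from the guidance in this section (illustrative, not the literal output of ctx init):

```json
{
  "permissions": {
    "allow": [
      "Bash(ctx:*)",
      "Bash(make:*)",
      "Bash(go:*)",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Skill(ctx-*)"
    ],
    "deny": [
      "Bash(sudo:*)",
      "Bash(git push:*)",
      "Bash(rm -rf:*)",
      "Bash(curl:*)",
      "Bash(wget:*)"
    ]
  }
}
```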
The goal is intentional permissions: Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
Use wildcards for trusted binaries: If you trust the binary (your own project's CLI, make, go), a single wildcard like Bash(ctx:*) beats twenty subcommand entries. It reduces noise and means new subcommands work without re-prompting.
Keep git commands granular: Unlike ctx or make, git has both safe commands (git log, git status) and destructive ones (git reset --hard, git clean -f). Listing safe commands individually prevents accidentally pre-approving dangerous ones.
Pre-approve all ctx- skills: Skills shipped with ctx (Skill(ctx-*)) are safe to pre-approve. They are part of your project and you control their content. This prevents the agent from prompting on every skill invocation.
ctx init automatically populates permissions.deny with rules that block dangerous operations. Deny rules are evaluated before allow rules: A denied pattern always prompts the user, even if it also matches an allow entry.
The defaults block:
Pattern Why Bash(sudo *) Cannot enter password; will hang Bash(git push *) Must be explicit user action Bash(rm -rf /*) etc. Recursive delete of system/home directories Bash(curl *) / wget Arbitrary network requests Bash(chmod 777 *) World-writable permissions Read/Edit(**/.env*) Secrets and credentials Read(**/*.pem, *.key) Private keys
Read/Edit Deny Rules
Read() and Edit() deny rules have known upstream enforcement issues (claude-code #6631, #24846).
They are included as defense-in-depth and intent documentation.
Blocked by default deny rules: no action needed, ctx init handles these:
Pattern Risk Bash(git push:*) Must be explicit user action Bash(sudo:*) Privilege escalation Bash(rm -rf:*) Recursive delete with no confirmation Bash(curl:*) / Bash(wget:*) Arbitrary network requests
Requires manual discipline: Never add these to allow:
Pattern Risk Bash(git reset:*) Can discard uncommitted work Bash(git clean:*) Deletes untracked files Skill(ctx-sanitize-permissions) Edits this file: self-modification vector Skill(release) Runs the release pipeline: high impact","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#hooks-regex-safety-net","level":2,"title":"Hooks: Regex Safety Net","text":"
Deny rules handle prefix-based blocking natively. Hooks complement them by catching patterns that require regex matching: Things deny rules can't express.
The ctx plugin ships these blocking hooks:
Hook What it blocks ctx system block-non-path-ctx Running ctx from wrong path
Project-local hooks (not part of the plugin) catch regex edge cases:
Hook What it blocks block-dangerous-commands.sh Mid-command sudo/git push (after &&), copies to bin dirs, absolute-path ctx
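The core of such a hook is a regex check that prefix-based rules cannot express. A minimal sketch of the matching logic (the pattern here is illustrative, not the one shipped in block-dangerous-commands.sh; a real PreToolUse hook would read the command from the hook's JSON input and exit 2 to block):

```shell
# Catch sudo or git push smuggled in after a chain operator,
# which a prefix-based deny rule like Bash(sudo:*) cannot see.
is_dangerous() {
  printf '%s' "$1" | grep -Eq '(&&|;|\|\|)[[:space:]]*(sudo|git push)'
}

if is_dangerous "make build && sudo make install"; then
  echo "blocked"
fi
is_dangerous "git status" || echo "allowed"
```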
Pre-Approved + Hook-Blocked = Silent Block
If you pre-approve a command that a hook blocks, the user never sees the confirmation dialog. The agent gets a block response and must handle it, which is confusing.
It's better not to pre-approve commands that hooks are designed to intercept.
If manual cleanup is too tedious, use a golden image to automate it:
Snapshot a curated permission set, then restore at session start to automatically drop session-accumulated permissions. See the Permission Snapshots recipe for the full workflow.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#adapting-for-other-languages","level":2,"title":"Adapting for Other Languages","text":"
The recommended defaults above are Go-specific. For other stacks, swap the build/test tooling:
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/context-health/","level":1,"title":"Detecting and Fixing Drift","text":"","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-problem","level":2,"title":"The Problem","text":"
ctx files drift: you rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist, TASKS.md is 80 percent completed checkboxes, and CONVENTIONS.md describes patterns you stopped using two months ago.
Stale context is worse than no context:
An AI tool that trusts outdated references will hallucinate confidently.
This recipe shows how to detect drift, fix it, and keep your .context/ directory lean and accurate.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tldr","level":2,"title":"TL;DR","text":"
ctx drift # detect problems\nctx drift --fix # auto-fix the easy ones\nctx sync --dry-run && ctx sync # reconcile after refactors\nctx compact --archive # archive old completed tasks\nctx status # verify\n
Or just ask your agent: \"Is our context clean?\"
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| `ctx drift` | Command | Detect stale paths, missing files, violations |
| `ctx drift --fix` | Command | Auto-fix simple issues |
| `ctx sync` | Command | Reconcile context with codebase structure |
| `ctx compact` | Command | Archive completed tasks, clean up empty sections |
| `ctx status` | Command | Quick health overview |
| `/ctx-drift` | Skill | Structural plus semantic drift detection |
| `/ctx-architecture` | Skill | Refresh ARCHITECTURE.md from actual codebase |
| `/ctx-status` | Skill | In-session context summary |
| `/ctx-prompt-audit` | Skill | Audit prompt quality and token efficiency |
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-workflow","level":2,"title":"The Workflow","text":"
The best way to maintain context health is conversational: Ask your agent, guide it, and let it detect problems, explain them, and fix them with your approval. CLI commands exist for CI pipelines, scripting, and fine-grained control.
For day-to-day maintenance, talk to your agent.
Your Questions Reinforce the Pattern
Asking \"is our context clean?\" does two things:
It triggers a drift check right now
It reinforces the habit
This is reinforcement, not enforcement.
Do not wait for the agent to be proactive on its own:
Guide your agent, especially in early sessions.
Over time, you will ask less and the agent will start offering more.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-1-ask-your-agent","level":3,"title":"Step 1: Ask Your Agent","text":"
The simplest way to check context health:
Is our context clean?\nAnything stale?\nHow healthy are our context files?\n
Or invoke the skill directly:
/ctx-drift\n
The agent performs two layers of analysis:
Layer 1, structural checks (via ctx drift): Dead paths, missing files, completed task counts, constitution violations. Fast and programmatic.
Layer 2, semantic analysis (agent-driven): Does CONVENTIONS.md describe patterns the code no longer follows? Does DECISIONS.md contain entries whose rationale no longer applies? Are there learnings about bugs that are now fixed? This is where the agent adds value the CLI cannot: It reads both context files and source code and compares them.
The agent reports both layers together, explains each finding in plain language, and offers to fix what it can.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-2-maintenance-at-session-start","level":3,"title":"Step 2: Maintenance at Session Start","text":"
You do not need to ask explicitly.
Using Claude Code
ctx ships with Claude Code hooks that remind the agent to take initiative at the right moments: checking context health at session start, offering to persist learnings before you quit, and flagging drift when it matters. The agent stays proactive without you having to prompt it:
Agent: Good morning. I've loaded the context files. A few things\n before we start:\n\n - ARCHITECTURE.md references `pkg/auth/` which is now empty\n - DECISIONS.md hasn't been updated in 40 days\n - There are 18 completed tasks ready for archival\n\n Want me to run a quick maintenance pass, or should we jump\n straight into today's work?\n
☝️ This is what persistent, initiative-driven sessions feel like when context is treated as a system instead of a prompt.
If the agent does not offer this on its own, a gentle nudge is enough:
Anything stale before we start?\nHow's the context looking?\n
This turns maintenance from a scheduled chore into a conversation that happens when it matters.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-3-real-time-detection-during-work","level":3,"title":"Step 3: Real-Time Detection During Work","text":"
Agents can notice drift while working: When a mismatch is directly in the path of their current task. If an agent reads ARCHITECTURE.md to find where to add a handler and internal/handlers/ doesn't exist, it will notice because the stale reference blocks its work:
Agent: ARCHITECTURE.md references `internal/handlers/` but that directory\n doesn't exist. I'll look at the actual source tree to find where\n handlers live now.\n
This happens reliably when the drift intersects the task. What is less reliable is the agent generalizing from one mismatch to "there might be more stale references; let me run drift detection." That leap requires the agent to know /ctx-drift exists and to decide the current task should pause for maintenance.
If you want that behavior, reinforce it:
Good catch. Yes, run /ctx-drift and clean up any other stale references.\n
Over time, agents that have seen this pattern will start offering proactively. But do not expect it from a cold start.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-4-archival-and-cleanup","level":3,"title":"Step 4: Archival and Cleanup","text":"
ctx drift detects when TASKS.md has more than 10 completed items and flags it as a staleness warning. Running ctx drift --fix archives completed tasks automatically.
You can also run /ctx-archive to compact on demand.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#knowledge-health-flow","level":3,"title":"Knowledge Health Flow","text":"
Over time, LEARNINGS.md and DECISIONS.md accumulate entries that overlap or partially repeat each other. The check-persistence hook detects when entry counts exceed a configurable threshold and surfaces a nudge:
\"LEARNINGS.md has 25+ entries. Consider running /ctx-consolidate to merge overlapping items.\"
The consolidation workflow:
Review: /ctx-consolidate groups entries by keyword similarity and presents candidate merges for your approval.
Merge: Approved groups are combined into single entries that preserve the key information from each original.
Archive: Originals move to .context/archive/, not deleted -- the full history is preserved in git and the archive directory.
Verify: Run ctx drift after consolidation to confirm no cross-references were broken by the merge.
This replaces ad-hoc cleanup with a repeatable, nudge-driven cycle: detect accumulation, review candidates, merge with approval, archive originals.
See also: Knowledge Capture for the recording workflow that feeds into this maintenance cycle.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-doctor-the-superset-check","level":2,"title":"ctx doctor: The Superset Check","text":"
ctx doctor combines drift detection with hook auditing, configuration checks, event logging status, and token size reporting in a single command. If you want one command that covers structural health, hooks, and state:
ctx doctor # everything in one pass\nctx doctor --json # machine-readable for scripting\n
Use /ctx-doctor Too
For agent-driven diagnosis that adds semantic analysis on top of the structural checks, use /ctx-doctor.
See the Troubleshooting recipe for the full workflow.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#cli-reference","level":2,"title":"CLI Reference","text":"
The conversational approach above uses CLI commands under the hood. When you need direct control, use the commands directly.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift","level":3,"title":"ctx drift","text":"
Scan context files for structural problems:
ctx drift\n
Sample output:
Drift Report\n============\n\nWarnings (3):\n ARCHITECTURE.md:14 path \"internal/api/router.go\" does not exist\n ARCHITECTURE.md:28 path \"pkg/auth/\" directory is empty\n CONVENTIONS.md:9 path \"internal/handlers/\" not found\n\nViolations (1):\n TASKS.md 31 completed tasks (recommend archival)\n\nStaleness:\n DECISIONS.md last modified 45 days ago\n LEARNINGS.md last modified 32 days ago\n\nExit code: 1 (warnings found)\n
| Level | Meaning | Action |
| --- | --- | --- |
| Warning | Stale path references, missing files | Fix or remove |
| Violation | Constitution rule heuristic failures, heavy clutter | Fix soon |
| Staleness | Files not updated recently | Review content |
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift-fix","level":3,"title":"ctx drift --fix","text":"
Auto-fix mechanical issues:
ctx drift --fix\n
This handles the mechanical fixes: removing dead path references, updating unambiguous renames, and clearing empty sections. Issues that require judgment are flagged but left for you.
Run ctx drift again afterward to confirm what remains.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-sync","level":3,"title":"ctx sync","text":"
After a refactor, reconcile context with the actual codebase structure:
ctx sync scans for structural changes, compares with ARCHITECTURE.md, checks for new dependencies worth documenting, and identifies context referring to code that no longer exists.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-compact","level":3,"title":"ctx compact","text":"
Consolidate completed tasks and clean up empty sections:
ctx compact # move completed tasks to Completed section,\n # remove empty sections\nctx compact --archive # also archive old tasks to .context/archive/\n
Tasks: moves completed items (with all subtasks done) into the Completed section of TASKS.md
All files: removes empty sections left behind
With --archive: writes tasks older than 7 days to .context/archive/tasks-YYYY-MM-DD.md
Without --archive, nothing is deleted: Tasks are reorganized in place.
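The date-stamped archive name follows the pattern above; as a sketch of the naming scheme (not ctx's actual code), it corresponds to:

```shell
# Build the archive path ctx compact --archive would use today,
# matching the tasks-YYYY-MM-DD.md pattern.
archive=".context/archive/tasks-$(date +%Y-%m-%d).md"
echo "$archive"
```

Because the filename is one-per-day, repeated archival runs on the same day append to the same snapshot rather than scattering tasks across files.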
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-status","level":3,"title":"ctx status","text":"
Quick health overview:
ctx status --verbose\n
Shows file counts, token estimates, modification times, and drift warnings in a single glance.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-prompt-audit","level":3,"title":"/ctx-prompt-audit","text":"
Checks whether your context files are readable, compact, and token-efficient for the model.
/ctx-prompt-audit\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Conversational approach (recommended):
Is our context clean? -> agent runs structural plus semantic checks\nFix what you can -> agent auto-fixes and proposes edits\nArchive the done tasks -> agent runs ctx compact --archive\nHow's token usage? -> agent checks ctx status\n
CLI approach (for CI, scripts, or direct control):
ctx drift # 1. Detect problems\nctx drift --fix # 2. Auto-fix the easy ones\nctx sync --dry-run && ctx sync # 3. Reconcile after refactors\nctx compact --archive # 4. Archive old completed tasks\nctx status # 5. Verify\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tips","level":2,"title":"Tips","text":"
Agents cross-reference context files with source code during normal work. When drift intersects their current task, they will notice: a renamed package, a deleted directory, a path that doesn't resolve. But they rarely generalize from one mismatch to a full audit on their own. Reinforce the pattern: when an agent mentions a stale reference, ask it to run /ctx-drift. Over time, it starts offering.
When an agent says \"this reference looks stale,\" it is usually right.
Semantic drift is more damaging than structural drift: ctx drift catches dead paths. But CONVENTIONS.md describing a pattern your code stopped following three weeks ago is worse. When you ask \"is our context clean?\", the agent can do both checks.
Use ctx status as a quick check: It shows file counts, token estimates, and drift warnings in a single glance. Good for a fast \"is everything ok?\" before diving into work.
Drift detection in CI: add ctx drift --json to your CI pipeline and fail on exit code 3 (violations). This catches constitution-level problems before they reach upstream.
Do not over-compact: Completed tasks have historical value. The --archive flag preserves them in .context/archive/ so you can search past work without cluttering active context.
Sync is cautious by default: Use --dry-run after large refactors, then apply.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#next-up","level":2,"title":"Next Up","text":"
Claude Code Permission Hygiene →: Recommended permission defaults and maintenance workflow for Claude Code.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Tracking Work Across Sessions: task lifecycle and archival
Persisting Decisions, Learnings, and Conventions: keeping knowledge files current
The Complete Session: where maintenance fits in the daily workflow
CLI Reference: full flag documentation for all commands
Context Files: structure and purpose of each .context/ file
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/customizing-hook-messages/","level":1,"title":"Customizing Hook Messages","text":"","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-problem","level":2,"title":"The Problem","text":"
ctx hooks speak ctx's language, not your project's. The QA gate says \"lint the ENTIRE project\" and \"make build,\" but your Python project uses pytest and ruff. The post-commit nudge suggests running lints, but your project uses npm test. You could remove the hook entirely, but then you lose the logic (counting, state tracking, adaptive frequency) just to change the words.
How do you customize what hooks say without removing what they do?
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tldr","level":2,"title":"TL;DR","text":"
ctx system message list # see all hooks and their messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default to .context/ for editing\nctx system message reset qa-reminder gate # revert to embedded default\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#commands-used","level":2,"title":"Commands Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| `ctx system message list` | CLI command | Show all hook messages with category and override status |
| `ctx system message show` | CLI command | Print the effective message template |
| `ctx system message edit` | CLI command | Copy embedded default to .context/ for editing |
| `ctx system message reset` | CLI command | Delete user override, revert to default |
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#how-it-works","level":2,"title":"How It Works","text":"
Hook messages use a 3-tier fallback:
User override: .context/hooks/messages/{hook}/{variant}.txt
Embedded default: compiled into the ctx binary
Hardcoded fallback: belt-and-suspenders safety net
The hook logic (when to fire, counting, state tracking, cooldowns) is unchanged. Only the content (what text gets emitted) comes from the template. You customize what the hook says without touching how it decides to speak.
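The lookup order can be sketched as a small shell function. This is a sketch only: `CTX_DEFAULTS` is a stand-in directory for the embedded defaults, which in reality are compiled into the ctx binary.

```shell
# Sketch of the 3-tier message lookup. CTX_DEFAULTS stands in for the
# embedded defaults, which really live inside the ctx binary.
resolve_message() {
  hook="$1"; variant="$2"
  override=".context/hooks/messages/$hook/$variant.txt"
  default="$CTX_DEFAULTS/$hook/$variant.txt"
  if [ -f "$override" ]; then
    cat "$override"            # tier 1: user override wins
  elif [ -f "$default" ]; then
    cat "$default"             # tier 2: embedded default
  else
    echo "fallback message"    # tier 3: hardcoded safety net
  fi
}
```

The practical consequence: dropping a file into `.context/hooks/messages/` changes what a hook says immediately, and deleting that file restores the default.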
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#finding-the-original-templates","level":3,"title":"Finding the Original Templates","text":"
The default templates live in the ctx source tree at:
You can also browse them on GitHub: internal/assets/hooks/messages/
Or use ctx system message show to print any template without digging through source code:
ctx system message show qa-reminder gate # QA gate instructions\nctx system message show check-persistence nudge # persistence nudge\nctx system message show post-commit nudge # post-commit reminder\n
The show output includes the template source and available variables -- everything you need to write a replacement.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#template-variables","level":3,"title":"Template Variables","text":"
Some messages use Go text/template variables for dynamic content:
No context files updated in {{.PromptsSinceNudge}}+ prompts.\nHave you discovered learnings, made decisions,\nestablished conventions, or completed tasks\nworth persisting?\n
The show and edit commands list available variables for each message. When writing a replacement, keep the same {{.VariableName}} placeholders to preserve dynamic content. Placeholders that reference undefined variables render as <no value>: no error, but the output may look odd.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#intentional-silence","level":3,"title":"Intentional Silence","text":"
An empty template file (0 bytes or whitespace-only) means \"don't emit a message\". The hook still runs its logic but produces no output. This lets you silence specific messages without removing the hook from hooks.json.
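For example, to silence the post-commit nudge (paths follow the override layout described above):

```shell
# A 0-byte override file means intentional silence: the hook's
# counting and state tracking still run, but no message is emitted.
mkdir -p .context/hooks/messages/post-commit
: > .context/hooks/messages/post-commit/nudge.txt
```

To restore the default message later, delete the empty file or run `ctx system message reset post-commit nudge`.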
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-python-project-qa-gate","level":2,"title":"Example: Python Project QA Gate","text":"
The default QA gate says \"lint the ENTIRE project\" and references make lint. For a Python project, you want pytest and ruff:
# See the current default\nctx system message show qa-reminder gate\n\n# Copy it to .context/ for editing\nctx system message edit qa-reminder gate\n\n# Edit the override\n
Replace the content in .context/hooks/messages/qa-reminder/gate.txt:
HARD GATE! DO NOT COMMIT without completing ALL of these steps first:\n(1) Run the full test suite: pytest -x\n(2) Run the linter: ruff check .\n(3) Verify a clean working tree\nRun tests and linter BEFORE every git commit, no exceptions.\n
The hook still fires on every Edit call. The logic is identical. Only the instructions changed.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-silencing-ceremony-nudges","level":2,"title":"Example: Silencing Ceremony Nudges","text":"
The ceremony check nudges you to use /ctx-remember and /ctx-wrap-up. If your team has a different workflow and finds these noisy:
ctx system message edit check-ceremonies both\nctx system message edit check-ceremonies remember\nctx system message edit check-ceremonies wrapup\n
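After `edit` copies the defaults into `.context/`, truncate each copy to zero bytes; per the empty-template rule above, the hooks then run silently:

```shell
# Empty each ceremony override so the hooks run without emitting output.
for variant in both remember wrapup; do
  f=".context/hooks/messages/check-ceremonies/$variant.txt"
  mkdir -p "$(dirname "$f")"   # ensure the override directory exists
  : > "$f"                     # 0 bytes = intentional silence
done
```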
With the override files left empty (zero bytes), the hooks still track ceremony usage internally, but they no longer emit any visible output.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-javascript-project-post-commit","level":2,"title":"Example: JavaScript Project Post-Commit","text":"
The default post-commit nudge mentions generic \"lints and tests.\" For a JavaScript project:
ctx system message edit post-commit nudge\n
Replace with:
Commit succeeded. 1. Offer context capture to the user: Decision (design\nchoice?), Learning (gotcha?), or Neither. 2. Ask the user: \"Want me to\nrun npm test and eslint before you push?\" Do NOT push. The user pushes\nmanually.\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-two-categories","level":2,"title":"The Two Categories","text":"
Not all messages are equal. The list command shows each message's category:
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#customizable-17-messages","level":3,"title":"Customizable (17 messages)","text":"
Messages that are opinions: project-specific wording that benefits from customization. These are the primary targets for override.
Templates that reference undefined variables render <no value>: no error, graceful degradation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tips","level":2,"title":"Tips","text":"
Override files are version-controlled: they live in .context/ alongside your other context files. Team members get the same customized messages.
Start with show: always check the current default before editing. The embedded template is the baseline your override replaces.
Use reset to undo: if a customization causes confusion, reset reverts to the embedded default instantly.
Empty file = silence: you don't need to delete the hook. An empty override file silences the message while preserving the hook's logic.
JSON output for scripting: ctx system message list --json returns structured data for automation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#see-also","level":2,"title":"See Also","text":"
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: verifying hooks are running and auditing their output
Understanding how packages relate to each other is the first step in onboarding, refactoring, and architecture review. ctx dep generates dependency graphs from source code so you can see the structure at a glance instead of tracing imports by hand.
# Auto-detect ecosystem and output Mermaid (default)\nctx dep\n\n# Table format for a quick terminal overview\nctx dep --format table\n\n# JSON for programmatic consumption\nctx dep --format json\n
By default, only internal (first-party) dependencies are shown. Add --external to include third-party packages:
ctx dep --external\nctx dep --external --format table\n
This is useful when auditing transitive dependencies or checking which packages pull in heavy external libraries.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#when-to-use-it","level":2,"title":"When to Use It","text":"
Onboarding. Generate a Mermaid graph and drop it into the project wiki. New contributors see the architecture before reading code.
Refactoring. Before moving packages, check what depends on them. Combine with ctx drift to find stale references after the move.
Architecture review. Table format gives a quick overview; Mermaid format goes into design docs and PRs.
Pre-commit. Run in CI to detect unexpected new dependencies between packages.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#combining-with-other-commands","level":2,"title":"Combining with Other Commands","text":"","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#refactoring-with-ctx-drift","level":3,"title":"Refactoring with ctx drift","text":"
# See the dependency structure before refactoring\nctx dep --format table\n\n# After moving packages, check for broken references\nctx drift\n
Use JSON output as input for context files or architecture documentation:
# Generate a dependency snapshot for the context directory\nctx dep --format json > .context/deps.json\n\n# Or pipe into other tools\nctx dep --format mermaid >> docs/architecture.md\n
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#monorepos-and-multi-ecosystem-projects","level":2,"title":"Monorepos and Multi-Ecosystem Projects","text":"
In a monorepo with multiple ecosystems, ctx dep picks the first manifest it finds (Go beats Node.js beats Python beats Rust). Use --type to target a specific ecosystem:
# In a repo with both go.mod and package.json\nctx dep --type node\nctx dep --type go\n
For separate subdirectories, run from each root:
cd services/api && ctx dep --format table\ncd frontend && ctx dep --type node --format mermaid\n
Start with table format. It is the fastest way to get a mental model of the dependency structure. Switch to Mermaid when you need a visual for documentation or a PR.
Pipe JSON to jq. Filter for specific packages, count edges, or extract subgraphs programmatically.
Skip --external unless you need it. Internal-only graphs are cleaner and load faster. Add external deps when you are specifically auditing third-party usage.
Force --type in CI. Auto-detection is convenient locally, but explicit types prevent surprises when the repo structure changes.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/design-before-coding/","level":1,"title":"Design Before Coding","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-problem","level":2,"title":"The Problem","text":"
You start coding a feature. Halfway through, you realize the approach doesn't handle a key edge case. You refactor. Then you discover the CLI interface doesn't fit the existing patterns. More refactoring.
The design work happened during implementation, mixed in with debugging and trial-and-error. The result works, but the spec was never written down, the trade-offs were never recorded, and the next session has no idea why things are shaped this way.
How do you front-load design so the implementation is straightforward?
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tldr","level":2,"title":"TL;DR","text":"
/ctx-brainstorm # explore the design space\n/ctx-spec # write the spec document\n/ctx-add-task # break it into tasks\n/ctx-implement # execute step-by-step\n
Four skills, used in sequence. Each produces an artifact that feeds the next.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| `/ctx-brainstorm` | Skill | Structured design dialogue: explore approaches |
| `/ctx-spec` | Skill | Scaffold and fill out a spec from the template |
| `/ctx-add-task` | Skill | Add implementation tasks to TASKS.md |
| `/ctx-implement` | Skill | Execute a plan step-by-step with verification |
| `/ctx-add-decision` | Skill | Record design choices made during brainstorming |
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-1-brainstorm-the-design","level":3,"title":"Step 1: Brainstorm the Design","text":"
Start with /ctx-brainstorm when you have a rough idea but haven't committed to an approach. The skill follows a gated process:
Understand context: what exists today, what's the gap
Clarify the idea: one question at a time, not a wall of questions
Lock understanding: confirm before exploring solutions
Explore 2-3 approaches: with trade-offs for each
Stress-test: poke holes in the chosen approach
Present the design: detailed enough to implement from
You: \"I want to add RSS feed generation to the blog.\"\n\nAgent: \"Before we design, let me understand: is this for the journal\n site, the docs site, or both?\"\n\nYou: \"Journal site only.\"\n\nAgent: [asks about update frequency, content scope, existing infra...]\n [presents 3 approaches with trade-offs]\n\nYou: \"Option 2 - generate at build time.\"\n\nAgent: \"Let me stress-test that. What happens when...\"\n [probes edge cases]\n \"Design looks solid. Want me to record the approach as a\n decision and write a spec?\"\n
The brainstorm produces a validated design in conversation. It doesn't produce a file - that's what the spec is for.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-2-write-the-spec","level":3,"title":"Step 2: Write the Spec","text":"
Once the design is clear, use /ctx-spec to produce the written artifact. The skill reads the project's spec template (specs/tpl/spec-template.md) and walks through each section:
You: /ctx-spec (rss feed generation)\n\nAgent: \"Creating specs/rss-feed.md from template.\"\n\n \"Problem: What user-visible problem does this solve? Why now?\"\n\nYou: \"Users can't subscribe to blog updates. They have to check\n the site manually.\"\n\nAgent: [writes the Problem section, moves to Approach]\n\n \"Approach: High-level, how does this work?\"\n\nYou: \"Generate feed.xml during ctx journal site. Atom format.\"\n\nAgent: [writes Approach, moves to Happy Path, then Edge Cases...]\n
The skill spends extra time on Edge Cases and Error Handling: These are where specs earn their value. The skill pushes for at least three edge cases and won't accept \"none\" without a challenge.
Sections that don't apply can be skipped. The result is a complete spec at specs/{feature-name}.md.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-3-break-into-tasks","level":3,"title":"Step 3: Break Into Tasks","text":"
After the spec is written, the skill offers to create tasks:
Agent: \"Spec written to specs/rss-feed.md. Want me to break this\n into tasks in TASKS.md?\"\n\nYou: \"Yes.\"\n\nAgent: [creates a Phase section referencing the spec, adds tasks]\n \"Added Phase 0.8: RSS/Atom Feed Generation with 4 tasks:\n - [ ] Implement feed.xml generation in ctx journal site\n - [ ] Add --feed flag and .ctxrc feed config\n - [ ] Write tests for feed generation\n - [ ] Add recipe and update docs\"\n
Each task is independently completable and references the spec via Spec: specs/rss-feed.md in the Phase header.
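In TASKS.md, that phase header and its spec reference might look like this (layout is illustrative; follow your project's existing TASKS.md conventions):

```markdown
## Phase 0.8: RSS/Atom Feed Generation
Spec: specs/rss-feed.md

- [ ] Implement feed.xml generation in ctx journal site
- [ ] Add --feed flag and .ctxrc feed config
- [ ] Write tests for feed generation
- [ ] Add recipe and update docs
```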
You can also invoke /ctx-add-task directly at any point to add individual tasks.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-4-implement-step-by-step","level":3,"title":"Step 4: Implement Step-by-Step","text":"
Pick up tasks with /ctx-implement. The skill loads the spec, breaks work into atomic steps, and checkpoints after every 3-5 steps with a build and test verification:
You: /ctx-implement (specs/rss-feed.md)\n\nAgent: \"Loaded spec. Breaking into steps:\n 1. Add FeedEntry type to journal package\n 2. Implement feed.xml template\n 3. Wire into site generation pipeline\n [builds and tests after step 3]\n 4. Add --feed flag to ctx journal site\n 5. Add .ctxrc feed configuration\n [builds and tests after step 5]\n ...\"\n
If a build or test fails, the agent stops, diagnoses, and fixes before continuing.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#when-to-skip-steps","level":2,"title":"When to Skip Steps","text":"
Not every feature needs all four steps. Use your judgment:
| Situation | Start at |
|---|---|
| Vague idea, multiple valid approaches | Step 1: Brainstorm |
| Clear approach, need to document it | Step 2: Spec |
| Spec already exists, need to plan work | Step 3: Tasks |
| Tasks exist, ready to code | Step 4: Implement |
A brainstorm without a spec is fine for small decisions. A spec without a brainstorm is fine when the design is obvious. The full chain is for features complex enough to warrant front-loaded design.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need skill names. Natural language works:
| You say | What happens |
|---|---|
| \"Let's think through this feature\" | /ctx-brainstorm |
| \"Spec this out\" | /ctx-spec |
| \"Write a design doc for...\" | /ctx-spec |
| \"Break this into tasks\" | /ctx-add-task |
| \"Implement the spec\" | /ctx-implement |
| \"Let's design before we build\" | Starts at brainstorm |
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tips","level":2,"title":"Tips","text":"
Brainstorm first when uncertain. If you can articulate the approach in two sentences, skip to spec. If you can't, brainstorm.
Specs prevent scope creep. The Non-Goals section is as important as the approach. Writing down what you won't do keeps implementation focused.
Edge cases are the point. A spec that only describes the happy path isn't a spec - it's a wish. The /ctx-spec skill pushes for at least 3 edge cases because that's where designs break.
Record decisions during brainstorming. When you choose between approaches, the agent offers to persist the trade-off via /ctx-add-decision. Accept - future sessions need to know why, not just what.
Specs are living documents. Update them when implementation reveals new constraints. A spec that diverges from reality is worse than no spec.
The spec template is customizable. Edit specs/tpl/spec-template.md to match your project's needs. The /ctx-spec skill reads whatever template it finds there.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-spec: spec scaffolding from template
Skills Reference: /ctx-implement: step-by-step execution with verification
Tracking Work Across Sessions: task lifecycle and archival
Importing Claude Code Plans: turning ephemeral plans into permanent specs
Persisting Decisions, Learnings, and Conventions: capturing design trade-offs
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/external-context/","level":1,"title":"Keeping Context in a Separate Repo","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-problem","level":2,"title":"The Problem","text":"
ctx files contain project-specific decisions, learnings, conventions, and tasks. By default, they live in .context/ inside the project tree, and that works well when the context can be public.
But sometimes you need the context outside the project:
Open-source projects with private context: Your architectural notes, internal task lists, and scratchpad entries shouldn't ship with the public repo.
Compliance or IP concerns: Context files reference sensitive design rationale that belongs in a separate access-controlled repository.
Personal preference: You want a single context repo that covers multiple projects, or you just prefer keeping notes separate from code.
ctx supports this through three configuration methods. This recipe shows how to set them up and how to tell your AI assistant where to find the context.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tldr","level":2,"title":"TL;DR","text":"
All ctx commands now use the external directory automatically.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context directory --context-dir Global flag Point ctx at a non-default directory --allow-outside-cwd Global flag Permit context outside the project root .ctxrc Config file Persist the context directory setting CTX_DIR Env variable Override context directory per-session /ctx-status Skill Verify context is loading correctly","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-1-create-the-private-context-repo","level":3,"title":"Step 1: Create the Private Context Repo","text":"
Create a separate repository for your context files. This can live anywhere: a private GitHub repo, a shared drive, a sibling directory:
# Create the context repo\nmkdir ~/repos/myproject-context\ncd ~/repos/myproject-context\ngit init\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-2-initialize-ctx-pointing-at-it","level":3,"title":"Step 2: Initialize ctx Pointing at It","text":"
From your project root, initialize ctx with --context-dir pointing to the external location. Because the directory is outside your project tree, you also need --allow-outside-cwd:
cd ~/repos/myproject\nctx --context-dir ~/repos/myproject-context \\\n --allow-outside-cwd \\\n init\n
This creates the full .context/-style file set inside ~/repos/myproject-context/ instead of ~/repos/myproject/.context/.
Boundary Validation
ctx validates that the .context directory is within the current working directory.
If your external directory is truly outside the project root:
Either every ctx command needs --allow-outside-cwd,
or you can persist the setting in .ctxrc (next step).
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-3-make-it-stick","level":3,"title":"Step 3: Make It Stick","text":"
Typing --context-dir and --allow-outside-cwd on every command is tedious. Pick one of these methods to make the configuration permanent.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-a-ctxrc-recommended","level":4,"title":"Option A: .ctxrc (Recommended)","text":"
Create a .ctxrc file in your project root:
# .ctxrc: committed to the project repo\ncontext_dir: ~/repos/myproject-context\nallow_outside_cwd: true\n
ctx reads .ctxrc automatically. Every command now uses the external directory without extra flags:
ctx status # reads from ~/repos/myproject-context\nctx add learning \"Redis MULTI doesn't roll back on error\"\n
Commit .ctxrc
.ctxrc belongs in the project repo. It contains no secrets: It's just a path and a boundary override.
.ctxrc lets teammates share the same configuration.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-b-ctx_dir-environment-variable","level":4,"title":"Option B: CTX_DIR Environment Variable","text":"
Good for CI pipelines, temporary overrides, or when you don't want to commit a .ctxrc:
# In your shell profile (~/.bashrc, ~/.zshrc)\nexport CTX_DIR=~/repos/myproject-context\n
Or for a single session:
CTX_DIR=~/repos/myproject-context ctx status\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-c-shell-alias","level":4,"title":"Option C: Shell Alias","text":"
If you prefer a shell alias over .ctxrc:
# ~/.bashrc or ~/.zshrc\nalias ctx='ctx --context-dir ~/repos/myproject-context --allow-outside-cwd'\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#priority-order","level":4,"title":"Priority Order","text":"
When multiple methods are set, ctx resolves the context directory in this order (highest priority first):
--context-dir flag
CTX_DIR environment variable
context_dir in .ctxrc
Default: .context/
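The same resolution order can be sketched as a small shell function (a simplification for illustration; the actual logic lives inside the ctx binary):

```shell
#!/bin/sh
# Sketch of ctx's context-dir resolution order (illustrative, not ctx's real code).
unset CTX_DIR               # start from a clean environment for the demo
cd "$(mktemp -d)"           # empty directory: no .ctxrc present

resolve_context_dir() {
  flag="$1"                                                  # value of --context-dir, if any
  if [ -n "$flag" ]; then echo "$flag"; return; fi           # 1. --context-dir flag
  if [ -n "$CTX_DIR" ]; then echo "$CTX_DIR"; return; fi     # 2. CTX_DIR env variable
  if [ -f .ctxrc ]; then
    rc=$(sed -n 's/^context_dir:[[:space:]]*//p' .ctxrc)     # 3. context_dir in .ctxrc
    if [ -n "$rc" ]; then echo "$rc"; return; fi
  fi
  echo ".context"                                            # 4. default
}

resolve_context_dir ""                                 # -> .context
(CTX_DIR=/tmp/ctxdir; resolve_context_dir "")          # -> /tmp/ctxdir (env beats default)
(CTX_DIR=/tmp/ctxdir; resolve_context_dir /flagdir)    # -> /flagdir (flag beats env)
```

Higher-priority sources short-circuit the check, which is why a one-off flag always wins even when .ctxrc and CTX_DIR are both set.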
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-4-agent-auto-discovery-via-bootstrap","level":3,"title":"Step 4: Agent Auto-Discovery via Bootstrap","text":"
When context lives outside the project tree, your AI assistant needs to know where to find it. The ctx system bootstrap command resolves the configured context directory and communicates it to the agent automatically:
$ ctx system bootstrap\nctx bootstrap\n=============\n\ncontext_dir: /home/user/repos/myproject-context\n\nFiles:\n CONSTITUTION.md, TASKS.md, DECISIONS.md, ...\n
The CLAUDE.md template generated by ctx init already instructs the agent to run ctx system bootstrap at session start. Because .ctxrc is in the project root, your agent inherits the external path automatically through that ctx system bootstrap call.
Here is the relevant section from CLAUDE.md for reference:
<!-- CLAUDE.md -->\n1. **Run `ctx system bootstrap`**: CRITICAL, not optional.\n This tells you where the context directory is. If it fails or returns\n no context_dir, STOP and warn the user.\n
Moreover, every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: /home/user/repos/myproject-context footer, so the agent remains anchored to the correct directory even in long sessions.
If you use CTX_DIR instead of .ctxrc, export it in your shell profile so the hook process inherits it:
export CTX_DIR=~/repos/myproject-context\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-5-share-with-teammates","level":3,"title":"Step 5: Share with Teammates","text":"
Teammates clone both repos and set up .ctxrc:
# Clone the project\ngit clone git@github.com:org/myproject.git\ncd myproject\n\n# Clone the private context repo\ngit clone git@github.com:org/myproject-context.git ~/repos/myproject-context\n
If .ctxrc is already committed to the project, they're done: ctx commands will find the external context automatically.
If teammates use different paths, each developer sets their own CTX_DIR:
export CTX_DIR=~/my-own-path/myproject-context\n
For encryption key distribution across the team, see the Syncing Scratchpad Notes recipe.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-6-day-to-day-sync","level":3,"title":"Step 6: Day-to-Day Sync","text":"
The external context repo has its own git history. Treat it like any other repo: Commit and push after sessions:
cd ~/repos/myproject-context\n\n# After a session\ngit add -A\ngit commit -m \"Session: refactored auth module, added rate-limit learning\"\ngit push\n
Your AI assistant can do this too. When ending a session:
You: \"Save what we learned and push the context repo.\"\n\nAgent: [runs ctx add learning, then commits and pushes the context repo]\n
You can also set up a post-session habit: project code gets committed to the project repo, context gets committed to the context repo.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the flags; simply ask your assistant:
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#set-up-your-system-using-natural-language","level":3,"title":"Set Up Your System Using Natural Language","text":"
You: \"Set up ctx to use ~/repos/myproject-context as the context directory.\"\n\nAgent: \"I'll create a .ctxrc in the project root pointing to that path.\n I'll also update CLAUDE.md so future sessions know where to find\n context. Want me to initialize the context files there too?\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#configure-separate-repo-for-context-folder-using-natural-language","level":3,"title":"Configure Separate Repo for .context Folder Using Natural Language","text":"
You: \"My context is in a separate repo. Can you load it?\"\n\nAgent: [reads .ctxrc, finds the path, loads context from the external dir]\n \"Loaded. You have 3 pending tasks, last session was about the auth\n refactor.\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tips","level":2,"title":"Tips","text":"
Start simple. If you don't need external context yet, don't set it up. The default .context/ in-tree is the easiest path. Move to an external repo when you have a concrete reason.
One context repo per project. Sharing a single context directory across multiple projects creates confusion. Keep the mapping 1:1.
Use .ctxrc over env vars when the path is stable. It's committed, documented, and works for the whole team without per-developer shell setup.
Don't forget the boundary flag. The most common error is Error: context directory is outside the project root. Set allow_outside_cwd: true in .ctxrc or pass --allow-outside-cwd.
Commit both repos at session boundaries. Context without code history (or code without context history) loses half the value.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#next-up","level":2,"title":"Next Up","text":"
The Complete Session →: Walk through a full ctx session from start to finish.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#see-also","level":2,"title":"See Also","text":"
Setting Up ctx Across AI Tools: initial setup recipe
Syncing Scratchpad Notes Across Machines: distribute encryption keys when context is shared
CLI Reference: all global flags including --context-dir and --allow-outside-cwd
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/guide-your-agent/","level":1,"title":"Guide Your Agent","text":"
Commands vs. Skills
Commands (ctx status, ctx add task) run in your terminal.
Skills (/ctx-reflect, /ctx-next) run inside your AI coding assistant.
Recipes combine both.
Think of commands as structure and skills as behavior.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#proactive-behavior","level":2,"title":"Proactive Behavior","text":"
These recipes show explicit commands and skills, but agents trained on the ctx playbook are proactive: They offer to save learnings after debugging, record decisions after trade-offs, create follow-up tasks after completing work, and suggest what to work on next.
Your questions train the agent. Asking \"what have we learned?\" or \"is our context clean?\" does two things:
It triggers the workflow right now,
and it reinforces the pattern.
The more you guide, the more the agent habituates the behavior and begins offering on its own.
Each recipe includes a Conversational Approach section showing these natural-language patterns.
Tip
Don't wait passively for proactive behavior: especially in early sessions.
Ask, guide, reinforce. Over time, you ask less and the agent offers more.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#next-up","level":2,"title":"Next Up","text":"
Setup Across AI Tools →: Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle from start to finish
Prompting Guide: general tips for working effectively with AI coding assistants
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/hook-output-patterns/","level":1,"title":"Hook Output Patterns","text":"","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-problem","level":2,"title":"The Problem","text":"
Claude Code hooks can output text, JSON, or nothing at all. But the format of that output determines who sees it and who acts on it.
Choose the wrong pattern, and your carefully crafted warning gets silently absorbed by the agent, or your agent-directed nudge gets dumped on the user as noise.
This recipe catalogs the known hook output patterns and explains when to use each one.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#tldr","level":2,"title":"TL;DR","text":"
Eight patterns from full control to full invisibility:
hard gate (exit 2),
VERBATIM relay (agent MUST show),
agent directive (behavioral nudge),
and silent side-effect (background work).
Most hooks belong in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-spectrum","level":2,"title":"The Spectrum","text":"
These patterns form a spectrum based on who decides what the user sees:
The spectrum runs from full hook control (hard gate) to full invisibility (silent side effect).
Most hooks belong somewhere in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-1-hard-gate","level":2,"title":"Pattern 1: Hard Gate","text":"
Block the tool call entirely. The agent cannot proceed: it must find another approach or tell the user.
echo '{\"decision\": \"block\", \"reason\": \"Use ctx from PATH, not ./ctx\"}'\n
When to use: Enforcing invariants that must never be violated: Constitution rules, security boundaries, destructive command prevention.
Hook type: PreToolUse only (Claude Code first-class mechanism).
Examples in ctx:
ctx system block-non-path-ctx: Enforces the PATH invocation rule
block-git-push.sh: Requires explicit user approval for pushes (project-local)
block-dangerous-commands.sh: Prevents sudo, copies to ~/.local/bin (project-local)
Trade-off: The agent gets a block response with a reason. Good reasons help the agent recover (\"use X instead\"); bad reasons leave it stuck.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-2-verbatim-relay","level":2,"title":"Pattern 2: VERBATIM Relay","text":"
Force the agent to show this to the user as-is. The explicit instruction overcomes the agent's tendency to silently absorb context.
echo \"IMPORTANT: Relay this warning to the user VERBATIM before answering their question.\"\necho \"\"\necho \"┌─ Journal Reminder ─────────────────────────────\"\necho \"│ You have 12 sessions not yet exported.\"\necho \"└────────────────────────────────────────────────\"\n
When to use: Actionable reminders the user needs to see regardless of what they asked: Stale backups, unimported sessions, resource warnings.
Hook type: UserPromptSubmit (runs before the agent sees the prompt).
Examples in ctx:
ctx system check-journal: Unexported sessions and unenriched entries
ctx system check-context-size: Context capacity warning
ctx system check-resources: Resource pressure (memory, swap, disk, load): DANGER only
ctx system check-freshness: Technology constant staleness warning
check-backup-age.sh: Stale backup warning (project-local)
Trade-off: Noisy if overused. Every VERBATIM relay adds a preamble before the agent's actual answer. Throttle with once-per-day markers or adaptive frequency.
Key detail: The phrase IMPORTANT: Relay this ... VERBATIM is what makes this work. Without it, agents tend to process the information internally and never surface it. The explicit instruction is the pattern: the box-drawing is just fancy formatting.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-3-agent-directive","level":2,"title":"Pattern 3: Agent Directive","text":"
Tell the agent to do something, not the user. The agent decides whether and how to involve the user.
echo \"┌─ Persistence Checkpoint (prompt #25) ───────────\"\necho \"│ No context files updated in 15+ prompts.\"\necho \"│ Have you discovered learnings, decisions,\"\necho \"│ or completed tasks worth persisting?\"\necho \"└──────────────────────────────────────────────────\"\n
When to use: Behavioral nudges. The hook detects a condition and asks the agent to consider an action. The user may never need to know.
Hook type: UserPromptSubmit.
Examples in ctx:
ctx system check-persistence: Nudges the agent to persist context
Trade-off: No guarantee the agent acts. The nudge is one signal among many in the context window. Strong phrasing helps (\"Have you...?\" is better than \"Consider...\"), but ultimately the agent decides.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-4-silent-context-injection","level":2,"title":"Pattern 4: Silent Context Injection","text":"
Load context with no visible output. The agent gets enriched without either party noticing.
ctx agent --budget 4000 >/dev/null || true\n
When to use: Background context loading that should be invisible. The agent benefits from the information, but neither it nor the user needs to know it happened.
Hook type: PreToolUse with .* matcher (runs on every tool call).
Examples in ctx:
The ctx agent PreToolUse hook: Injects project context silently
Trade-off: Adds latency to every tool call. Keep the injected content small and fast to generate.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-5-silent-side-effect","level":2,"title":"Pattern 5: Silent Side-Effect","text":"
Do work, produce no output: Housekeeping that needs no acknowledgment.
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
When to use: Cleanup, log rotation, temp file management. Anything where the action is the point and nobody needs to know it happened.
Hook type: Any hook where output is irrelevant.
Examples in ctx:
Log rotation, marker file cleanup, state directory maintenance
Trade-off: None, if the action is truly invisible. If it can fail in a way that matters, consider logging.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-6-conditional-relay","level":3,"title":"Pattern 6: Conditional Relay","text":"
Tell the agent to relay only if a condition holds in context.
echo \"If the user's question involves modifying .context/ files,\"\necho \"relay this warning VERBATIM:\"\necho \"\"\necho \"┌─ Context Integrity ─────────────────────────────\"\necho \"│ CONSTITUTION.md has not been verified in 7 days.\"\necho \"└────────────────────────────────────────────────\"\necho \"\"\necho \"Otherwise, proceed normally.\"\n
When to use: Warnings that only matter in certain contexts. Avoids noise when the user is doing unrelated work.
Trade-off: Depends on the agent's judgment about when the condition holds. More fragile than VERBATIM relay, but less noisy.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-7-suggested-action","level":3,"title":"Pattern 7: Suggested Action","text":"
Give the agent a specific command to propose to the user.
echo \"┌─ Stale Dependencies ──────────────────────────\"\necho \"│ go.sum is 30+ days newer than go.mod.\"\necho \"│ Suggested: run \\`go mod tidy\\`\"\necho \"│ Ask the user before proceeding.\"\necho \"└───────────────────────────────────────────────\"\n
When to use: The hook detects a fixable condition and knows the fix. Goes beyond a nudge: Gives the agent a concrete next step. The agent still asks for permission but knows exactly what to propose.
Trade-off: The suggestion might be wrong or outdated. The \"ask the user before proceeding\" part is critical.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-8-escalating-severity","level":3,"title":"Pattern 8: Escalating Severity","text":"
Different urgency tiers with different relay expectations.
# INFO: agent processes silently, mentions if relevant\necho \"INFO: Last test run was 3 days ago.\"\n\n# WARN: agent should mention to user at next natural pause\necho \"WARN: 12 uncommitted changes across 3 branches.\"\n\n# CRITICAL: agent must relay immediately, before any other work\necho \"CRITICAL: Relay VERBATIM before answering. Disk usage at 95%.\"\n
When to use: When you have multiple hooks producing output and need to avoid overwhelming the user. INFO gets absorbed, WARN gets mentioned, CRITICAL interrupts.
Examples in ctx:
ctx system check-resources: Uses two tiers (WARNING/DANGER) internally but only fires the VERBATIM relay at DANGER level: WARNING is silent. See ctx system for the user-facing command that shows both tiers.
Trade-off: Requires agent training or convention to recognize the tiers. Without a shared protocol, the prefixes are just text.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#choosing-a-pattern","level":2,"title":"Choosing a Pattern","text":"
Is the agent about to do something forbidden?\n └─ Yes → Hard gate\n\nDoes the user need to see this regardless of what they asked?\n └─ Yes → VERBATIM relay\n └─ Sometimes → Conditional relay\n\nShould the agent consider an action?\n └─ Yes, with a specific fix → Suggested action\n └─ Yes, open-ended → Agent directive\n\nIs this background context the agent should have?\n └─ Yes → Silent injection\n\nIs this housekeeping?\n └─ Yes → Silent side-effect\n
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#design-tips","level":2,"title":"Design Tips","text":"
Throttle aggressively: VERBATIM relays that fire every prompt will be ignored or resented. Use once-per-day markers (touch $REMINDED), adaptive frequency (every Nth prompt), or staleness checks (only fire if condition persists).
Include actionable commands: \"You have 12 unimported sessions\" is less useful than \"You have 12 unimported sessions. Run: ctx journal import --all.\" Give the user (or agent) the exact next step.
Use box-drawing for visual structure: The ┌─ ─┐ │ └─ ─┘ pattern makes hook output visually distinct from agent prose. It also signals \"this is machine-generated, not agent opinion.\"
Test the silence path: Most hook runs should produce no output (the condition isn't met). Make sure the common case is fast and silent.
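The throttling tip above can be sketched as a once-per-day marker guard (the state directory and marker name are illustrative, not ctx's actual state layout):

```shell
#!/bin/sh
# Once-per-day throttle: fire the reminder at most once per calendar day.
STATE_DIR="${STATE_DIR:-$(mktemp -d)}"          # illustrative; ctx keeps its own state dir
MARKER="$STATE_DIR/reminded-$(date +%Y-%m-%d)"  # date-stamped marker file

mkdir -p "$STATE_DIR"
if [ ! -e "$MARKER" ]; then
  echo "┌─ Journal Reminder ─────────────────────────────"
  echo "│ You have unexported sessions."
  echo "│ Run: ctx journal import --all"
  echo "└────────────────────────────────────────────────"
  touch "$MARKER"                               # suppress further output until tomorrow
fi
```

Because the marker name embeds the date, yesterday's marker never suppresses today's reminder, and stale markers can be swept by a silent side-effect hook.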
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Lessons from 19 days of hook debugging in ctx. Every one of these was encountered, debugged, and fixed in production.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#silent-misfire-wrong-key-name","level":3,"title":"Silent Misfire: Wrong Key Name","text":"
{ \"PreToolUseHooks\": [ ... ] }\n
The key is PreToolUse, not PreToolUseHooks. Claude Code validates silently: A misspelled key means the hook is ignored with no error. Always test with a debug echo first to confirm the hook fires before adding real logic.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#json-escaping-breaks-shell-commands","level":3,"title":"JSON Escaping Breaks Shell Commands","text":"
Go's json.Marshal escapes >, <, and & as Unicode sequences (\\u003e) by default. This breaks shell commands in generated config:
\"command\": \"ctx agent 2\\u003e/dev/null\"\n
Fix: use json.Encoder with SetEscapeHTML(false) when generating hook configuration.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#stdin-not-environment-variables","level":3,"title":"stdin, Not Environment Variables","text":"
Hook input arrives as JSON via stdin, not environment variables:
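A minimal sketch of consuming that payload in a shell hook, assuming jq is available; the tool_name/tool_input field names follow Claude Code's hook payload, and the blocked pattern is just an example:

```shell
#!/bin/sh
# Hook sketch: the payload is JSON on stdin, not environment variables.
handle_hook_input() {
  payload=$(cat)                                             # read the whole stdin payload
  cmd=$(printf '%s' "$payload" | jq -r '.tool_input.command // empty')
  case "$cmd" in
    *./ctx*)
      # Pattern 1 (hard gate): block with a reason the agent can recover from
      printf '{"decision": "block", "reason": "Use ctx from PATH, not ./ctx"}\n'
      ;;
  esac
}

printf '%s' '{"tool_name":"Bash","tool_input":{"command":"./ctx status"}}' \
  | handle_hook_input
```

Fed the sample payload above, the function emits the block decision; for any other command it stays silent, which keeps the common case fast.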
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#regex-overfitting","level":3,"title":"Regex Overfitting","text":"
A regex meant to catch ctx as a binary will also match ctx as a directory component:
# Too broad: blocks: git -C /home/jose/WORKSPACE/ctx status\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# Narrow to binary only:\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
Test hook regexes against paths that contain the target string as a substring, not just as the final component.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#repetition-fatigue","level":3,"title":"Repetition Fatigue","text":"
Injecting context on every tool call sounds safe. In practice, after seeing the same context injection fifteen times, the agent treats it as background noise: Conventions stated in the injected context get violated because salience has been destroyed by repetition.
Fix: cooldowns. ctx agent --session $PPID --cooldown 10m injects at most once per ten minutes per session using a tombstone file in /tmp/. This is not an optimization; it is a correction for a design flaw. Every injection consumes attention budget: 50 tool calls at 4,000 tokens each means 200,000 tokens of repeated context, most of it wasted.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#hardcoded-paths","level":3,"title":"Hardcoded Paths","text":"
A username rename (from parallels to jose) broke every hook at once. Use $CLAUDE_PROJECT_DIR instead of absolute paths:
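A sketch of the fix (the script name is illustrative; $CLAUDE_PROJECT_DIR is supplied by Claude Code when it invokes the hook):

```shell
#!/bin/sh
# Brittle: SCRIPT="/home/jose/WORKSPACE/ctx/scripts/check-backup-age.sh"
# Portable: anchor on the platform-provided project directory instead,
# with a fallback for running the script outside Claude Code.
SCRIPT="${CLAUDE_PROJECT_DIR:-.}/scripts/check-backup-age.sh"
echo "$SCRIPT"
```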
If the platform provides a runtime variable for paths, always use it.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#next-up","level":2,"title":"Next Up","text":"
Webhook Notifications →: Get push notifications when loops complete, hooks fire, or agents hit milestones.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#see-also","level":2,"title":"See Also","text":"
Customizing Hook Messages: override what hooks say without changing what they do
Claude Code Permission Hygiene: how permissions and hooks work together
Defense in Depth: why hooks matter for agent security
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/","level":1,"title":"Hook Sequence Diagrams","text":"","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#hook-lifecycle","level":2,"title":"Hook Lifecycle","text":"
Every ctx hook is a Go binary invoked by Claude Code at one of three lifecycle events: PreToolUse (before a tool runs, can block), PostToolUse (after a tool completes), or UserPromptSubmit (on every user prompt, before any tools run). Hooks receive JSON on stdin and emit JSON or plain text on stdout.
This page documents the execution flow of every hook as a sequence diagram.
check-journal: daily check for unimported sessions and unenriched journal entries.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-journal\n participant State as .context/state/\n participant Journal as Journal dir\n participant Claude as Claude projects dir\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Check dir exists\n Hook->>Claude: Check dir exists\n alt either dir missing\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Get newest entry mtime\n Hook->>Claude: Count .jsonl files newer than journal\n Hook->>Journal: Count unenriched entries\n alt unimported == 0 and unenriched == 0\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, variant, {counts})\n Note over Hook: variant: both | unimported | unenriched\n Hook-->>CC: Nudge box (counts)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
check-memory-drift: per-session check for MEMORY.md changes since last sync.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-memory-drift\n participant State as .context/state/\n participant Mem as memory.Discover\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check session tombstone\n alt already nudged this session\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: DiscoverMemoryPath(projectRoot)\n alt auto memory not active\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: HasDrift(contextDir, sourcePath)\n alt no drift\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, fallback)\n Hook-->>CC: Nudge box (drift reminder)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch session tombstone
check-persistence: tracks context file modification and nudges when edits happen without persisting context. Adaptive threshold based on prompt count.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-persistence\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Read persistence state {Count, LastNudge, LastMtime}\n alt first prompt (no state)\n Hook->>State: Initialize state {Count:1, LastNudge:0, LastMtime:now}\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Increment Count\n Hook->>Ctx: Get current context mtime\n alt context modified since LastMtime\n Hook->>State: Reset LastNudge = Count, update LastMtime\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: sinceNudge = Count - LastNudge\n Hook->>Hook: PersistenceNudgeNeeded(Count, sinceNudge)?\n alt threshold not reached\n Hook->>State: Write state\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, vars)\n Hook-->>CC: Nudge box (prompt count, time since last persist)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Update LastNudge = Count, write state
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-reminders\n participant Store as Reminders store\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>Store: ReadReminders()\n alt load error\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Filter by due date (After <= today)\n alt no due reminders\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, reminders, {list})\n Hook-->>CC: Nudge box (reminder list + dismiss hints)\n Hook->>Hook: NudgeAndRelay(message)
Silent per-prompt pulse. Tracks prompt count, context modification, and token usage. The agent never sees this hook's output.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as heartbeat\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Notify as Webhook + EventLog\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Increment heartbeat counter\n Hook->>Ctx: Get latest context file mtime\n Hook->>State: Compare with last recorded mtime\n Hook->>State: Update mtime record\n Hook->>State: Read session token info\n Hook->>Notify: Send heartbeat notification\n Hook->>Notify: Append to event log\n Hook->>State: Write heartbeat log entry\n Note over Hook: No stdout - agent never sees this
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-backup-age\n participant State as .context/state/\n participant FS as Filesystem\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>FS: Check SMB mount (if env var set)\n Hook->>FS: Check backup marker file age\n alt no warnings\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, warning, {Warnings})\n Hook-->>CC: Nudge box (warnings)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#throttling-summary","level":2,"title":"Throttling Summary","text":"Hook Lifecycle Throttle Type Scope context-load-gate PreToolUse One-shot marker Per session block-non-path-ctx PreToolUse None Every match qa-reminder PreToolUse None Every git command specs-nudge PreToolUse None Every prompt post-commit PostToolUse None Every git commit check-task-completion PostToolUse Configurable interval Per session check-context-size UserPromptSubmit Adaptive counter Per session check-ceremonies UserPromptSubmit Daily marker Once per day check-freshness UserPromptSubmit Daily marker Once per day check-journal UserPromptSubmit Daily marker Once per day check-knowledge UserPromptSubmit Daily marker Once per day check-map-staleness UserPromptSubmit Daily marker Once per day check-memory-drift UserPromptSubmit Session tombstone Once per session check-persistence UserPromptSubmit Adaptive counter Per session check-reminders UserPromptSubmit None Every prompt check-resources UserPromptSubmit None Every prompt check-version UserPromptSubmit Daily marker Once per day heartbeat UserPromptSubmit None Every prompt block-dangerous-commands PreToolUse * None Every match check-backup-age UserPromptSubmit * Daily marker Once per day
* Project-local hook (settings.local.json), not shipped with ctx.
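The daily-marker throttle that most UserPromptSubmit hooks share reduces to a few lines (marker naming assumed; real state lives under .context/state/):

```shell
# Hypothetical once-per-day gate: the marker file name embeds today's date.
STATE_DIR="${STATE_DIR:-.context/state}"
MARKER="$STATE_DIR/check-journal-$(date +%F)"

mkdir -p "$STATE_DIR"
if [ -e "$MARKER" ]; then
  exit 0               # already fired today: silent exit
fi
touch "$MARKER"        # claim today's slot, then emit the nudge
echo "journal nudge"
```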
Claude Code plan files (~/.claude/plans/*.md) are ephemeral: They have structured context, approach, and file lists, but they're orphaned after the session ends. The filenames are UUIDs, so you can't tell what's in them without opening each one.
How do you turn a useful plan into a permanent project spec?
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tldr","level":2,"title":"TL;DR","text":"
You: /ctx-import-plans\nAgent: [lists plans with dates and titles]\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\nYou: \"import 1\"\nAgent: [copies to specs/add-authentication-middleware.md]\n
Plans are copied (not moved) to specs/, slugified by their H1 heading.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-import-plans Skill List, filter, and import plan files to specs /ctx-add-task Skill Optionally add a task referencing the spec","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-1-list-available-plans","level":3,"title":"Step 1: List Available Plans","text":"
Invoke the skill and it lists plans with modification dates and titles:
You: /ctx-import-plans\n\nAgent: Found 3 plan files:\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\n 3. 2026-02-25 Import plans skill\n Which plans would you like to import?\n
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-2-filter-optional","level":3,"title":"Step 2: Filter (Optional)","text":"
You can narrow the list with arguments:
| Argument | Effect |
|---|---|
| --today | Only plans modified today |
| --since YYYY-MM-DD | Only plans modified on or after the date |
| --all | Import everything without prompting |
| (none) | Interactive selection |
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-3-select-and-import","level":3,"title":"Step 3: Select and Import","text":"
Pick one or more plans by number:
You: \"import 1 and 3\"\n\nAgent: Imported 2 plan(s):\n ~/.claude/plans/abc123.md -> specs/add-authentication-middleware.md\n ~/.claude/plans/ghi789.md -> specs/import-plans-skill.md\n Want me to add tasks referencing these specs?\n
The agent reads the H1 heading from each plan and slugifies it for the filename. If a plan has no H1 heading, the original filename (minus extension) is used as the slug.
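The slugification can be approximated in shell (rules assumed: strip the leading #, lowercase, collapse non-alphanumerics to hyphens):

```shell
h1='# Add authentication middleware'   # first H1 line of the plan file
slug=$(printf '%s' "$h1" \
  | sed 's/^#* *//' \
  | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9][^a-z0-9]*/-/g; s/^-*//; s/-*$//')
echo "specs/$slug.md"
# specs/add-authentication-middleware.md
```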
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-4-add-follow-up-tasks-optional","level":3,"title":"Step 4: Add Follow-Up Tasks (Optional)","text":"
If you say yes, the agent creates tasks in TASKS.md that reference the imported specs:
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the exact skill name:
| You say | What happens |
|---|---|
| \"import my plans\" | /ctx-import-plans (interactive) |
| \"save today's plans as specs\" | /ctx-import-plans --today |
| \"import all plans from this week\" | /ctx-import-plans --since ... |
| \"turn that plan into a spec\" | /ctx-import-plans (filtered) |
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tips","level":2,"title":"Tips","text":"
Plans are copied, not moved: The originals stay in ~/.claude/plans/. Claude Code manages that directory; ctx doesn't delete from it.
Conflict handling: If specs/{slug}.md already exists, the agent asks whether to overwrite or pick a different name.
Specs are project memory: Once imported, specs are tracked in git and available to future sessions. Reference them from TASKS.md phase headers with Spec: specs/slug.md.
Pair with /ctx-implement: After importing a plan as a spec, use /ctx-implement to execute it step-by-step with verification.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-import-plans: full skill description
The Complete Session: where plan import fits in the session flow
Tracking Work Across Sessions: managing tasks that reference imported specs
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/knowledge-capture/","level":1,"title":"Persisting Decisions, Learnings, and Conventions","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-problem","level":2,"title":"The Problem","text":"
You debug a subtle issue, discover the root cause, and move on.
Three weeks later, a different session hits the same issue. The knowledge existed briefly in one session's memory but was never written down.
Architectural decisions suffer the same fate: you weigh trade-offs, pick an approach, and six sessions later the AI suggests the alternative you already rejected.
How do you make sure important context survives across sessions?
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tldr","level":2,"title":"TL;DR","text":"
/ctx-reflect # surface items worth persisting\n/ctx-add-decision \"Title\" # record with context/rationale/consequence\n/ctx-add-learning \"Title\" # record with context/lesson/application\n
Or just tell your agent: \"What have we learned this session?\"
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx add decision Command Record an architectural decision ctx add learning Command Record a gotcha, tip, or lesson ctx add convention Command Record a coding pattern or standard ctx reindex Command Rebuild both quick-reference indices ctx decision reindex Command Rebuild the DECISIONS.md index ctx learning reindex Command Rebuild the LEARNINGS.md index /ctx-add-decision Skill AI-guided decision capture with validation /ctx-add-learning Skill AI-guided learning capture with validation /ctx-add-convention Skill AI-guided convention recording with placement /ctx-reflect Skill Surface items worth persisting at breakpoints","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-workflow","level":2,"title":"The Workflow","text":"
Decision, Learning, or Convention?
If you chose between alternatives, it is a decision.
If you discovered something surprising, it is a learning.
If you are codifying a repeated pattern, it is a convention.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-1-understand-what-to-persist","level":3,"title":"Step 1: Understand What to Persist","text":"
Three context files serve different purposes:
Decisions (DECISIONS.md) answer \"why is it this way?\" They record trade-offs between alternatives with structured fields: context, rationale, and consequences. Decisions prevent re-debating settled questions.
Learnings (LEARNINGS.md) answer \"what did we discover the hard way?\" They record gotchas and debugging insights specific to this project with structured fields: context, lesson, and application.
Learnings prevent repeating past mistakes.
Conventions (CONVENTIONS.md) answer \"how do we do things here?\" They record patterns and standards. No structured fields required: just a name, a rule, and an example. Conventions keep code consistent across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-2-record-decisions","level":3,"title":"Step 2: Record Decisions","text":"
Decisions have three structured fields: context, rationale, and consequences.
The easiest way to record one is to tell your agent:
You: \"Save that as a decision: we picked file-based cooldown\n tokens over env vars.\"\n\nAgent: \"Recorded in DECISIONS.md:\n - Decision: Use file-based cooldown tokens instead of env vars\n - Context: Hook subprocesses cannot persist env vars to parent shell\n - Rationale: File tokens survive across processes. Simpler than\n IPC. Cleanup is automatic via TTL.\n - Consequence: Tombstone files accumulate in /tmp. Cannot share\n state across machines.\"\n
The agent was present for the trade-off discussion. It fills in the structured fields from what it observed, which is hard to reconstruct later as a terminal one-liner.
When multiple options were considered, the agent includes rejected alternatives in the rationale automatically.
The /ctx-add-decision skill also guides you through the fields interactively. For quick decisions, it supports a Y-statement:
\"In the context of [situation], facing [constraint], we decided for [choice] and against [alternatives], to achieve [benefit], accepting that [trade-off].\"
CLI Command for Scripting and Automation
When no agent is in the loop (CI pipelines, shell scripts, hooks), use the CLI directly with structured flags:
ctx add decision \"Use file-based cooldown tokens instead of env vars\" \\\n --context \"Hook subprocesses cannot persist env vars to parent shell\" \\\n --rationale \"File tokens survive across processes. Simpler than IPC. Cleanup is automatic via TTL.\" \\\n --consequence \"Tombstone files accumulate in /tmp. Cannot share state across machines.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-3-record-learnings","level":3,"title":"Step 3: Record Learnings","text":"
Learnings have three structured fields: context, lesson, and application.
Again, your agent is the best tool for this. It was there when you discovered the gotcha and can express it more completely than a backslash-escaped terminal command:
You: \"That subprocess thing was painful. Save it as a learning.\"\n\nAgent: \"Added to LEARNINGS.md:\n - Learning: Claude Code hooks run in a subprocess\n - Context: Set env var in PreToolUse hook, but it was not visible\n in the main session\n - Lesson: Hook scripts execute in a child process. Env changes do\n not propagate to parent.\n - Application: Use tombstone files for hook-to-session\n communication. Never rely on hook env vars.\"\n
You said one sentence. The agent produced a structured entry with all three fields filled from the session context.
That is the point: the agent has the full conversation and can be far more expressive than what you would type at a command prompt.
The /ctx-add-learning skill applies three quality filters:
Could someone Google this in 5 minutes?
Is it specific to this codebase?
Did it take real effort to discover?
All three must pass.
Learnings capture principles and heuristics, not code snippets.
CLI Command for Scripting and Automation
When no agent is in the loop:
ctx add learning \"Claude Code hooks run in a subprocess\" \\\n --context \"Set env var in PreToolUse hook, but it was not visible in the main session\" \\\n --lesson \"Hook scripts execute in a child process. Env changes do not propagate to parent.\" \\\n --application \"Use tombstone files for hook-to-session communication. Never rely on hook env vars.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-4-record-conventions","level":3,"title":"Step 4: Record Conventions","text":"
Conventions are simpler: just a name, a rule, and optionally a section.
These are short enough that either approach works:
You: \"We've been using kebab-case for every CLI flag. Codify that.\"\n\nAgent: \"Added to CONVENTIONS.md under Naming:\n 'Use kebab-case for all CLI flag names.'\"\n
Or from the terminal:
ctx add convention \"Use kebab-case for all CLI flag names\" --section \"Naming\"\n
Conventions work best for rules that come up repeatedly. Codify a pattern the third time you see it, not the first.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-5-reindex-after-manual-edits","level":3,"title":"Step 5: Reindex After Manual Edits","text":"
DECISIONS.md and LEARNINGS.md maintain a quick-reference index at the top: a compact table of date and title for each entry. The index updates automatically via ctx add, but falls out of sync after hand edits.
ctx reindex\n
This single command regenerates both indices. You can also reindex individually with ctx decision reindex or ctx learning reindex.
Run reindex after any manual edit. The index lets AI tools scan all entries without reading the full file, which matters when token budgets are tight.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-6-use-ctx-reflect-to-surface-what-to-capture","level":3,"title":"Step 6: Use /ctx-reflect to Surface What to Capture","text":"
Keep It Conversational
/ctx-reflect is not the only way to trigger reflection.
Agents trained on the ctx playbook naturally surface persist-worthy items at breakpoints, even without invoking the skill explicitly.
A conversational prompt like \"anything worth saving?\" or \"let's wrap up\" can trigger the same review.
The skill provides a structured checklist, but the behavior is available through natural conversation.
At natural breakpoints (after completing a feature, fixing a bug, or before ending a session) use /ctx-reflect to identify items worth persisting.
/ctx-reflect\n
The skill walks through learnings, decisions, tasks, and session notes, skipping categories with nothing to report. The output includes specific commands for each suggested persist:
This session implemented file-based cooldown for `ctx agent` and\ndiscovered that hook subprocesses cannot set env vars in the parent.\n\nI'd suggest persisting:\n- **Learning**: Hook subprocesses cannot propagate env vars\n `ctx add learning \"...\" --context \"...\" --lesson \"...\" --application \"...\"`\n- **Decision**: File-based cooldown tokens over env vars\n `ctx add decision \"...\" --context \"...\" --rationale \"...\" --consequence \"...\"`\n\nWant me to persist any of these?\n
The skill always asks before persisting.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-7-the-conversational-approach","level":3,"title":"Step 7: The Conversational Approach","text":"
The commands shown in Steps 2 to 4 are precise, but you rarely need to type them yourself. When working with an agent that has loaded the ctx playbook, natural language triggers the same operations.
You ask, the agent acts:
| You say | What the agent does |
|---|---|
| \"What have we learned this session?\" | Reviews recent work, identifies learnings, offers to persist |
| \"Save that as a decision\" | Captures the trade-off you just discussed with structured fields |
| \"Any gotchas worth remembering?\" | Scans recent work for non-obvious lessons |
| \"Record that convention\" | Adds the pattern you just agreed on to CONVENTIONS.md |
| \"Let's wrap up\" | Runs through the reflect checklist conversationally |
Example exchange:
You: \"What did we learn from that debugging session?\"\n\nAgent: \"We discovered three things worth persisting:\n 1. Hook subprocesses can't propagate env vars to the parent. We\n spent 20 minutes on this before switching to tombstone files.\n 2. Tombstone files need a TTL or they accumulate in /tmp.\n 3. The cooldown logic must check file mtime, not just existence.\n Want me to save these as learnings?\"\n\nYou: \"Yes, all of them.\"\n\nAgent: \"Done. Added all three to LEARNINGS.md with structured fields.\"\n
The agent can also be proactive without being asked. The playbook encourages agents to persist context at natural milestones.
You may see:
After fixing a tricky bug: \"I noticed this was a non-obvious issue. Want me to save it as a learning?\"
After choosing between approaches: \"We just made a trade-off between file-based tokens and IPC. Should I record this as a decision?\"
At session milestones: the agent checks what is worth persisting and offers a brief summary.
When agents run unattended (no human in the loop), they may persist directly:
\"I've added the subprocess env var gotcha to LEARNINGS.md so we don't hit it again.\"
\"Recorded the decision to use file-based tokens over env vars in DECISIONS.md.\"
\"Marked the cooldown task done and added a follow-up for TTL cleanup.\"
This is behavior by design.
The playbook's self-check prompt, \"If this session ended right now, would the next session know what happened?\" drives agents to persist early and often rather than waiting for explicit instructions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#command-line-approach-scripting-and-automation","level":3,"title":"Command-line Approach (Scripting and Automation)","text":"
# Decision: record the trade-off\nctx add decision \"Use PostgreSQL over SQLite\" \\\n --context \"Need concurrent multi-user access\" \\\n --rationale \"SQLite locks on writes; Postgres handles concurrency\" \\\n --consequence \"Requires a database server; team needs Postgres training\"\n\n# Learning: record the gotcha\nctx add learning \"SQL migrations must be idempotent\" \\\n --context \"Deploy failed when migration ran twice after rollback\" \\\n --lesson \"CREATE TABLE without IF NOT EXISTS fails on retry\" \\\n --application \"Always use IF NOT EXISTS guards in migrations\"\n\n# Convention: record the pattern\nctx add convention \"API handlers return structured errors\" --section \"API\"\n\n# Reindex after manual edits\nctx reindex\n\n# Reflect at breakpoints (from AI assistant)\n# /ctx-reflect\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#conversational-approach-interactive-sessions","level":3,"title":"Conversational Approach (Interactive Sessions)","text":"
You: \"We just picked Postgres over SQLite for the concurrency\n reasons. Save that?\"\n\nAgent: \"Recorded. Added to DECISIONS.md:\n - Decision: Use PostgreSQL over SQLite\n - Context: Need concurrent multi-user access\n - Rationale: SQLite locks on writes; Postgres handles concurrency\n - Consequence: Requires a database server; team needs Postgres\n training\"\n\nYou: \"Also that migration thing. The deploy blew up when the\n migration ran twice.\"\n\nAgent: \"Got it. Added to LEARNINGS.md:\n - Learning: SQL migrations must be idempotent\n - Lesson: CREATE TABLE without IF NOT EXISTS fails on retry\n - Application: Always use IF NOT EXISTS guards in migrations\"\n\nYou: \"Good. Anything else worth saving before we wrap up?\"\n\nAgent: \"We've been using structured error returns in every API\n handler this session. Want me to codify that as a convention?\"\n\nYou: \"Do it.\"\n\nAgent: \"Done. Added 'API handlers return structured errors' to\n CONVENTIONS.md under the API section.\"\n
Both approaches produce the same structured entries in the same context files.
The conversational approach is the natural fit for interactive sessions;
the CLI commands are better suited for scripts, hooks, and automation pipelines.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tips","level":2,"title":"Tips","text":"
Record decisions at the moment of choice. The alternatives you considered and the reasons you rejected them fade quickly. Capture trade-offs while they are fresh.
Learnings should fail the Gemini test. If someone could find it in a 5-minute Gemini search, it does not belong in LEARNINGS.md.
Conventions earn their place through repetition. Add a convention the third time you see a pattern, not the first.
Use /ctx-reflect at natural breakpoints. The checklist catches items you might otherwise lose.
Keep the entries self-contained. Each entry should make sense on its own. A future session may load only one due to token budget constraints.
Reindex after every hand edit. It takes less than a second. A stale index causes AI tools to miss entries.
Prefer the structured fields. The verbosity forces clarity. A decision without a rationale is just a fact. A learning without an application is just a story.
Talk to your agent, do not type commands. In interactive sessions, the conversational approach is the recommended way to capture knowledge. Say \"save that as a learning\" or \"any decisions worth recording?\" and let the agent handle the structured fields. Reserve the CLI commands for scripting, automation, and CI/CD pipelines where there is no agent in the loop.
Trust the agent's proactive instincts. Agents trained on the ctx playbook will offer to persist context at milestones. A brief \"want me to save this?\" is cheaper than re-discovering the same lesson three sessions later.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#next-up","level":2,"title":"Next Up","text":"
Tracking Work Across Sessions →: Add, prioritize, complete, and archive tasks across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: managing the tasks that decisions and learnings support
The Complete Session: full session lifecycle including reflection and context persistence
Detecting and Fixing Drift: keeping knowledge files accurate as the codebase evolves
CLI Reference: full documentation for ctx add, ctx decision, ctx learning
Context Files: format and conventions for DECISIONS.md, LEARNINGS.md, and CONVENTIONS.md
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/memory-bridge/","level":1,"title":"Bridging Claude Code Auto Memory","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#the-problem","level":2,"title":"The Problem","text":"
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This file is:
Outside the repo - not version-controlled, not portable
Machine-specific - tied to one ~/.claude/ directory
Invisible to ctx - context loading and hooks don't read it
Meanwhile, ctx maintains structured context files (DECISIONS.md, LEARNINGS.md, CONVENTIONS.md) that are git-tracked, portable, and token-budgeted - but Claude Code doesn't automatically write to them.
The two systems hold complementary knowledge with no bridge between them.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#tldr","level":2,"title":"TL;DR","text":"
ctx memory sync # Mirror MEMORY.md into .context/memory/mirror.md\nctx memory status # Check for drift\nctx memory diff # See what changed since last sync\n
The check-memory-drift hook nudges automatically when MEMORY.md changes - you don't need to remember to sync manually.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx memory sync CLI command Copy MEMORY.md to mirror, archive previous ctx memory status CLI command Show drift, timestamps, line counts ctx memory diff CLI command Show changes since last sync ctx memory import CLI command Classify and promote entries to .context/ files ctx memory publish CLI command Push curated .context/ content to MEMORY.md ctx memory unpublish CLI command Remove published block from MEMORY.md ctx system check-memory-drift Hook Nudge when MEMORY.md has changed (once/session)","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#how-it-works","level":2,"title":"How It Works","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#discovery","level":3,"title":"Discovery","text":"
Claude Code encodes project paths as directory names under ~/.claude/projects/. The encoding replaces / with - and prefixes with -:
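For example, with a hypothetical project path, the mapping can be reproduced with tr (replacing every / with - also turns the leading / into the leading -):

```shell
project="/Users/me/projects/myapp"   # hypothetical path
# Replace every '/' with '-'; the leading '/' becomes the leading '-'.
slug=$(printf '%s' "$project" | tr '/' '-')
echo "$slug"                                           # -Users-me-projects-myapp
echo "$HOME/.claude/projects/$slug/memory/MEMORY.md"   # where MEMORY.md lives
```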
ctx memory uses this encoding to locate MEMORY.md automatically from your project root - no configuration needed.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#mirroring","level":3,"title":"Mirroring","text":"
When you run ctx memory sync:
The previous mirror is archived to .context/memory/archive/mirror-<timestamp>.md
MEMORY.md is copied to .context/memory/mirror.md
Sync state is updated in .context/state/memory-import.json
The mirror is git-tracked, so it travels with the project. Archives provide a fallback for projects that don't use git.
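The three steps can be sketched in plain shell. This is a conceptual sketch run in a throwaway sandbox, not the actual implementation; the real state file records more than a timestamp:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"                     # throwaway sandbox
mkdir -p claude-memory .context/memory/archive .context/state
echo "prefer table-driven tests" > claude-memory/MEMORY.md

# 1. Archive the previous mirror, if one exists
if [ -f .context/memory/mirror.md ]; then
  cp .context/memory/mirror.md \
     ".context/memory/archive/mirror-$(date -u +%Y%m%dT%H%M%SZ).md"
fi
# 2. Copy MEMORY.md to the mirror
cp claude-memory/MEMORY.md .context/memory/mirror.md
# 3. Record sync state (placeholder; the real file holds more)
printf '{"last_sync":"%s"}\n' "$(date -u +%FT%TZ)" > .context/state/memory-import.json

cat .context/memory/mirror.md
```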
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#drift-detection","level":3,"title":"Drift Detection","text":"
The check-memory-drift hook compares MEMORY.md's modification time against the mirror. When drift is detected, the agent sees:
┌─ Memory Drift ────────────────────────────────────────────────\n│ MEMORY.md has changed since last sync.\n│ Run: ctx memory sync\n│ Context: .context\n└────────────────────────────────────────────────────────────────\n
The nudge fires once per session to avoid noise.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#typical-workflow","level":2,"title":"Typical Workflow","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#at-session-start","level":3,"title":"At Session Start","text":"
If the hook fires a drift nudge, sync before diving into work:
ctx memory diff # Review what changed\nctx memory sync # Mirror the changes\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#periodic-check","level":3,"title":"Periodic Check","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#dry-run","level":3,"title":"Dry Run","text":"
Preview what sync would do without writing:
ctx memory sync --dry-run\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#storage-layout","level":2,"title":"Storage Layout","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#edge-cases","level":2,"title":"Edge Cases","text":"Scenario Behavior Auto memory not active sync exits 1 with message. status reports \"not active\". Hook skips silently. First sync (no mirror) Creates mirror without archiving. MEMORY.md is empty Syncs to empty mirror (valid). Not initialized Init guard rejects (same as all ctx commands).","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#importing-entries","level":2,"title":"Importing Entries","text":"
Once you've synced, you can classify and promote entries into structured .context/ files:
Keywords Target always use, prefer, never use, standard CONVENTIONS.md decided, chose, trade-off, approach DECISIONS.md gotcha, learned, watch out, bug, caveat LEARNINGS.md todo, need to, follow up TASKS.md Everything else Skipped
Entries that don't match any pattern are skipped - they stay in the mirror for manual review. Deduplication (hash-based) prevents re-importing the same entry on subsequent runs.
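The keyword heuristic can be approximated with a toy shell function. This is illustrative only; the keywords come from the table above, but the real classifier may match differently:

```shell
# Toy classifier: route an entry to a .context/ file by keyword.
classify() {
  case "$1" in
    *"always use"*|*"prefer"*|*"never use"*|*"standard"*)    echo CONVENTIONS.md ;;
    *"decided"*|*"chose"*|*"trade-off"*|*"approach"*)        echo DECISIONS.md ;;
    *"gotcha"*|*"learned"*|*"watch out"*|*"bug"*|*"caveat"*) echo LEARNINGS.md ;;
    *"todo"*|*"need to"*|*"follow up"*)                      echo TASKS.md ;;
    *)                                                       echo skipped ;;
  esac
}
classify "gotcha: Redis MULTI/EXEC does not roll back on error"  # LEARNINGS.md
classify "random note with no keyword"                           # skipped
```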
Review Before Importing
Use --dry-run first. The heuristic classifier is deliberately simple - it may misclassify ambiguous entries. Review the plan, then import.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-workflow","level":3,"title":"Full Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Preview what would be imported\nctx memory import # 3. Promote entries to .context/ files\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#publishing-context-to-memorymd","level":2,"title":"Publishing Context to MEMORY.md","text":"
Push curated .context/ content back into MEMORY.md so Claude Code sees structured project context on session start - without needing hooks.
ctx memory publish --dry-run # Preview what would be published\nctx memory publish # Write to MEMORY.md\nctx memory publish --budget 40 # Tighter line budget\n
ctx memory publish replaces content only between its managed markers in MEMORY.md, leaving the rest of the file untouched.
To remove the published block entirely:
ctx memory unpublish\n
Publish at Wrap-Up, Not on Commit
The best time to publish is during session wrap-up, after persisting decisions and learnings. Never auto-publish - give yourself a chance to review what's going into MEMORY.md.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-bidirectional-workflow","level":3,"title":"Full Bidirectional Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Check what Claude wrote\nctx memory import # 3. Promote entries to .context/\nctx memory publish --dry-run # 4. Check what would be published\nctx memory publish # 5. Push context to MEMORY.md\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/multi-tool-setup/","level":1,"title":"Setup Across AI Tools","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-problem","level":2,"title":"The Problem","text":"
You have installed ctx and want to set it up with your AI coding assistant so that context persists across sessions. Different tools have different integration depths. For example:
Claude Code supports native hooks that load and save context automatically.
Cursor injects context via its system prompt.
Aider reads context files through its --read flag.
This recipe walks through the complete setup for each tool, from initialization through verification, so you end up with a working memory layer regardless of which AI tool you use.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tldr","level":2,"title":"TL;DR","text":"
Create a .ctxrc in your project root to configure token budgets, context directory, drift thresholds, and more.
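A minimal sketch using only keys shown elsewhere in these docs; the full field list, including budgets and drift thresholds, is in the CLI Reference:

```yaml
# .ctxrc - commit this so the whole team shares the same configuration
session_prefixes:
  - "Session:"        # keep the English default when overriding
companion_check: true # set false to skip the companion-tool check
```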
Then start your AI tool and ask: \"Do you remember?\"
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Create .context/ directory, templates, and permissions ctx setup Generate integration configuration for a specific AI tool ctx agent Print a token-budgeted context packet for AI consumption ctx load Output assembled context in read order (for manual pasting) ctx watch Auto-apply context updates from AI output (non-native tools) ctx completion Generate shell autocompletion for bash, zsh, or fish ctx journal import Import sessions to editable journal Markdown","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-1-initialize-ctx","level":3,"title":"Step 1: Initialize ctx","text":"
Run ctx init in your project root. This creates the .context/ directory with all template files and seeds ctx permissions in settings.local.json.
cd your-project\nctx init\n
This produces the following structure:
.context/\n CONSTITUTION.md # Hard rules the AI must never violate\n TASKS.md # Current and planned work\n CONVENTIONS.md # Code patterns and standards\n ARCHITECTURE.md # System overview\n DECISIONS.md # Architectural decisions with rationale\n LEARNINGS.md # Lessons learned, gotchas, tips\n GLOSSARY.md # Domain terms and abbreviations\n AGENT_PLAYBOOK.md # How AI tools should use this system\n
Using a Different .context Directory
The .context/ directory doesn't have to live inside your project. You can point ctx to an external folder via .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
This is useful for monorepos or shared context across repositories.
See Configuration for details and External Context for a full recipe.
For Claude Code, install the ctx plugin to get hooks and skills:
claude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
If you only need the core files (useful for lightweight setups), use the --minimal flag:
ctx init --minimal\n
This creates only TASKS.md, DECISIONS.md, and CONSTITUTION.md.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-2-generate-tool-specific-hooks","level":3,"title":"Step 2: Generate Tool-Specific Hooks","text":"
If you are using a tool other than Claude Code (which is configured automatically by ctx init), generate its integration configuration:
# For Cursor\nctx setup cursor\n\n# For Aider\nctx setup aider\n\n# For GitHub Copilot\nctx setup copilot\n\n# For Windsurf\nctx setup windsurf\n
Each command prints the configuration you need. How you apply it depends on the tool.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#claude-code","level":4,"title":"Claude Code","text":"
No action needed. Just install ctx from the Marketplace as ActiveMemory/ctx.
Claude Code is a First-Class Citizen
With the ctx plugin installed, Claude Code gets hooks and skills automatically. The PreToolUse hook runs ctx agent --budget 4000 on every tool call (with a 10-minute cooldown so it only fires once per window).
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#cursor","level":4,"title":"Cursor","text":"
Add the system prompt snippet to .cursor/settings.json:
{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and .context/CONVENTIONS.md before responding. Follow rules in .context/CONSTITUTION.md.\"\n}\n
Context files appear in Cursor's file tree. You can also paste a context packet directly into chat:
ctx agent --budget 4000 | pbcopy # macOS; use xclip or wl-copy on Linux\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#aider","level":4,"title":"Aider","text":"
Create .aider.conf.yml so context files are loaded on every session:
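A minimal sketch; the file selection here is an assumption, so trim it to the files your sessions actually need:

```yaml
# .aider.conf.yml - load ctx context files read-only on every session
read:
  - .context/CONSTITUTION.md
  - .context/TASKS.md
  - .context/CONVENTIONS.md
  - .context/DECISIONS.md
```

Pair this with ctx watch (Step 5) so context updates from Aider's output flow back into the files.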
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-3-set-up-shell-completion","level":3,"title":"Step 3: Set Up Shell Completion","text":"
Shell completion lets you tab-complete ctx subcommands and flags, which is especially useful while learning the CLI.
# Bash (add to ~/.bashrc)\nsource <(ctx completion bash)\n\n# Zsh (add to ~/.zshrc)\nsource <(ctx completion zsh)\n\n# Fish\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
After sourcing, typing ctx a<TAB> completes to ctx agent, and ctx journal <TAB> shows list, show, and export.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-4-verify-the-setup-works","level":3,"title":"Step 4: Verify the Setup Works","text":"
Start a fresh session in your AI tool and ask:
\"Do you remember?\"
A correctly configured tool responds with specific context: current tasks from TASKS.md, recent decisions, and previous session topics. It should not say \"I don't have memory\" or \"Let me search for files.\"
This question checks the passive side of memory. A properly set-up agent is also proactive: it treats context maintenance as part of its job:
After a debugging session, it offers to save a learning.
After a trade-off discussion, it asks whether to record the decision.
After completing a task, it suggests follow-up items.
The \"do you remember?\" check verifies both halves: recall and responsibility.
For example, after resolving a tricky bug, a proactive agent might say:
That Redis timeout issue was subtle. Want me to save this as a *learning*\nso we don't hit it again?\n
If you see behavior like this, the setup is working end to end.
In Claude Code, you can also invoke the /ctx-status skill:
/ctx-status\n
This prints a summary of all context files, token counts, and recent activity, confirming that hooks are loading context.
If context is not loading, check the basics:
Symptom Fix ctx: command not found Ensure ctx is in your PATH: which ctx Hook errors Verify plugin is installed: claude /plugin list Context not refreshing Cooldown may be active; wait 10 minutes or set --cooldown 0","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-5-enable-watch-mode-for-non-native-tools","level":3,"title":"Step 5: Enable Watch Mode for Non-Native Tools","text":"
Tools like Aider, Copilot, and Windsurf do not support native hooks for saving context automatically. For these, run ctx watch alongside your AI tool.
Pipe the AI tool's output through ctx watch:
# Terminal 1: Run Aider with output logged\naider 2>&1 | tee /tmp/aider.log\n\n# Terminal 2: Watch the log for context updates\nctx watch --log /tmp/aider.log\n
Or for any generic tool:
your-ai-tool 2>&1 | tee /tmp/ai.log &\nctx watch --log /tmp/ai.log\n
When the AI emits structured update commands, ctx watch parses and applies them automatically:
<context-update type=\"learning\"\n context=\"Debugging rate limiter\"\n lesson=\"Redis MULTI/EXEC does not roll back on error\"\n application=\"Wrap rate-limit checks in Lua scripts instead\"\n>Redis Transaction Behavior</context-update>\n
To preview changes without modifying files:
ctx watch --dry-run --log /tmp/ai.log\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-6-import-session-transcripts-optional","level":3,"title":"Step 6: Import Session Transcripts (Optional)","text":"
If you want to browse past session transcripts, import them to the journal:
ctx journal import --all\n
This converts raw session data into editable Markdown files in .context/journal/. You can then enrich them with metadata using /ctx-journal-enrich-all inside your AI assistant.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Here is the condensed setup for all three tools:
# ## Common (run once per project) ##\ncd your-project\nctx init\nsource <(ctx completion zsh) # or bash/fish\n\n# ## Claude Code (automatic, just verify) ##\n# Start Claude Code, then ask: \"Do you remember?\"\n\n# ## Cursor ##\nctx setup cursor\n# Add the system prompt to .cursor/settings.json\n# Paste context: ctx agent --budget 4000 | pbcopy\n\n# ## Aider ##\nctx setup aider\n# Create .aider.conf.yml with read: paths\n# Run watch mode alongside: ctx watch --log /tmp/aider.log\n\n# ## Verify any Tool ##\n# Ask your AI: \"Do you remember?\"\n# Expect: specific tasks, decisions, recent context\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tips","level":2,"title":"Tips","text":"
Start with ctx init (not --minimal) for your first project. The full template set gives the agent more to work with, and you can always delete files later.
For Claude Code, the token budget is configured in the plugin's hooks.json. To customize, adjust the --budget flag in the ctx agent hook command.
The --session $PPID flag isolates cooldowns per Claude Code process, so parallel sessions do not suppress each other.
Commit your .context/ directory to version control. Several ctx features (journals, changelogs, blog generation) rely on git history.
For Cursor and Copilot, keep CONVENTIONS.md visible. These tools treat open files as higher-priority context.
Run ctx drift periodically to catch stale references before they confuse the agent.
The agent playbook instructs the agent to persist context at natural milestones (completed tasks, decisions, gotchas). In practice, this works best when you reinforce the habit: a quick \"anything worth saving?\" after a debugging session goes a long way.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#companion-tools-highly-recommended","level":2,"title":"Companion Tools (Highly Recommended)","text":"
ctx skills can leverage external MCP servers for web search and code intelligence. ctx works without them, but they significantly improve agent behavior across sessions - the investment is small and the benefits compound. Skills like /ctx-code-review, /ctx-explain, and /ctx-refactor all become noticeably better with these tools connected.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gemini-search","level":3,"title":"Gemini Search","text":"
Provides grounded web search with citations. Used by skills and the agent playbook as the preferred search backend (faster and more accurate than built-in web search).
Setup: Add the Gemini Search MCP server to your Claude Code settings. See the Gemini Search MCP documentation for installation.
Verification:
# The agent checks this automatically during /ctx-remember\n# Manual test: ask the agent to search for something\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gitnexus","level":3,"title":"GitNexus","text":"
Provides a code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Used by skills like /ctx-refactor (impact analysis) and /ctx-code-review (dependency awareness).
Setup: Add the GitNexus MCP server to your Claude Code settings, then index your project:
npx gitnexus analyze\n
Verification:
# The agent checks this automatically during /ctx-remember\n# If the index is stale, it will suggest rehydrating\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#suppressing-the-check","level":3,"title":"Suppressing the Check","text":"
If you don't use companion tools and want to skip the availability check at session start, add to .ctxrc:
companion_check: false\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#future-direction","level":3,"title":"Future Direction","text":"
The companion tool integration is evolving toward a pluggable model: bring your own search engine, bring your own code intelligence. The current integration is MCP-based and limited to Gemini Search and GitNexus. If you use a different search or code intelligence tool, skills will degrade gracefully to built-in capabilities.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#next-up","level":2,"title":"Next Up","text":"
Keeping Context in a Separate Repo →: Store context files outside the project tree for multi-repo or open source setups.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle recipe
Multilingual Session Parsing: configure session header prefixes for other languages
CLI Reference: all commands and flags
Integrations: detailed per-tool integration docs
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multilingual-sessions/","level":1,"title":"Multilingual Session Parsing","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#the-problem","level":2,"title":"The Problem","text":"
Your team works across languages. Session files written by AI tools might use headers like # Oturum: 2026-01-15 - API Düzeltme (Turkish) or # セッション: 2026-01-15 - テスト (Japanese) instead of # Session: 2026-01-15 - Fix API.
By default, ctx only recognizes Session: as a session header prefix. Files with other prefixes are silently skipped during journal import and journal generation: They look like regular Markdown, not sessions.
session_prefixes:\n - \"Session:\" # English (include to keep default)\n - \"Oturum:\" # Turkish\n - \"セッション:\" # Japanese\n
Restart your session. All configured prefixes are now recognized.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#how-it-works","level":2,"title":"How It Works","text":"
The Markdown session parser detects session files by looking for an H1 header that starts with a known prefix followed by a date:
# Session: 2026-01-15 - Fix API Rate Limiting\n# Oturum: 2026-01-15 - API Düzeltme\n# セッション: 2026-01-15 - テスト\n
The list of recognized prefixes comes from session_prefixes in .ctxrc. When the key is absent or empty, ctx falls back to the built-in default: [\"Session:\"].
Date-only headers (# 2026-01-15 - Morning Work) are always recognized regardless of prefix configuration.
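Recognition can be sketched as a prefix-plus-date check. This is illustrative; the real parser may differ in detail:

```shell
# Return 0 if $1 is an H1 session header for configured prefix $2.
matches_session_header() {
  case "$1" in
    "# $2 "[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}
matches_session_header '# Oturum: 2026-01-15 - API Düzeltme' 'Oturum:' && echo recognized
matches_session_header '# Notes: 2026-01-15'                 'Oturum:' || echo skipped
```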
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#configuration","level":2,"title":"Configuration","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#adding-a-language","level":3,"title":"Adding a language","text":"
Add the prefix with a trailing colon to your .ctxrc:
When you override session_prefixes, the default is replaced, not extended. If you still want English headers recognized, include \"Session:\" in your list.
Commit .ctxrc to the repo so all team members share the same prefix list. This ensures ctx journal import and journal generation pick up sessions from all team members regardless of language.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#common-prefixes","level":3,"title":"Common prefixes","text":"Language Prefix English Session: Turkish Oturum: Spanish Sesión: French Session: German Sitzung: Japanese セッション: Korean 세션: Portuguese Sessão: Chinese 会话:","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#verifying","level":3,"title":"Verifying","text":"
After configuring, test with ctx journal source. Sessions with the new prefixes should appear in the output.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#what-this-does-not-do","level":2,"title":"What This Does NOT Do","text":"
Change the interface language: ctx output is always English. This setting only controls which session files ctx can parse.
Generate headers: ctx never writes session headers. The prefix list is recognition-only (input, not output).
Affect JSONL sessions: Claude Code JSONL transcripts don't use header prefixes. This only applies to Markdown session files in .context/sessions/.
See also: Setup Across AI Tools - complete multi-tool setup including Markdown session configuration.
See also: CLI Reference - full .ctxrc field reference including session_prefixes.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/parallel-worktrees/","level":1,"title":"Parallel Agent Development with Git Worktrees","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-problem","level":2,"title":"The Problem","text":"
You have a large backlog (10, 20, 30 open tasks) and many of them are independent: docs work that doesn't touch Go code, a new package that doesn't overlap with existing ones, test coverage for a stable module.
Running one agent at a time means serial execution. You want 3-4 agents working in parallel, each on its own track, without stepping on each other's files.
Git worktrees solve this.
Each worktree is a separate working directory with its own branch, but they share the same .git object database. Combined with ctx's persistent context, each agent session picks up the full project state and works independently.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tldr","level":2,"title":"TL;DR","text":"
/ctx-worktree # 1. group tasks by file overlap\ngit worktree add ../myproject-docs -b work/docs # 2. create worktrees\ncd ../myproject-docs && claude # 3. launch agents (one per track)\n/ctx-worktree teardown docs # 4. merge back and clean up\n
TASKS.md will conflict on merge: Accept all [x] completions from both sides.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-worktree Skill Create, list, and tear down worktrees /ctx-next Skill Pick tasks from the backlog for each track git worktree Command Underlying git worktree management git merge Command Merge completed tracks back to main","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-1-assess-the-backlog","level":3,"title":"Step 1: Assess the Backlog","text":"
Start in your main checkout. Ask the agent to analyze your tasks and group them by blast radius: which files and directories each task touches.
/ctx-worktree\nLook at TASKS.md and group the pending tasks into 2-3 independent\ntracks based on which files they'd touch. Show me the grouping\nbefore creating anything.\n
The agent reads TASKS.md, estimates file overlap, and proposes groups:
Proposed worktree groups:\n\n work/docs # recipe updates, blog post (touches: docs/)\n work/crypto # scratchpad encryption infra (touches: internal/crypto/)\n work/tests # journal test coverage (touches: internal/cli/journal/)\n
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-2-create-the-worktrees","level":3,"title":"Step 2: Create the Worktrees","text":"
Once you approve the grouping, the agent creates worktrees as sibling directories:
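Under the hood this is plain git worktree. A self-contained sketch in a throwaway repo, with illustrative project and branch names:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q myproject && cd myproject
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "init"
# One worktree (and branch) per approved track, as sibling directories:
git worktree add -q ../myproject-docs   -b work/docs
git worktree add -q ../myproject-crypto -b work/crypto
git worktree add -q ../myproject-tests  -b work/tests
git worktree list
```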
Each worktree is a full working copy on its own branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-3-launch-agents","level":3,"title":"Step 3: Launch Agents","text":"
Open a separate terminal (or editor window) for each worktree and start a Claude Code session:
cd ../myproject-docs && claude\n
Each agent sees the full project, including .context/, and can work independently.
Do Not Initialize Context in Worktrees
Do not run ctx init in worktrees: The .context directory is already tracked in git.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-4-work","level":3,"title":"Step 4: Work","text":"
Each agent works through its assigned tasks. They can read TASKS.md to know what's assigned to their track, use /ctx-next to pick the next item, and commit normally on their work/* branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-5-merge-back","level":3,"title":"Step 5: Merge Back","text":"
As each track finishes, return to the main checkout and merge:
/ctx-worktree teardown docs\n
The agent checks for uncommitted changes, merges work/docs into your current branch, removes the worktree, and deletes the branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-6-handle-tasksmd-conflicts","level":3,"title":"Step 6: Handle TASKS.md Conflicts","text":"
TASKS.md will almost always conflict when merging: Multiple agents will mark different tasks as [x]. This is expected and easy to resolve:
Accept all completions from both sides. No task should go from [x] back to [ ]. The merge resolution is always additive.
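For example, with hypothetical task lines, a conflict like this:

```text
<<<<<<< HEAD
- [x] T12: Update recipe docs
- [ ] T15: Scratchpad encryption
=======
- [ ] T12: Update recipe docs
- [x] T15: Scratchpad encryption
>>>>>>> work/crypto
```

resolves with both boxes checked:

```text
- [x] T12: Update recipe docs
- [x] T15: Scratchpad encryption
```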
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-7-cleanup","level":3,"title":"Step 7: Cleanup","text":"
After all tracks are merged, verify everything is clean:
/ctx-worktree list\n
Should show only the main working tree. All work/* branches should be gone.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't have to use the skill directly for every step. These natural prompts work:
\"I have a big backlog. Can we split it across worktrees?\"
\"Which of these tasks can run in parallel without conflicts?\"
\"Merge the docs track back in.\"
\"Clean up all the worktrees, we're done.\"
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#what-works-differently-in-worktrees","level":2,"title":"What Works Differently in Worktrees","text":"
The encryption key lives at ~/.ctx/.ctx.key (user-level, outside the project). Because all worktrees on the same machine share this path, ctx pad and ctx notify work in worktrees automatically - no special setup needed.
One thing to watch:
Journal enrichment: ctx journal import and ctx journal enrich write files relative to the current working directory. Enrichments created in a worktree stay there and are discarded on teardown. Enrich journals on the main branch after merging: the JSONL session logs are always intact, and you don't lose any data.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tips","level":2,"title":"Tips","text":"
3-4 worktrees max. Beyond that, merge complexity outweighs the parallelism benefit. The skill enforces this limit.
Group by package or directory, not by priority. Two high-priority tasks that touch the same files must be in the same track.
TASKS.md will conflict on merge. This is normal. Accept all [x] completions: The resolution is always additive.
Don't run ctx init in worktrees. The .context/ directory is tracked in git. Running init overwrites shared context files.
Name worktrees by concern, not by number. work/docs and work/crypto are more useful than work/track-1 and work/track-2.
Commit frequently in each worktree. Smaller commits make merge conflicts easier to resolve.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#next-up","level":2,"title":"Next Up","text":"
Back to the beginning: Guide Your Agent →
Or explore the full recipe list.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#see-also","level":2,"title":"See Also","text":"
Running an Unattended AI Agent: for serial autonomous loops instead of parallel tracks
Tracking Work Across Sessions: managing the task backlog that feeds into parallelization
The Complete Session: the full session workflow end-to-end, with examples
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/permission-snapshots/","level":1,"title":"Permission Snapshots","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#the-problem","level":2,"title":"The Problem","text":"
Claude Code's .claude/settings.local.json accumulates one-off permissions every time you click "Allow". After busy sessions the file is full of session-specific entries that widen the agent's permission surface far beyond what you intended to grant.
Since settings.local.json is .gitignored, there is no PR review or CI check. The file drifts independently on every machine, and there is no built-in way to reset to a known-good state.
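After a few busy sessions, the accumulated file often looks something like this (the entries are illustrative, not recommendations):

```json
{
  "permissions": {
    "allow": [
      "Bash(cargo test:*)",
      "Bash(curl -v https://staging.internal/*)",
      "Bash(rm -rf /tmp/debug-cache)",
      "WebFetch(domain:example.com)"
    ]
  }
}
```

Only the first entry is something you would deliberately keep; the rest are debris from one-off debugging.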
/ctx-sanitize-permissions # audit for dangerous patterns\nctx permission snapshot # save golden image\n# ... sessions accumulate cruft ...\nctx permission restore # reset to golden state\n
Save a curated settings.local.json as a golden image, then restore from it to drop session-accumulated permissions. The golden file (.claude/settings.golden.json) is committed to version control and shared with the team.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx permission snapshot Save settings.local.json as golden image ctx permission restore Reset settings.local.json from golden image /ctx-sanitize-permissions Audit for dangerous patterns before snapshotting","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#step-by-step","level":2,"title":"Step by Step","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#1-curate-your-permissions","level":3,"title":"1. Curate Your Permissions","text":"
Start with a clean settings.local.json. Optionally run /ctx-sanitize-permissions to remove dangerous patterns first.
Review the file manually. Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
See the Permission Hygiene recipe for recommended defaults.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#2-take-a-snapshot","level":3,"title":"2. Take a Snapshot","text":"
ctx permission snapshot\n# Saved golden image: .claude/settings.golden.json\n
This creates a byte-for-byte copy. No re-encoding, no indent changes.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#3-commit-the-golden-file","level":3,"title":"3. Commit the Golden File","text":"
git add .claude/settings.golden.json\ngit commit -m \"Add permission golden image\"\n
The golden file is not gitignored (unlike settings.local.json). This is intentional: it becomes a team-shared baseline.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#4-auto-restore-at-the-session-start","level":3,"title":"4. Auto-Restore at the Session Start","text":"
Add this instruction to your CLAUDE.md:
## On Session Start\n\nRun `ctx permission restore` to reset permissions to the golden image.\n
The agent will restore the golden image at the start of every session, automatically dropping any permissions accumulated during previous sessions.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#5-update-when-intentional-changes-are-made","level":3,"title":"5. Update When Intentional Changes Are Made","text":"
When you add a new permanent permission (not a one-off debugging entry):
# Edit settings.local.json with the new permission\n# Then update the golden image:\nctx permission snapshot\ngit add .claude/settings.golden.json\ngit commit -m \"Update permission golden image: add cargo test\"\n
You don't need to remember exact commands. These natural-language prompts work with agents trained on the ctx playbook:
What you say What happens \"Save my current permissions as baseline\" Agent runs ctx permission snapshot \"Reset permissions to the golden image\" Agent runs ctx permission restore \"Clean up my permissions\" Agent runs /ctx-sanitize-permissions then snapshot \"What permissions did I accumulate?\" Agent diffs local vs golden","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#next-up","level":2,"title":"Next Up","text":"
Turning Activity into Content →: Generate blog posts, changelogs, and journal sites from your project activity.
Permission Hygiene: recommended defaults and maintenance workflow
CLI Reference: ctx permission: full command documentation
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/publishing/","level":1,"title":"Turning Activity into Content","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-problem","level":2,"title":"The Problem","text":"
Your .context/ directory is full of decisions, learnings, and session history.
Your git log tells the story of a project evolving.
But none of this is visible to anyone outside your terminal.
You want to turn this raw activity into:
a browsable journal site,
blog posts,
changelog posts.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tldr","level":2,"title":"TL;DR","text":"
ctx journal import --all # 1. import sessions to markdown\n\n/ctx-journal-enrich-all # 2. add metadata and tags\n\nctx journal site --serve # 3. build and serve the journal\n\n/ctx-blog about the caching layer # 4. draft a blog post\n/ctx-blog-changelog v0.1.0 \"v0.2\" # 5. write a changelog post\n
Read on for details on each stage.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal import Command Import session JSONL to editable markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) ctx site feed Command Generate Atom feed from finalized blog posts make journal Makefile Shortcut for import + site rebuild /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich (recommended) /ctx-journal-enrich Skill Add metadata, summaries, and tags to one entry /ctx-blog Skill Draft a blog post from recent project activity /ctx-blog-changelog Skill Write a themed post from a commit range","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-1-import-sessions-to-markdown","level":3,"title":"Step 1: Import Sessions to Markdown","text":"
Raw session data lives as JSONL files in Claude Code's internal storage. The first step is converting these into readable, editable markdown.
# Import all sessions from the current project\nctx journal import --all\n\n# Import from all projects (if you work across multiple repos)\nctx journal import --all --all-projects\n\n# Import a single session by ID or slug\nctx journal import abc123\nctx journal import gleaming-wobbling-sutherland\n
Imported files land in .context/journal/ as individual Markdown files with session metadata and the full conversation transcript.
--all is safe by default: Only new sessions are imported. Existing files are skipped. Use --regenerate to re-import existing files (YAML frontmatter is preserved). Use --regenerate --keep-frontmatter=false -y to regenerate everything including frontmatter.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-2-enrich-entries-with-metadata","level":3,"title":"Step 2: Enrich Entries with Metadata","text":"
Raw entries have timestamps and conversations but lack the structured metadata that makes a journal searchable. Use /ctx-journal-enrich-all to process your entire backlog at once:
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
For large backlogs (20+ entries), it can spawn subagents to process entries in parallel.
This metadata powers better navigation in the journal site:
titles replace slugs,
summaries appear in the index,
and search covers topics and technologies.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-3-generate-the-journal-site","level":3,"title":"Step 3: Generate the Journal Site","text":"
With entries exported and enriched, generate the static site:
# Generate site files\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally (opens at http://localhost:8000)\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default. It uses zensical for static site generation (pipx install zensical).
Or use the Makefile shortcut that combines export and rebuild:
make journal\n
This runs ctx journal import --all followed by ctx journal site --build, then reminds you to enrich before rebuilding. To serve the built site, use make journal-serve or ctx serve (serve-only, no regeneration).
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#alternative-export-to-obsidian-vault","level":3,"title":"Alternative: Export to Obsidian Vault","text":"
If you use Obsidian for knowledge management, generate a vault instead of (or alongside) the static site:
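The vault is generated with the journal command from the table above (any output-directory flags are omitted here):

```
ctx journal obsidian
```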
This produces an Obsidian-ready directory with wikilinks, MOC (Map of Content) pages for topics/files/types, and a \"Related Sessions\" footer on each entry for graph connectivity. Open the output directory in Obsidian as a vault.
The vault uses the same enriched source entries as the static site. Both outputs can coexist: The static site goes to .context/journal-site/, the vault to .context/journal-obsidian/.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-4-draft-blog-posts-from-activity","level":3,"title":"Step 4: Draft Blog Posts from Activity","text":"
When your project reaches a milestone worth sharing, use /ctx-blog to draft a post from recent activity. The skill gathers context from multiple sources: git log, DECISIONS.md, LEARNINGS.md, completed tasks, and journal entries.
/ctx-blog about the caching layer we just built\n/ctx-blog last week's refactoring work\n/ctx-blog lessons learned from the migration\n
The skill gathers recent commits, decisions, and learnings; identifies a narrative arc; drafts an outline for approval; writes the full post; and saves it to docs/blog/YYYY-MM-DD-slug.md.
Posts are written in first person with code snippets, commit references, and an honest discussion of what went wrong.
The output is zensical-flavored Markdown
The blog skills produce Markdown tuned for a zensical site: topics: frontmatter (zensical's tag field), a docs/blog/ output path, and a banner image reference.
The content is still standard Markdown and can be adapted to other static site generators, but the defaults assume a zensical project structure.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-5-write-changelog-posts-from-commit-ranges","level":3,"title":"Step 5: Write Changelog Posts from Commit Ranges","text":"
For release notes or \"what changed\" posts, /ctx-blog-changelog takes a starting commit and a theme, then analyzes everything that changed:
/ctx-blog-changelog 040ce99 \"building the journal system\"\n/ctx-blog-changelog HEAD~30 \"what's new in v0.2.0\"\n/ctx-blog-changelog v0.1.0 \"the road to v0.2.0\"\n
The skill diffs the commit range, identifies the most-changed files, and constructs a narrative organized by theme rather than chronology, including a key commits table and before/after comparisons.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-6-generate-the-blog-feed","level":3,"title":"Step 6: Generate the Blog Feed","text":"
After publishing blog posts, generate the Atom feed so readers and automation can discover new content:
ctx site feed\n
This scans docs/blog/ for finalized posts (reviewed_and_finalized: true), extracts title, date, author, topics, and summary, and writes a valid Atom 1.0 feed to site/feed.xml. The feed is also generated automatically as part of make site.
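A finalized post's frontmatter might look like this (only reviewed_and_finalized is confirmed above; the other field spellings, and all values, are illustrative assumptions):

```yaml
---
title: "What's new in v0.2.0"
date: 2024-05-14
author: Jane Doe
topics: [release, changelog]
summary: "Highlights from the v0.2.0 release cycle."
reviewed_and_finalized: true
---
```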
The feed is available at ctx.ist/feed.xml.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-conversational-approach","level":2,"title":"The Conversational Approach","text":"
You can also drive your publishing anytime with natural language:
\"write about what we did this week\"\n\"turn today's session into a blog post\"\n\"make a changelog post covering everything since the last release\"\n\"enrich the last few journal entries\"\n
The agent has full visibility into your .context/ state (tasks completed, decisions recorded, learnings captured), so its suggestions are grounded in what actually happened.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
The full pipeline from raw transcripts to published content:
# 1. Import all sessions\nctx journal import --all\n\n# 2. In Claude Code: enrich all entries with metadata\n/ctx-journal-enrich-all\n\n# 3. Build and serve the journal site\nmake journal\nmake journal-serve\n\n# 3b. Or generate an Obsidian vault\nctx journal obsidian\n\n# 4. In Claude Code: draft a blog post\n/ctx-blog about the features we shipped this week\n\n# 5. In Claude Code: write a changelog post\n/ctx-blog-changelog v0.1.0 \"what's new in v0.2.0\"\n
The journal pipeline is idempotent at every stage. You can rerun ctx journal import --all without losing enrichment. You can rebuild the site as many times as you want.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tips","level":2,"title":"Tips","text":"
Import regularly. Run ctx journal import --all after each session to keep your journal current. Only new sessions are imported: Existing files are skipped by default.
Use batch enrichment. /ctx-journal-enrich-all filters noise (suggestion sessions, trivial sessions, multipart continuations) so you do not have to decide what is worth enriching.
Keep journal files in .gitignore. Session journals can contain sensitive data: file contents, commands, internal discussions, and error messages with stack traces. Add .context/journal/ and .context/journal-site/ to .gitignore.
Use /ctx-blog for narrative posts and /ctx-blog-changelog for release posts. One finds a story in recent activity, the other explains a commit range by theme.
Edit the drafts. These skills produce drafts, not final posts. Review the narrative, add your perspective, and remove anything that does not serve the reader.
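The .gitignore additions recommended in the tips above are two lines:

```
# .gitignore
.context/journal/
.context/journal-site/
```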
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#next-up","level":2,"title":"Next Up","text":"
Running an Unattended AI Agent →: Set up an AI agent that works through tasks overnight without you at the keyboard.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx serve: serve-only (no regeneration)
Browsing and Enriching Past Sessions: journal browsing workflow
The Complete Session: capturing context during a session
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/scratchpad-sync/","level":1,"title":"Syncing Scratchpad Notes Across Machines","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-problem","level":2,"title":"The Problem","text":"
You work from multiple machines: a desktop and a laptop, or a local machine and a remote dev server.
The scratchpad entries are encrypted. The ciphertext (.context/scratchpad.enc) travels with git, but the encryption key lives outside the project at ~/.ctx/.ctx.key and is never committed. Without the key on each machine, you cannot read or write entries.
How do you distribute the key and keep the scratchpad in sync?
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. generates key\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key # 2. copy key\nchmod 600 ~/.ctx/.ctx.key # 3. secure it\n# Normal git push/pull syncs the encrypted scratchpad.enc\n# On conflict: ctx pad resolve → rebuild → git add + commit\n
Finding Your Key File
The key is always at ~/.ctx/.ctx.key: one fixed path, the same on every machine.
Treat the Key Like a Password
The scratchpad key is the only thing protecting your encrypted entries.
Store a backup somewhere secure, such as a password manager, and treat it with the same care you would give passwords, certificates, or API tokens.
Anyone with the key can decrypt every scratchpad entry.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context (generates the key automatically) ctx pad add CLI command Add a scratchpad entry ctx pad rm CLI command Remove a scratchpad entry ctx pad edit CLI command Edit a scratchpad entry ctx pad resolve CLI command Show both sides of a merge conflict ctx pad merge CLI command Merge entries from other scratchpad files ctx pad import CLI command Bulk-import lines from a file ctx pad export CLI command Export blob entries to a directory scp Shell Copy the key file between machines git push / git pull Shell Sync the encrypted file via git/ctx-pad Skill Natural language interface to pad commands","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-1-initialize-on-machine-a","level":3,"title":"Step 1: Initialize on Machine A","text":"
Run ctx init on your first machine. The key is created automatically at ~/.ctx/.ctx.key:
ctx init\n# ...\n# Created ~/.ctx/.ctx.key (0600)\n# Created .context/scratchpad.enc\n
The key lives outside the project directory and is never committed. The .enc file is tracked in git.
Key Folder Change (v0.7.0+)
If you built ctx from source or upgraded past v0.6.0, the key location changed to ~/.ctx/.ctx.key. Check these legacy folders and copy your key manually:
# Old locations (pick whichever exists)\nls ~/.local/ctx/keys/ # pre-v0.7.0 user-level\nls .context/.ctx.key # pre-v0.6.0 project-local\n\n# Copy to the new location\nmkdir -p ~/.ctx && chmod 700 ~/.ctx\ncp <old-key-path> ~/.ctx/.ctx.key\nchmod 600 ~/.ctx/.ctx.key\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-2-copy-the-key-to-machine-b","level":3,"title":"Step 2: Copy the Key to Machine B","text":"
Use any secure transfer method. The key is always at ~/.ctx/.ctx.key:
# scp - create the target directory first\nssh user@machine-b \"mkdir -p ~/.ctx && chmod 700 ~/.ctx\"\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key\n\n# Or use a password manager, USB drive, etc.\n
Set permissions on Machine B:
chmod 600 ~/.ctx/.ctx.key\n
Secure the Transfer
The key is a raw 256-bit AES key. Anyone with the key can decrypt the scratchpad. Use an encrypted channel (SSH, password manager, vault).
Never paste it in plaintext over email or chat.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-3-normal-pushpull-workflow","level":3,"title":"Step 3: Normal Push/Pull Workflow","text":"
The encrypted file is committed, so standard git sync works:
# Machine A: add entries and push\nctx pad add \"staging API key: sk-test-abc123\"\ngit add .context/scratchpad.enc\ngit commit -m \"Update scratchpad\"\ngit push\n\n# Machine B: pull and read\ngit pull\nctx pad\n# 1. staging API key: sk-test-abc123\n
Both machines have the same key, so both can decrypt the same .enc file.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-4-read-and-write-from-either-machine","level":3,"title":"Step 4: Read and Write from Either Machine","text":"
Once the key is distributed, all ctx pad commands work identically on both machines. Entries added on Machine A are visible on Machine B after a git pull, and vice versa.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-5-handle-merge-conflicts","level":3,"title":"Step 5: Handle Merge Conflicts","text":"
If both machines add entries between syncs, pulling will create a merge conflict on .context/scratchpad.enc. Git cannot merge binary (encrypted) content automatically.
The fastest approach is ctx pad merge: It reads both conflict sides, deduplicates, and writes the union:
# Extract theirs to a temp file, then merge it in\ngit show :3:.context/scratchpad.enc > /tmp/theirs.enc\ngit checkout --ours .context/scratchpad.enc\nctx pad merge /tmp/theirs.enc\n\n# Done: Commit the resolved scratchpad:\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
Alternatively, use ctx pad resolve to inspect both sides manually:
ctx pad resolve\n# === Ours (this machine) ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n#\n# === Theirs (incoming) ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n
Then reconstruct the merged scratchpad:
# Start fresh with all entries from both sides\nctx pad add \"staging API key: sk-test-abc123\"\nctx pad add \"check DNS after deploy\"\nctx pad add \"new endpoint: api.example.com/v2\"\n\n# Mark the conflict resolved\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#merge-conflict-walkthrough","level":2,"title":"Merge Conflict Walkthrough","text":"
Here's a full scenario showing how conflicts arise and how to resolve them:
1. Both machines start in sync (1 entry):
Machine A: 1. staging API key: sk-test-abc123\nMachine B: 1. staging API key: sk-test-abc123\n
2. Both add entries independently:
Machine A adds: \"check DNS after deploy\"\nMachine B adds: \"new endpoint: api.example.com/v2\"\n
3. Machine A pushes first. Machine B pulls and gets a conflict:
git pull\n# CONFLICT (content): Merge conflict in .context/scratchpad.enc\n
4. Machine B runs ctx pad resolve:
ctx pad resolve\n# === Ours ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n#\n# === Theirs ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n
5. Rebuild with entries from both sides and commit:
# Clear and rebuild (or use the skill to guide you)\nctx pad add \"staging API key: sk-test-abc123\"\nctx pad add \"check DNS after deploy\"\nctx pad add \"new endpoint: api.example.com/v2\"\n\ngit add .context/scratchpad.enc\ngit commit -m \"Merge scratchpad: keep entries from both machines\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#conversational-approach","level":3,"title":"Conversational Approach","text":"
When working with an AI assistant, you can resolve conflicts naturally:
You: \"I have a scratchpad merge conflict. Can you resolve it?\"\n\nAgent: \"Let me extract theirs and merge it in.\"\n [runs git show :3:.context/scratchpad.enc > /tmp/theirs.enc]\n [runs git checkout --ours .context/scratchpad.enc]\n [runs ctx pad merge /tmp/theirs.enc]\n \"Merged 2 new entries (1 duplicate skipped). Want me to\n commit the resolution?\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tips","level":2,"title":"Tips","text":"
Back up the key: If you lose it, you lose access to all encrypted entries. Store a copy in your password manager.
One key per project: Each ctx init generates a unique key. Don't reuse keys across projects.
Keys work in worktrees: Because the key lives at ~/.ctx/.ctx.key (outside the project), git worktrees on the same machine share the key automatically. No special setup needed.
Plaintext fallback for non-sensitive projects: If encryption adds friction and you have nothing sensitive, set scratchpad_encrypt: false in .ctxrc. Merge conflicts become trivial text merges.
Never commit the key: The key is stored outside the project at ~/.ctx/.ctx.key and should never be copied into the repository.
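For the plaintext fallback, the .ctxrc setting is a single line (assuming the YAML-style key named above):

```yaml
# .ctxrc
scratchpad_encrypt: false
```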
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#next-up","level":2,"title":"Next Up","text":"
Hook Output Patterns →: Choose the right output pattern for your Claude Code hooks.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, when to use scratchpad vs context files
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-with-claude/","level":1,"title":"Using the Scratchpad","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-problem","level":2,"title":"The Problem","text":"
During a session you accumulate quick notes, reminders, intermediate values, and sometimes sensitive tokens. They don't fit TASKS.md (not work items) or DECISIONS.md (not decisions). They don't have the structured fields that LEARNINGS.md requires.
Without somewhere to put them, they get lost between sessions.
How do you capture working memory that persists across sessions without polluting your structured context files?
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tldr","level":2,"title":"TL;DR","text":"
ctx pad add \"check DNS propagation after deploy\"\nctx pad # list entries\nctx pad show 1 # print entry (pipe-friendly)\n
Entries are encrypted at rest and travel with git.
Use the /ctx-pad skill to manage entries from inside your AI session.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx pad CLI command List all scratchpad entries ctx pad show N CLI command Output raw text of entry N (pipe-friendly) ctx pad add CLI command Add a new entry ctx pad edit CLI command Replace, append to, or prepend to an entry ctx pad add --file CLI command Ingest a file as a blob entry ctx pad show N --out CLI command Extract a blob entry to a file ctx pad rm CLI command Remove an entry ctx pad mv CLI command Reorder entries ctx pad import CLI command Bulk-import lines from a file (or stdin) ctx pad export CLI command Export all blob entries to a directory /ctx-pad Skill Natural language interface to all pad commands","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-1-add-a-note-naturally","level":3,"title":"Step 1: Add a Note Naturally","text":"
You don't need to remember any syntax. Just tell your assistant what to jot down:
You: \"jot down: check DNS propagation after deploy\"\n\nAgent: \"Added to scratchpad:\n 1. check DNS propagation after deploy\"\n
Behind the scenes, the agent runs ctx pad add \"check DNS propagation after deploy\".
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-2-view-your-scratchpad","level":3,"title":"Step 2: View Your Scratchpad","text":"
You: \"show my scratchpad\"\n\nAgent: \"Your scratchpad has 3 entries:\n 1. check DNS propagation after deploy\n 2. staging API endpoint: api.staging.example.com/v2\n 3. retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-3-edit-an-entry","level":3,"title":"Step 3: Edit an Entry","text":"
You: \"update entry 2 to include the new endpoint\"\n\nAgent: \"Updated entry 2:\n was: staging API endpoint: api.staging.example.com/v2\n now: staging API endpoint: api.prod.example.com/v2\"\n
For partial edits, the agent can use --append or --prepend:
You: \"append the port number to entry 2\"\n\nAgent: \"Updated entry 2:\n staging API endpoint: api.prod.example.com/v2 :8443\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-4-compose-entries","level":3,"title":"Step 4: Compose Entries","text":"
Use ctx pad show N to pipe entry content into other operations:
You: \"append the content of entry 3 to entry 1\"\n\nAgent: [runs ctx pad edit 1 --append \"$(ctx pad show 3)\"]\n \"Done. Entry 1 is now:\n check DNS propagation after deploy retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-5-reorder-by-priority","level":3,"title":"Step 5: Reorder by Priority","text":"
You: \"move entry 4 to the top\"\n\nAgent: [runs ctx pad mv 4 1]\n \"Moved entry 4 to position 1. Scratchpad reordered.\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-6-clean-up","level":3,"title":"Step 6: Clean Up","text":"
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-7-store-a-file-as-a-blob","level":3,"title":"Step 7: Store a File as a Blob","text":"
The scratchpad can hold small files (up to 64 KB) as encrypted blob entries. The file is base64-encoded and stored alongside a label you provide:
# Ingest a file: the first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# List shows the label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-8-extract-a-blob","level":3,"title":"Step 8: Extract a Blob","text":"
Use show --out to write the decoded file back to disk:
# Write blob entry to a file\nctx pad show 2 --out ./recovered-deploy.yaml\n\n# Or print to stdout (for piping)\nctx pad show 2 | head -5\n
Blob entries are encrypted exactly like text entries; the only difference is that the content is base64-encoded before encryption. The --out flag decodes the base64 and writes the raw bytes.
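Under the hood a blob is a plain base64 round-trip wrapped in the usual encryption. A minimal sketch with standard tools (the file names are illustrative; this is not ctx's actual storage layout):

```shell
# Encode a file to base64 (what a blob entry holds, before encryption)
printf 'replicas: 3\n' > deploy.yaml
encoded=$(base64 < deploy.yaml)

# Decode back to raw bytes (what show --out does after decryption)
printf '%s\n' "$encoded" | base64 -d > recovered.yaml

# The round-trip is lossless
cmp -s deploy.yaml recovered.yaml && echo "identical"
```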
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-9-bulk-import-notes","level":3,"title":"Step 9: Bulk Import Notes","text":"
When you have a file with many notes (one per line), import them in bulk instead of adding one at a time:
# Import from a file: Each non-empty line becomes an entry\nctx pad import notes.txt\n\n# Or pipe from stdin\ngrep TODO *.go | ctx pad import -\n
All entries are written in a single encrypt/write cycle, regardless of how many lines the file contains.
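The one-entry-per-line contract is easy to preview before importing. A sketch with standard tools (the blank-line skipping shown illustrates the described behavior, not ctx's exact parser):

```shell
# A notes file with a blank line between two notes
cat > notes.txt <<'EOF'
check DNS propagation after deploy

retry limit should be 5, not 3
EOF

# Only non-empty lines would become entries
grep -c . notes.txt   # prints 2
```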
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-10-export-blobs-to-disk","level":3,"title":"Step 10: Export Blobs to Disk","text":"
Export all blob entries to a directory as individual files. Each blob's label becomes the filename:
# Export to a directory (created if needed)\nctx pad export ./ideas\n\n# Preview what would be exported\nctx pad export --dry-run ./ideas\n\n# Force overwrite existing files\nctx pad export --force ./backup\n
When a file already exists, a unix timestamp is prepended to the filename to avoid collisions. Use --force to overwrite instead.
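The collision handling can be sketched in plain shell (the naming mirrors the described behavior; ctx's exact filename format may differ):

```shell
mkdir -p ideas
touch ideas/deploy-config.yaml        # a file that already exists

name="deploy-config.yaml"
if [ -e "ideas/$name" ]; then
    # Prepend a unix timestamp to avoid the collision
    name="$(date +%s)-$name"
fi
touch "ideas/$name"
ls ideas                              # the original plus the timestamped copy
```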
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#using-ctx-pad-in-a-session","level":2,"title":"Using /ctx-pad in a Session","text":"
Invoke the /ctx-pad skill first, then describe what you want in natural language. Without the skill prefix, the agent may route your request to TASKS.md or another context file instead of the scratchpad.
You: /ctx-pad jot down: check DNS after deploy\nYou: /ctx-pad show my scratchpad\nYou: /ctx-pad delete entry 3\n
Once the skill is active, it translates intent into commands:
You say (after /ctx-pad) What the agent does \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"remember this: retry limit is 5\" ctx pad add \"retry limit is 5\" \"show my scratchpad\" / \"what's on my pad\" ctx pad \"show me entry 3\" ctx pad show 3 \"delete the third one\" / \"remove entry 3\" ctx pad rm 3 \"change entry 2 to ...\" ctx pad edit 2 \"new text\" \"append ' +important' to entry 3\" ctx pad edit 3 --append \" +important\" \"prepend 'URGENT:' to entry 1\" ctx pad edit 1 --prepend \"URGENT: \" \"prioritize entry 4\" / \"move to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./ideas\" ctx pad export ./ideas
When in Doubt, Use the CLI Directly
The ctx pad commands work the same whether you run them yourself or let the skill invoke them.
If the agent misroutes a request, fall back to ctx pad add \"...\" in your terminal.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#when-to-use-scratchpad-vs-context-files","level":2,"title":"When to Use Scratchpad vs Context Files","text":"Situation Use Temporary reminders (\"check X after deploy\") Scratchpad Session-start reminders (\"remind me next session\") ctx remind Working values during debugging (ports, endpoints, counts) Scratchpad Sensitive tokens or API keys (short-term storage) Scratchpad Quick notes that don't fit anywhere else Scratchpad Work items with completion tracking TASKS.md Trade-offs between alternatives with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Decision Guide
If it has structured fields (context, rationale, lesson, application), it belongs in a context file like DECISIONS.md or LEARNINGS.md.
If it's a work item you'll mark done, it belongs in TASKS.md.
If you want a message relayed VERBATIM at the next session start, it belongs in ctx remind.
If it's a quick note, reminder, or working value (especially if it's sensitive or ephemeral), it belongs on the scratchpad.
Scratchpad Is Not a Junk Drawer
The scratchpad is for working memory, not long-term storage.
If a note is still relevant after several sessions, promote it:
A persistent reminder becomes a task, a recurring value becomes a convention, a hard-won insight becomes a learning.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tips","level":2,"title":"Tips","text":"
Entries persist across sessions: The scratchpad is committed (encrypted) to git, so entries survive session boundaries. Pick up where you left off.
Entries are numbered and reorderable: Use ctx pad mv to put high-priority items at the top.
ctx pad show N enables unix piping: Output raw entry text with no numbering prefix. Compose with --append, --prepend, or other shell tools.
Never mention the key file contents to the AI: The agent knows how to use ctx pad commands but should never read or print the encryption key (~/.ctx/.ctx.key) directly.
Encryption is transparent: You interact with plaintext; the encryption/decryption happens automatically on every read/write.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#next-up","level":2,"title":"Next Up","text":"
Syncing Scratchpad Notes Across Machines →: Distribute encryption keys and scratchpad data across environments.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, encryption details, plaintext override
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
The Complete Session: full session lifecycle showing how the scratchpad fits into the broader workflow
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/session-archaeology/","level":1,"title":"Browsing and Enriching Past Sessions","text":"","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-problem","level":2,"title":"The Problem","text":"
After weeks of AI-assisted development you have dozens of sessions scattered across JSONL files in ~/.claude/projects/. Finding the session where you debugged the Redis connection pool, or remembering what you decided about the caching strategy three Tuesdays ago, often means grepping raw JSON.
There is no table of contents, no search, and no summaries.
This recipe shows how to turn that raw session history into a browsable, searchable, and enriched journal site you can navigate in your browser.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tldr","level":2,"title":"TL;DR","text":"
Export and Generate
ctx journal import --all\nctx journal site --serve\n
Enrich
/ctx-journal-enrich-all\n
Rebuild
ctx journal site --serve\n
Read on for what each stage does and why.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal source Command List parsed sessions with metadata ctx journal source --show Command Inspect a specific session in detail ctx journal import Command Import sessions to editable journal Markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) /ctx-history Skill Browse sessions inside your AI assistant /ctx-journal-enrich Skill Add frontmatter metadata to a single entry /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-workflow","level":2,"title":"The Workflow","text":"
The session journal follows a four-stage pipeline.
Each stage is idempotent and safe to re-run:
By default, each stage skips entries that have already been processed.
import -> enrich -> rebuild\n
Stage Tool What it does Skips if Where Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) CLI or agent Enrich /ctx-journal-enrich-all Adds frontmatter, summaries, topic tags Frontmatter already present Agent only Rebuild ctx journal site --build Generates browsable static HTML N/A CLI only Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks N/A CLI only
Where Do You Run Each Stage?
Import (Steps 1 to 3) works equally well from the terminal or inside your AI assistant via /ctx-history. The CLI is fine here: the agent adds no special intelligence; it just runs the same command.
Enrich (Step 4) requires the agent: it reads conversation content and produces structured metadata.
Rebuild and serve (Step 5) is a terminal operation that starts a long-running server.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-1-list-your-sessions","level":3,"title":"Step 1: List Your Sessions","text":"
Start by seeing what sessions exist for the current project:
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-2-inspect-a-specific-session","level":3,"title":"Step 2: Inspect a Specific Session","text":"
Before exporting everything, inspect a single session to see its metadata and conversation summary:
ctx journal source --show --latest\n
Or look up a specific session by its slug, partial ID, or UUID.
Add --full to see the complete message content instead of the summary view:
ctx journal source --show --latest --full\n
This is useful for checking what happened before deciding whether to export and enrich it.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-3-import-sessions-to-the-journal","level":3,"title":"Step 3: Import Sessions to the Journal","text":"
Import converts raw session data into editable Markdown files in .context/journal/:
# Import all sessions from the current project\nctx journal import --all\n\n# Import a single session\nctx journal import gleaming-wobbling-sutherland\n\n# Include sessions from all projects\nctx journal import --all --all-projects\n
--keep-frontmatter=false Discards Enrichments
--keep-frontmatter=false discards enriched YAML frontmatter during regeneration.
Back up your journal before using this flag.
Each imported file contains session metadata (date, time, duration, model, project, git branch), a tool usage summary, and the full conversation transcript.
Re-importing is safe. Running ctx journal import --all only imports new sessions: Existing files are never touched. Use --dry-run to preview what would be imported without writing anything.
To re-import existing files (e.g., after a format improvement), use --regenerate: Conversation content is regenerated while preserving any YAML frontmatter you or the enrichment skill has added. You'll be prompted before any files are overwritten.
--regenerate Replaces the Markdown Body
--regenerate preserves YAML frontmatter but replaces the entire Markdown body with freshly generated content from the source JSONL.
If you manually edited the conversation transcript (added notes, redacted sensitive content, restructured sections), those edits will be lost.
BACK UP YOUR JOURNAL FIRST.
To protect entries you've hand-edited, you can explicitly lock them:
ctx journal lock <pattern>\n
Locked entries are always skipped, regardless of flags.
If you prefer to add locked: true directly in frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json:
ctx journal sync\n
See ctx journal lock --help and ctx journal sync --help for details.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-4-enrich-with-metadata","level":3,"title":"Step 4: Enrich with Metadata","text":"
Raw imports have timestamps and transcripts but lack the semantic metadata that makes sessions searchable: topics, technology tags, outcome status, and summaries. The /ctx-journal-enrich* skills add this structured frontmatter.
Locked entries are skipped by enrichment skills, just as they are by import. Lock entries you want to protect before running batch enrichment.
Batch enrichment (recommended):
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
It shows you a grouped summary before applying changes so you can scan quickly rather than reviewing one by one.
For large backlogs (20+ entries), the skill can spawn subagents to process entries in parallel.
The skill also generates a summary and can extract decisions, learnings, and tasks mentioned during the session.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-5-generate-and-serve-the-site","level":3,"title":"Step 5: Generate and Serve the Site","text":"
With imported and enriched journal files, generate the static site:
# Generate site structure only\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate, build, and serve locally\nctx journal site --serve\n
Then open http://localhost:8000 to browse.
The site includes a date-sorted index, individual session pages with full conversations, search (press /), dark mode, and enriched titles in the navigation when frontmatter exists.
To serve an already-generated site without regenerating it, use ctx serve.
The site generator requires zensical (pipx install zensical).
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#where-does-agent-add-value","level":2,"title":"Where Does Agent Add Value?","text":"
Export, list, and show are mechanical: The agent runs the same CLI commands you would, so you can stay in your terminal for those.
The agent earns its keep in enrich. It reads conversation content, understands what happened, and produces structured metadata. That is agent work, not CLI work.
You can also ask your agent to browse sessions conversationally instead of remembering flags:
What did we work on last week?\nShow me the session about Redis.\nImport everything to the journal.\n
This is convenient but not required: ctx journal source gives you the same inventory.
Where the agent genuinely helps is chaining the pipeline:
You: What happened last Tuesday?\nAgent: Last Tuesday you worked on two sessions:\n - bright-dancing-hopper (2h 10m): refactored the middleware\n pipeline and added Redis caching\n - quiet-flowing-dijkstra (18m): quick fix for a nil pointer\n in the config loader\n Want me to export and enrich them?\nYou: Yes, do it.\nAgent: Exports both, enriches, then proposes frontmatter.\n
The value is staying in one context while the agent runs import -> enrich without you manually switching tools.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical pipeline from raw sessions to a browsable site:
# Terminal: import and generate\nctx journal import --all\nctx journal site --serve\n
# AI assistant: enrich\n/ctx-journal-enrich-all\n
# Terminal: rebuild with enrichments\nctx journal site --serve\n
If your project includes Makefile.ctx (deployed by ctx init), use make journal to run the import and rebuild stages together. Enrich inside Claude Code, then run make journal again to pick up the enrichments.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#session-retention-and-cleanup","level":2,"title":"Session Retention and Cleanup","text":"
Claude Code does not keep JSONL transcripts forever. Understanding its cleanup behavior helps you avoid losing session history.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#default-behavior","level":3,"title":"Default Behavior","text":"
Claude Code retains session transcripts for approximately 30 days. After that, JSONL files are automatically deleted during cleanup. Once deleted, ctx journal can no longer see those sessions - the data is gone.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-cleanupperioddays-setting","level":3,"title":"The cleanupPeriodDays Setting","text":"
Claude Code exposes a cleanupPeriodDays setting in its configuration (~/.claude/settings.json) that controls retention:
Value Behavior 30 (default) Transcripts older than 30 days are deleted 60, 90, etc. Extends the retention window 0 Disables writing new transcripts entirely - not \"keep forever\"
Setting cleanupPeriodDays to 0
Setting this to 0 does not mean \"never delete.\" It disables transcript creation altogether. No new JSONL files are written, which means ctx journal sees nothing new. This is rarely what you want.
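A hedged example of extending retention instead: in ~/.claude/settings.json, only the cleanupPeriodDays key is relevant here, so merge it into your existing settings rather than replacing the file:

```json
{
  "cleanupPeriodDays": 90
}
```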
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#why-journal-import-matters","level":3,"title":"Why Journal Import Matters","text":"
The journal import pipeline (Steps 1-4 above) is your archival mechanism. Imported Markdown files in .context/journal/ persist independently of Claude Code's cleanup cycle. Even after the source JSONL files are deleted, your journal entries remain.
Recommendation: import regularly - weekly, or after any session worth revisiting. A quick ctx journal import --all takes seconds and ensures nothing falls through the 30-day window.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#quick-archival-checklist","level":3,"title":"Quick Archival Checklist","text":"
Run ctx journal import --all at least weekly
Enrich high-value sessions with /ctx-journal-enrich before the details fade from your own memory
Lock enriched entries (ctx journal lock <pattern>) to protect them from accidental regeneration
Rebuild the journal site periodically to keep it current
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tips","level":2,"title":"Tips","text":"
Start with /ctx-history inside your AI assistant. If you want to quickly check what happened in a recent session without leaving your editor, /ctx-history lets you browse interactively without importing.
Large sessions may be split automatically. Sessions with 200+ messages can be split into multiple parts (session-abc123.md, session-abc123-p2.md, session-abc123-p3.md) with navigation links between them. The site generator can handle this.
Suggestion sessions can be separated. Claude Code can generate short suggestion sessions for autocomplete. These may appear under a separate section in the site index, so they do not clutter your main session list.
Your agent is a good session browser. You do not need to remember slugs, dates, or flags. Ask \"what did we do yesterday?\" or \"find the session about Redis\" and it can map the question to recall commands.
Journal Files Are Sensitive
Journal files MUST be .gitignored.
Session transcripts can contain sensitive data such as file contents, commands, error messages with stack traces, and potentially API keys.
Add .context/journal/, .context/journal-site/, and .context/journal-obsidian/ to your .gitignore.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#next-up","level":2,"title":"Next Up","text":"
Persisting Decisions, Learnings, and Conventions →: Record decisions, learnings, and conventions so they survive across sessions.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#see-also","level":2,"title":"See Also","text":"
The Complete Session: where session saving fits in the daily workflow
Turning Activity into Content: generating blog posts from session history
Session Journal: full documentation of the journal system
CLI Reference: ctx journal: all journal subcommands and flags
CLI Reference: ctx serve: serve-only (no regeneration)
Context Files: the .context/ directory structure
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-ceremonies/","level":1,"title":"Session Ceremonies","text":"","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#the-problem","level":2,"title":"The Problem","text":"
Sessions have two critical moments: the start and the end.
At the start, you need the agent to load context and confirm it knows what is going on.
At the end, you need to capture whatever the session produced before the conversation disappears.
Most ctx skills work conversationally: \"jot down: check DNS after deploy\" is as good as /ctx-pad add \"check DNS after deploy\". But session boundaries are different. They are well-defined moments with specific requirements, and partial execution is costly.
If the agent only half-loads context at the start, it works from stale assumptions. If it only half-persists at the end, learnings and decisions are lost.
This Is One of the Few Times Being Explicit Matters
Session ceremonies are the two bookend skills that mark these boundaries.
They are the exception to the conversational rule:
Invoke /ctx-remember and /ctx-wrap-up explicitly as slash commands.
Most ctx skills encourage natural language. These two are different:
Well-defined moments: Sessions have clear boundaries. A slash command marks the boundary unambiguously.
Ambiguity risk: \"Do you remember?\" could mean many things. /ctx-remember means exactly one thing: load context and present a structured readback.
Completeness: Conversational triggers risk partial execution. The agent might load some files but skip the session history, or persist one learning but forget to check for uncommitted changes. The slash command runs the full ceremony.
Muscle memory: Typing /ctx-remember at session start and /ctx-wrap-up at session end becomes a habit, like opening and closing braces.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-remember Skill Load context and present structured readback /ctx-wrap-up Skill Gather session signal, propose and persist context /ctx-commit Skill Commit with context capture (offered by wrap-up) ctx agent CLI Load token-budgeted context packet ctx journal source CLI List recent sessions ctx add CLI Persist learnings, decisions, conventions, tasks","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#session-start-ctx-remember","level":2,"title":"Session Start: /ctx-remember","text":"
Invoke at the beginning of every session:
/ctx-remember\n
The skill silently:
Loads the context packet via ctx agent --budget 4000
Reads TASKS.md, DECISIONS.md, LEARNINGS.md
Checks recent sessions via ctx journal source --limit 3
Then presents a structured readback with four sections:
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 relevant decisions or learnings
Next step: suggestion or question about what to focus on
The readback should feel like recall, not a file system tour. If the agent says \"Let me check if there are files...\" instead of a confident summary, the skill is not working correctly.
What About 'do you remember?'
The conversational trigger still works, but /ctx-remember guarantees the full ceremony runs.
After persisting, the skill marks the session as wrapped up via ctx system mark-wrapped-up. This suppresses context checkpoint nudges for 2 hours so the wrap-up ceremony itself does not trigger noisy reminders.
If there are uncommitted changes, offers to run /ctx-commit. Does not auto-commit.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#when-to-skip","level":2,"title":"When to Skip","text":"
Not every session needs ceremonies.
Skip /ctx-remember when:
You are doing a quick one-off lookup (reading a file, checking a value)
Context was already loaded this session via /ctx-agent
You are continuing immediately after a previous session and context is still fresh
Skip /ctx-wrap-up when:
Nothing meaningful happened (only read files, answered a question)
You already persisted everything manually during the session
The session was trivial (typo fix, quick config change)
A good heuristic: if the session produced something a future session should know about, run /ctx-wrap-up. If not, just close.
# Session start\n/ctx-remember\n\n# ... do work ...\n\n# Session end\n/ctx-wrap-up\n
That is the complete ceremony. Two commands, bookending your session.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#relationship-to-other-skills","level":2,"title":"Relationship to Other Skills","text":"Skill When Purpose /ctx-remember Session start Load and confirm context /ctx-reflect Mid-session breakpoints Checkpoint at milestones /ctx-wrap-up Session end Full session review and persist /ctx-commit After completing work Commit with context capture
/ctx-reflect is for mid-session checkpoints. /ctx-wrap-up is for end-of-session: it is more thorough, covers the full session arc, and includes the commit offer. If you already ran /ctx-reflect recently, /ctx-wrap-up avoids proposing the same candidates again.
Make it a habit: The value of ceremonies compounds over sessions. Each /ctx-wrap-up makes the next /ctx-remember richer.
Trust the candidates: The agent scans the full conversation. It often catches learnings you forgot about.
Edit before approving: If a proposed candidate is close but not quite right, tell the agent what to change. Do not settle for a vague learning when a precise one is possible.
Do not force empty ceremonies: If /ctx-wrap-up finds nothing worth persisting, that is fine. A session that only read files and answered questions does not need artificial learnings.
The Complete Session: the full session workflow that ceremonies bookend
Persisting Decisions, Learnings, and Conventions: deep dive on what gets persisted during wrap-up
Detecting and Fixing Drift: keeping context files accurate between ceremonies
Pausing Context Hooks: skip ceremonies entirely for quick tasks that don't need them
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-changes/","level":1,"title":"Reviewing Session Changes","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-changed-while-you-were-away","level":2,"title":"What Changed While You Were Away?","text":"
Between sessions, teammates commit code, context files get updated, and decisions pile up. ctx change gives you a single-command summary of everything that moved since your last session.
# Auto-detects your last session and shows what changed\nctx change\n\n# Check what changed in the last 48 hours\nctx change --since 48h\n\n# Check since a specific date\nctx change --since 2026-03-10\n
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#how-reference-time-works","level":2,"title":"How Reference Time Works","text":"
ctx change needs a reference point to compare against. It tries these sources in order:
--since flag: explicit duration (24h, 72h) or date (2026-03-10, RFC3339 timestamp)
Session markers: ctx-loaded-* files in .context/state/; picks the second-most-recent (your previous session start)
Event log: last context-load-gate event from .context/state/events.jsonl
Fallback: 24 hours ago
The marker-based detection means ctx change usually just works without any flags: it knows when you last loaded context and shows everything after that.
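The "second-most-recent marker" rule can be sketched with ls ordering (illustrative only; the real markers live in .context/state/ and ctx reads them itself):

```shell
mkdir -p state
touch -d '3 hours ago' state/ctx-loaded-aaa   # previous session start
touch -d '1 hour ago'  state/ctx-loaded-bbb   # current session start

# Newest first; the second entry is the previous session's marker
ls -t state/ctx-loaded-* | sed -n '2p'        # prints state/ctx-loaded-aaa
```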
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-it-reports","level":2,"title":"What It Reports","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#context-file-changes","level":3,"title":"Context file changes","text":"
Any .md file in .context/ modified after the reference time:
No changes? If nothing shows up, the reference time might be wrong. Use --since 48h to widen the window.
Works without git. Context file changes are detected by filesystem mtime, not git. Code changes require git.
Hook integration. The context-load-gate hook writes the session marker that ctx change uses for auto-detection. If you're not using the ctx plugin, markers won't exist and it falls back to the event log or 24h window.
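The mtime-based detection can be approximated with find's -newer test. A sketch of the idea (not ctx's implementation):

```shell
mkdir -p .context/state
touch -d '2 days ago' .context/NOTES.md       # unchanged since before last session
touch .context/TASKS.md                       # modified just now
touch -d '1 day ago' .context/state/marker    # stand-in for the session marker

# Context files modified after the reference time
find .context -name '*.md' -newer .context/state/marker   # prints .context/TASKS.md
```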
\"What does a full ctx session look like from start to finish?\"
You have ctx installed and your .context/ directory initialized, but the individual commands and skills feel disconnected.
How do they fit together into a coherent workflow?
This recipe walks through a complete session, from opening your editor to persisting context before you close it, so you can see how each piece connects.
Load: /ctx-remember: load context, get structured readback.
Orient: /ctx-status: check file health and token usage.
Pick: /ctx-next: choose what to work on.
Work: implement, test, iterate.
Commit: /ctx-commit: commit and capture decisions/learnings.
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony.
Read on for the full walkthrough with examples.
What is a Readback?
A readback is a structured summary where the agent plays back what it knows:
last session,
active tasks,
recent decisions.
This way, you can confirm it loaded the right context.
The term \"readback\" comes from aviation, where pilots repeat instructions back to air traffic control to confirm they heard correctly.
Same idea in ctx: The agent tells you what it \"thinks\" is going on, and you correct anything that's off before the work begins.
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 decisions or learnings that matter now
Next step: suggestion or question about what to focus on
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx status CLI command Quick health check on context files ctx agent CLI command Load token-budgeted context packet ctx journal source CLI command List previous sessions ctx journal source --show CLI command Inspect a specific session in detail /ctx-remember Skill Recall project context with structured readback /ctx-agent Skill Load full context packet inside the assistant /ctx-status Skill Show context summary with commentary /ctx-next Skill Suggest what to work on with rationale /ctx-commit Skill Commit code and prompt for context capture /ctx-reflect Skill Structured reflection checkpoint /ctx-history Skill Browse session history inside your AI assistant","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#the-workflow","level":2,"title":"The Workflow","text":"
The session lifecycle has seven steps. You will not always use every step (for example, a quick bugfix might skip reflection, and a research session might skip committing), but the full arc looks like this:
Load context > Orient > Pick a Task > Work > Commit > Reflect > Wrap up
Start every session by loading what you know. The fastest way is a single prompt:
Do you remember what we were working on?\n
This triggers the /ctx-remember skill. Behind the scenes, the assistant runs ctx agent --budget 4000, reads the files listed in the context packet (TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md), checks ctx journal source --limit 3 for recent sessions, and then presents a structured readback.
The readback should feel like a recall, not a file system tour. If you see \"Let me check if there are files...\" instead of a confident summary, the context system is not loaded properly.
As an alternative, if you want raw data instead of a readback, run ctx status in your terminal or invoke /ctx-status for a summarized health check showing file counts, token usage, and recent activity.
After loading context, verify you understand the current state.
/ctx-status\n
The status output shows which context files are populated, how many tokens they consume, and which files were recently modified. Look for:
Empty core files: TASKS.md or CONVENTIONS.md with no content means the context is sparse
High token count (over 30k): the context is bloated and might need ctx compact
No recent activity: files may be stale and need updating
If the status looks healthy and the readback from Step 1 gave you enough context, skip ahead.
If something seems off (stale tasks, missing decisions...), spend a minute reading the relevant file before proceeding.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
With context loaded, choose a task. You can pick one yourself, or ask the assistant to recommend:
/ctx-next\n
The skill reads TASKS.md, checks recent sessions to avoid re-suggesting completed work, and presents 1-3 ranked recommendations with rationale.
It prioritizes in-progress tasks over new starts (finishing is better than starting), respects explicit priority tags, and favors momentum: continuing a thread from a recent session is cheaper than context-switching.
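The first rule of that ordering (in-progress before new starts) can be illustrated with a toy sort. The status keywords and file shape below are invented for illustration; they are not ctx's actual TASKS.md format.

```shell
tasks=$(mktemp)
cat > "$tasks" <<'EOF'
pending      write docs
in-progress  redis backend
pending      fix flaky test
EOF
# "Finishing beats starting": surface in-progress items first (toy keywords)
{ grep '^in-progress' "$tasks"; grep '^pending' "$tasks"; }
```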
If you already know what you want to work on, state it directly:
Let's work on the session enrichment feature.\n
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-4-do-the-work","level":3,"title":"Step 4: Do the Work","text":"
This is the main body of the session: write code, fix bugs, refactor, research, whatever the task requires.
During this phase, a few ctx-specific patterns help:
Check decisions before choosing: when you face a design choice, check if a prior decision covers it.
Is this consistent with our decisions?\n
Constrain scope: keep the assistant focused on the task at hand.
Only change files in internal/cli/session/. Nothing else.\n
Use /ctx-implement for multistep plans: if the task has multiple steps, this skill executes them one at a time with build/test verification between each step.
Context monitoring runs automatically: the check-context-size hook monitors context capacity at adaptive intervals. Early in a session it stays silent. After 16+ prompts it starts monitoring, and past 30 prompts it checks frequently. If context capacity is running high, it will suggest saving unsaved work. No manual invocation is needed.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-5-commit-with-context","level":3,"title":"Step 5: Commit with Context","text":"
When the work is ready, use the context-aware commit instead of raw git commit:
/ctx-commit\n
The Agent May Recommend Committing
You do not always need to invoke /ctx-commit explicitly.
After a commit, the agent may proactively offer to capture context:
\"We just made a trade-off there. Want me to record it as a decision?\"
This is normal: The Agent Playbook encourages persisting at milestones, and a commit is a natural milestone.
As an alternative, you can ask the assistant \"can we commit this?\" and it will pick up the /ctx-commit skill for you.
The skill runs a pre-commit build check (for Go projects, go build), reviews the staged changes, drafts a commit message focused on \"why\" rather than \"what\", and then commits.
After the commit succeeds, it prompts you:
**Any context to capture?**\n\n- **Decision**: Did you make a design choice or trade-off?\n- **Learning**: Did you hit a gotcha or discover something?\n- **Neither**: No context to capture; we are done.\n
If you made a decision, the skill records it with ctx add decision. If you learned something, it records it with ctx add learning including context, lesson, and application fields. This is the bridge between committing code and remembering why the code looks the way it does.
If source code changed in areas that affect documentation, the skill also offers to check for doc drift.
At natural breakpoints (after finishing a feature, resolving a complex bug, or before switching tasks) pause to reflect:
/ctx-reflect\n
Agents Reflect at Milestones
Agents often reflect without explicit invocation.
After completing a significant piece of work, the agent may naturally surface items worth persisting:
\"We discovered that $PPID resolves differently inside hooks. Should I save that as a learning?\"
This is the agent following the Work-Reflect-Persist cycle from the Agent Playbook.
You do not need to say /ctx-reflect for this to happen; the agent treats milestones as reflection triggers on its own.
The skill works through a checklist: learnings discovered, decisions made, tasks completed or created, and whether there are items worth persisting. It then presents a summary with specific items to persist, each with the exact command to run:
I would suggest persisting:\n\n- **Learning**: `$PPID` in PreToolUse hooks resolves to the Claude Code PID\n `ctx add learning --context \"...\" --lesson \"...\" --application \"...\"`\n- **Task**: mark \"Add cooldown to ctx agent\" as done\n- **Decision**: tombstone-based cooldown with 10m default\n `ctx add decision \"...\"`\n\nWant me to persist any of these?\n
The skill asks before persisting anything. You choose what to keep.
Not every commit needs reflection. A typo fix does not. But when you have been debugging for an hour and finally understand the root cause, that is worth a reflection checkpoint.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-7-persist-before-ending","level":3,"title":"Step 7: Persist Before Ending","text":"
Before ending the session, run the wrap-up ceremony to capture outstanding learnings, decisions, conventions, and tasks:
/ctx-wrap-up\n
Ceremony Skills
/ctx-remember at session start and /ctx-wrap-up at session end are ceremony skills: Invoke them as explicit slash commands for completeness, not conversationally.
See Session Ceremonies for details on why these two skills are the exception to the conversational rule.
The skill gathers signal from git diff, recent commits, and the conversation, then proposes structured candidates for your approval. After you select which to keep, it persists them via ctx add and offers /ctx-commit if uncommitted changes remain.
Session transcripts are automatically captured by Claude Code and can be browsed later with ctx journal source and ctx journal source --show.
Last session (Feb 7, caching-layer): we implemented the in-memory cache with\nTTL expiration and wrote tests for the eviction policy.\n\nYou have 3 pending tasks; the biggest one is adding Redis as a backend.\n\nRecent decision:\nwe chose LRU over LFU because the access pattern is mostly recency-based.\n\nWant to continue with the Redis backend, or tackle something else?\n
You: Let's work on the Redis caching backend.
Agent (picks up the task, begins implementation):
Got it. I see the task in TASKS.md: \"Add Redis backend for cache layer.\"\n\nI will start with the connection pool setup. Only touching\n`internal/cache/redis/` for now.\n
(The agent works... writes code, runs tests, iterates.)
Agent (after implementing the connection pool and choosing a retry strategy):
The connection pool is working. I went with exponential backoff with jitter\nfor retries instead of fixed intervals; it handles thundering herd better\nunder reconnection storms. That is a design choice worth recording. \n\nWant me to save it as a decision?\n
You: Yes, go ahead.
Agent:
Done. Added to DECISIONS.md: \"Exponential backoff with jitter for Redis\nreconnection, to avoid thundering herd on pool recovery.\"\n
(More work happens... the agent implements the cache read/write paths.)
You: What else did we learn during this session?
Agent:
A few things worth capturing:\n\n* **Learning**: Redis `SET` with `NX` and `PX` flags handles atomic\n set-if-not-exists with TTL in a single call. No need for a separate\n `EXPIRE` command. We initially did it in two calls and hit a race condition.\n* **Learning**: The `go-redis` library's `PoolSize` default is 10 per CPU.\n For our workload, explicit sizing at 20 was more predictable.\n* **Task update**: The connection pool subtask is done; the read/write path\n is implemented but needs integration tests.\n\nWant me to persist these?\n
You: Yes. And let's wrap up.
Agent (reflects and persists... all in one flow):
Persisted:\n\n* 2 learnings added to LEARNINGS.md\n* Marked \"connection pool setup\" done in TASKS.md\n* Added \"write integration tests for Redis cache\" as a new task\n\nContext is up to date. You are good to close.\n
Notice What Happened
In the above workflow, the user never typed /ctx-reflect or ctx add learning.
The agent moved through Load, Orient, Pick, Work, Commit, and Reflect driven by natural conversation.
\"Let's wrap up\" was enough to trigger the full reflect-and-persist flow.
The agent surfaced persist-worthy items at milestones (after a design choice, after discovering a gotcha) without waiting to be asked.
This is the intended experience.
The commands and skills still exist for when you want precise control, but the agent is a proactive partner in the lifecycle, not a passive executor of slash commands.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Quick-reference checklist for a complete session:
Load: /ctx-remember: load context and confirm readback
Orient: /ctx-status: check file health and token usage
Pick: /ctx-next: choose what to work on
Work: implement, test, iterate (scope with \"only change X\")
Commit: /ctx-commit: commit and capture decisions/learnings
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony
Conversational equivalents: you can drive the same lifecycle with plain language:
Step Slash command Natural language Load /ctx-remember \"Do you remember?\" / \"What were we working on?\" Orient /ctx-status \"How's our context looking?\" Pick /ctx-next \"What should we work on?\" / \"Let's do the caching task\" Work -- \"Only change files in internal/cache/\" Commit /ctx-commit \"Commit this\" / \"Ship it\" Reflect /ctx-reflect \"What did we learn?\" / (agent offers at milestones) Wrap up /ctx-wrap-up (use the slash command for completeness)
The agent understands both columns.
In practice, most sessions use a mix:
Explicit Commands when you want precision;
Natural Language when you want flow and agentic autonomy.
The agent will also initiate steps on its own (particularly \"Reflect\") when it recognizes a milestone.
Short sessions (quick bugfix) might only use: Load, Work, Commit.
Long sessions should Reflect after each major milestone and persist learnings and decisions before ending.
Persist early if context is running low. A hook monitors context capacity and notifies you when it gets high, but do not wait for the notification. If you have been working for a while and have unpersisted learnings, persist proactively.
Browse previous sessions by topic. If you need context from a prior session, ctx journal source --show auth will match by keyword. You do not need to remember the exact date or slug.
Reflection is optional but valuable. You can skip /ctx-reflect for small changes, but always persist learnings and decisions before ending a session where you did meaningful work. These are what the next session loads.
Let the hook handle context loading. The PreToolUse hook runs ctx agent automatically with a cooldown, so context loads on first tool use without you asking. The /ctx-remember prompt at session start is for your benefit (to get a readback), not because the assistant needs it.
The agent is a proactive partner, not a passive tool. A ctx-aware agent follows the Agent Playbook: it watches for milestones (completed tasks, design decisions, discovered gotchas) and offers to persist them without being asked. If you finish a tricky debugging session, it may say \"That root cause is worth saving as a learning. Want me to record it?\" before you think to ask. This is by design.
Not every session needs the full ceremony. Quick investigations, one-off questions, small fixes unrelated to active project work: These tasks don't benefit from persistence nudges, ceremony reminders, or knowledge checks. Every hook still fires, consuming tokens and attention on work that won't produce learnings or decisions worth capturing.
","path":["Recipes","Sessions","Pausing Context Hooks"],"tags":[]},{"location":"recipes/session-pause/#tldr","level":2,"title":"TL;DR","text":"Command What it does ctx pause or /ctx-pause Silence all nudge hooks for this session ctx resume or /ctx-resume Restore normal hook behavior
Pause is session-scoped: It only affects the current session. Other sessions (same project, different terminal) are unaffected.
","path":["Recipes","Sessions","Pausing Context Hooks"],"tags":[]},{"location":"recipes/session-pause/#what-still-fires","level":2,"title":"What Still Fires","text":"
Security hooks always run, even when paused:
block-non-path-ctx: prevents ./ctx invocations
block-dangerous-commands: blocks sudo, force push, etc.
# 1. Session starts: Context loads normally.\n\n# 2. You realize this is a quick task\nctx pause\n\n# 3. Work without interruption: hooks are silent\n\n# 4. Session evolves into real work? Resume first\nctx resume\n\n# 5. Now wrap up normally\n# /ctx-wrap-up\n
Resume before wrapping up. If your quick task turns into real work, resume hooks before running /ctx-wrap-up. The wrap-up ceremony needs active hooks to capture learnings properly.
Initial context load is unaffected. The ~8k token startup injection (CLAUDE.md, playbook, constitution) happens before any command runs. Pause only affects hooks that fire during the session.
Use for quick investigations. Debugging a stack trace? Checking a git log? Answering a colleague's question? Pause, do the work, close the session. No ceremony needed.
Don't use for real work. If you're implementing features, fixing bugs, or making decisions: keep hooks active. The nudges exist to prevent context loss.
You're deep in a session and realize: \"I need to refactor the swagger definitions next time.\" You could add a task, but this isn't a work item: it's a note to future-you. You could jot it on the scratchpad, but scratchpad entries don't announce themselves.
How do you leave a message that your next session opens with?
Reminders surface automatically at session start: VERBATIM, every session, until you dismiss them.
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx remind CLI command Add a reminder (default action) ctx remind list CLI command Show all pending reminders ctx remind dismiss CLI command Remove a reminder by ID (or --all) /ctx-remind Skill Natural language interface to reminders","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-1-leave-a-reminder","level":3,"title":"Step 1: Leave a Reminder","text":"
Tell your agent what to remember, or run it directly:
You: \"remind me to refactor the swagger definitions\"\n\nAgent: [runs ctx remind \"refactor the swagger definitions\"]\n \"Reminder set:\n + [1] refactor the swagger definitions\"\n
Or from the terminal:
ctx remind \"refactor the swagger definitions\"\n
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-2-set-a-date-gate-optional","level":3,"title":"Step 2: Set a Date Gate (Optional)","text":"
If the reminder shouldn't fire until a specific date:
You: \"remind me to check the deploy logs after Tuesday\"\n\nAgent: [runs ctx remind \"check the deploy logs\" --after 2026-02-25]\n \"Reminder set:\n + [2] check the deploy logs (after 2026-02-25)\"\n
The reminder stays silent until that date, then fires every session.
The agent converts natural language dates (\"tomorrow\", \"next week\", \"after the release on Friday\") to YYYY-MM-DD. If it's ambiguous, it asks.
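Because gate dates are plain YYYY-MM-DD strings, "due" reduces to a string comparison: the ISO date format sorts lexicographically in date order. A sketch (the function name is hypothetical):

```shell
# Due if TODAY >= AFTER; YYYY-MM-DD compares correctly as a plain string
is_due() {
  if [ "$(printf '%s\n%s\n' "$1" "$2" | sort | head -n1)" = "$2" ]; then
    echo "due"
  else
    echo "not yet due"
  fi
}
is_due 2026-02-24 2026-02-25   # not yet due
is_due 2026-02-25 2026-02-25   # due: fires on the gate date itself
```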
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-3-start-a-new-session","level":3,"title":"Step 3: Start a New Session","text":"
Next session, the reminder appears automatically before anything else:
[1] refactor the swagger definitions\n [3] review auth token expiry logic\n [4] check deploy logs (after 2026-02-25, not yet due)\n
Date-gated reminders that haven't reached their date show (not yet due).
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#using-ctx-remind-in-a-session","level":2,"title":"Using /ctx-remind in a Session","text":"
Invoke the /ctx-remind skill, then describe what you want:
You: /ctx-remind remind me to update the API docs\nYou: /ctx-remind what reminders do I have?\nYou: /ctx-remind dismiss reminder 3\n
You say (after /ctx-remind) What the agent does \"remind me to update the API docs\" ctx remind \"update the API docs\" \"remind me next week to check staging\" ctx remind \"check staging\" --after 2026-03-02 \"what reminders do I have?\" ctx remind list \"dismiss reminder 3\" ctx remind dismiss 3 \"clear all reminders\" ctx remind dismiss --all","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#reminders-vs-scratchpad-vs-tasks","level":2,"title":"Reminders vs Scratchpad vs Tasks","text":"You want to... Use Leave a note that announces itself next session ctx remind Jot down a quick value or sensitive token ctx pad Track work with status and completion TASKS.md Record a decision or lesson for all sessions Context files
Decision guide:
If it should announce itself at session start → ctx remind
If it's a quiet note you'll check manually → ctx pad
If it's a work item you'll mark done → TASKS.md
Reminders Are Sticky Notes, Not Tasks
A reminder has no status, no priority, no lifecycle. It's a message to \"future you\" that fires until dismissed.
Reminders fire every session: Unlike nudges (which throttle to once per day), reminders repeat until you dismiss them. This is intentional: You asked to be reminded.
Date gating is session-scoped, not clock-scoped: --after 2026-02-25 means \"don't show until sessions on or after Feb 25.\" It does not mean \"alarm at midnight on Feb 25.\"
The agent handles date parsing: Say \"next week\" or \"after Friday\": The agent converts it to YYYY-MM-DD. The CLI only accepts the explicit date format.
Reminders are committed to git: They travel with the repo. If you switch machines, your reminders follow.
IDs never reuse: After dismissing reminder 3, the next reminder gets ID 4 (or higher). No confusion from recycled numbers.
Every session creates tombstone files in .context/state/ - small markers that suppress repeat hook nudges (\"already checked context size\", \"already sent persistence reminder\"). Over days and weeks, these accumulate into hundreds of files from long-dead sessions.
The files are harmless individually, but the clutter makes it harder to reason about state, and stale global tombstones can suppress nudges across sessions entirely.
ctx system prune --dry-run # preview what would be removed\nctx system prune # prune files older than 7 days\nctx system prune --days 1 # more aggressive: keep only today\n
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#commands-used","level":2,"title":"Commands Used","text":"Tool Type Purpose ctx system prune Command Remove old per-session state files ctx status Command Quick health overview including state dir","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#understanding-state-files","level":2,"title":"Understanding State Files","text":"
State files fall into two categories:
Session-scoped (contain a UUID in the filename): Created per-session to suppress repeat nudges. Safe to prune once the session ends. Examples:
Global (no UUID): Persist across sessions. ctx system prune preserves these automatically. Some are legitimate state (events.jsonl, memory-import.json); others may be stale tombstones that need manual review.
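The session/global split can be checked mechanically: session-scoped names embed a UUID, globals do not. The tombstone filename below is hypothetical; the regex is the standard 8-4-4-4-12 UUID shape.

```shell
statedir=$(mktemp -d)
# Session-scoped (hypothetical name with embedded UUID): safe-to-prune candidate
touch "$statedir/size-checked-3f2a9c1e-0b7d-4e5a-9c2f-1a2b3c4d5e6f"
# Global (no UUID): preserved by prune
touch "$statedir/events.jsonl"
# List only the session-scoped files
ls "$statedir" | grep -E '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'
```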
ctx system prune # older than 7 days\nctx system prune --days 3 # older than 3 days\nctx system prune --days 1 # older than 1 day (aggressive)\n
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#step-3-review-global-files","level":3,"title":"Step 3: Review Global Files","text":"
After pruning, check what prune preserved:
ls .context/state/ | grep -v '[0-9a-f]\\{8\\}-[0-9a-f]\\{4\\}'\n
Legitimate global files (keep):
events.jsonl - event log
memory-import.json - import tracking state
Stale global tombstones (safe to delete):
Files like backup-reminded, ceremony-reminded, version-checked with no session UUID are one-shot markers. If they are from a previous session, they are stale and can be removed manually.
Pruning active sessions is safe but noisy: If you prune a file belonging to a still-running session, the corresponding hook will re-fire its nudge on the next prompt. Minor UX annoyance, not data loss.
No context files are stored in state: The state directory contains only tombstones, counters, and diagnostic data. Nothing in .context/state/ affects your decisions, learnings, tasks, or conventions.
Test artifacts sneak in: Files like context-check-statstest or heartbeat-unknown are artifacts from development or testing. They lack UUIDs so prune preserves them. Delete manually.
Detecting and Fixing Drift: broader context maintenance including drift detection and archival
Troubleshooting: diagnostic workflow using ctx doctor and event logs
CLI Reference: system: full flag documentation for ctx system prune and related commands
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/system-hooks-audit/","level":1,"title":"Auditing System Hooks","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-problem","level":2,"title":"The Problem","text":"
ctx runs 14 system hooks behind the scenes: nudging your agent to persist context, warning about resource pressure, gating commits on QA. But these hooks are invisible by design. You never see them fire. You never know if they stopped working.
How do you verify your hooks are actually running, audit what they do, and get alerted when they go silent?
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tldr","level":2,"title":"TL;DR","text":"
ctx system check-resources # run a hook manually\nls -la .context/logs/ # check hook execution logs\nctx notify setup # get notified when hooks fire\n
Or ask your agent: \"Are our hooks running?\"
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx system <hook> CLI command Run a system hook manually ctx system resources CLI command Show system resource status ctx system stats CLI command Stream or dump per-session token stats ctx notify setup CLI command Configure webhook for audit trail ctx notify test CLI command Verify webhook delivery .ctxrc notify.events Configuration Subscribe to relay for full hook audit .context/logs/ Log files Local hook execution ledger","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-are-system-hooks","level":2,"title":"What Are System Hooks?","text":"
System hooks are plumbing commands that ctx registers with your AI tool (Claude Code, Cursor, etc.) via the plugin's hooks.json. They fire automatically at specific events during your AI session:
Event When Hooks UserPromptSubmit Before the agent sees your prompt 10 check hooks + heartbeat PreToolUse Before the agent uses a tool block-non-path-ctx, qa-reminder PostToolUse After a tool call succeeds post-commit
You never run these manually. Your AI tool runs them for you: That's the point.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-complete-hook-catalog","level":2,"title":"The Complete Hook Catalog","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#prompt-time-checks-userpromptsubmit","level":3,"title":"Prompt-Time Checks (UserPromptSubmit)","text":"
These fire before every prompt, but most are throttled to avoid noise.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-context-size-context-capacity-warning","level":4,"title":"check-context-size: Context Capacity Warning","text":"
What: Adaptive prompt counter. Silent for the first 15 prompts, then nudges with increasing frequency (every 5th, then every 3rd).
Why: Long sessions lose coherence. The nudge reminds both you and the agent to persist context before the window fills up.
Output: VERBATIM relay box with prompt count.
┌─ Context Checkpoint (prompt #20) ────────────────\n│ This session is getting deep. Consider wrapping up\n│ soon. If there are unsaved learnings, decisions, or\n│ conventions, now is a good time to persist them.\n│ ⏱ Context window: ~45k tokens (~22% of 200k)\n└──────────────────────────────────────────────────\n
Stats: Every prompt records token usage to .context/state/stats-{session}.jsonl. Monitor live with ctx system stats --follow or query with ctx system stats --json. Stats are recorded even during wrap-up suppression (event: suppressed).
Billing guard: When billing_token_warn is set in .ctxrc, a one-shot warning fires if session tokens exceed the threshold. This warning is independent of all other triggers - it fires even during wrap-up suppression.
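A sketch of the billing check against such a stats file. The JSONL field name `tokens` and the record shape are assumptions, and the literal threshold stands in for the billing_token_warn value from .ctxrc.

```shell
stats=$(mktemp)
cat > "$stats" <<'EOF'
{"prompt":1,"tokens":1200}
{"prompt":2,"tokens":1800}
EOF
# Sum per-prompt tokens (field name assumed) and compare to a threshold
total=$(awk -F'"tokens":' 'NF>1 {split($2,a,"}"); s+=a[1]} END {print s+0}' "$stats")
echo "session tokens: $total"
if [ "$total" -gt 2500 ]; then echo "billing warning"; fi
```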
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-persistence-context-staleness-nudge","level":4,"title":"check-persistence: Context Staleness Nudge","text":"
What: Tracks when .context/*.md files were last modified. If too many prompts pass without a write, nudges the agent to persist.
Why: Sessions produce insights that evaporate if not recorded. This catches the \"we talked about it but never wrote it down\" failure mode.
Output: VERBATIM relay after 20+ prompts without a context file change.
┌─ Persistence Checkpoint (prompt #20) ───────────\n│ No context files updated in 20+ prompts.\n│ Have you discovered learnings, made decisions,\n│ established conventions, or completed tasks\n│ worth persisting?\n│\n│ Run /ctx-wrap-up to capture session context.\n└──────────────────────────────────────────────────\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-ceremonies-session-ritual-adoption","level":4,"title":"check-ceremonies: Session Ritual Adoption","text":"
What: Scans your last 3 journal entries for /ctx-remember and /ctx-wrap-up usage. Nudges once per day if missing.
Why: Session ceremonies are the highest-leverage habit in ctx. This hook bootstraps the habit until it becomes automatic.
Output: Tailored nudge depending on which ceremony is missing.
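The scan itself is a keyword search over recent journal entries. The file layout below is invented for illustration; only the "last 3 entries, look for each ceremony token" logic follows the description above.

```shell
jdir=$(mktemp -d)
printf 'session used /ctx-remember\n' > "$jdir/2026-03-01.md"
printf 'no ceremonies here\n'         > "$jdir/2026-03-02.md"
printf 'plain notes\n'                > "$jdir/2026-03-03.md"
# Check the 3 most recent entries for each ceremony token
recent=$(ls "$jdir"/*.md | sort | tail -n 3)
for token in /ctx-remember /ctx-wrap-up; do
  if ! grep -q "$token" $recent; then echo "nudge: $token missing"; fi
done
```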
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-journal-unimported-session-reminder","level":4,"title":"check-journal: Unimported Session Reminder","text":"
What: Detects unimported Claude Code sessions and unenriched journal entries. Fires once per day.
Why: Exported sessions become searchable history. Unenriched entries lack metadata for filtering. Both decay in value over time.
Output: VERBATIM relay with counts and exact commands.
┌─ Journal Reminder ─────────────────────────────\n│ You have 3 new session(s) not yet exported.\n│ 5 existing entries need enrichment.\n│\n│ Export and enrich:\n│ ctx journal import --all\n│ /ctx-journal-enrich-all\n└────────────────────────────────────────────────\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-resources-system-resource-pressure","level":4,"title":"check-resources: System Resource Pressure","text":"
What: Monitors memory, swap, disk, and CPU load. Only fires at DANGER severity (memory >= 90%, swap >= 75%, disk >= 95%, load >= 1.5x CPU count).
Why: Resource exhaustion mid-session can corrupt work. This provides early warning to persist and exit.
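The DANGER gates reduce to simple threshold comparisons. The readings below are illustrative values; the thresholds are the ones listed above (memory 90%, swap 75%, disk 95%, load 1.5x CPU count).

```shell
# Illustrative readings; only memory crosses its DANGER threshold here
mem=92; swap=40; disk=80; load=3.2; cpus=4
if [ "$mem"  -ge 90 ]; then echo "DANGER: memory ${mem}%"; fi
if [ "$swap" -ge 75 ]; then echo "DANGER: swap ${swap}%";  fi
if [ "$disk" -ge 95 ]; then echo "DANGER: disk ${disk}%";  fi
awk -v l="$load" -v c="$cpus" 'BEGIN { if (l >= 1.5 * c) print "DANGER: load" }'
```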
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-knowledge-knowledge-file-growth","level":4,"title":"check-knowledge: Knowledge File Growth","text":"
What: Counts entries in LEARNINGS.md, DECISIONS.md, and lines in CONVENTIONS.md. Fires once per day when thresholds are exceeded.
Why: Large knowledge files dilute agent context. 35 learnings compete for attention; 15 focused ones get applied. Thresholds are configurable in .ctxrc.
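A rough manual version of those counts. The one-"## "-heading-per-entry format and the threshold of 15 are assumptions for this sketch; the real thresholds live in .ctxrc:

```shell
# Count entries assuming one "## " heading per learning/decision
# (format is an assumption; check your actual files).
count_entries() {
  if [ -f "$1" ]; then
    awk '/^## / { n++ } END { print n + 0 }' "$1"
  else
    echo 0
  fi
}

learnings=$(count_entries .context/LEARNINGS.md)
echo "learnings: $learnings"

# 15 is an illustrative threshold; the real ones are configured in .ctxrc
if [ "$learnings" -gt 15 ]; then
  echo "Consider distilling LEARNINGS.md before it dilutes agent context"
fi
```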
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-version-binaryplugin-version-drift","level":4,"title":"check-version: Binary/Plugin Version Drift","text":"
What: Compares the ctx binary version against the plugin version. Fires once per day. Also checks encryption key age for rotation nudge.
Why: Version drift means hooks reference features the binary doesn't have. The key rotation nudge prevents indefinite key reuse.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-reminders-pending-reminder-relay","level":4,"title":"check-reminders: Pending Reminder Relay","text":"
What: Reads .context/reminders.json and surfaces any due reminders via VERBATIM relay. No throttle: fires every session until dismissed.
Why: Reminders are sticky notes to future-you. Unlike nudges (which throttle to once per day), reminders repeat deliberately until the user dismisses them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-freshness-technology-constant-staleness","level":4,"title":"check-freshness: Technology Constant Staleness","text":"
What: Stats the files listed under freshness_files in .ctxrc and warns if any haven't been modified in over 6 months. Daily throttle. Silent when no files are configured (opt-in via .ctxrc).
Why: Model capabilities evolve: Token budgets, attention limits, and context window sizes that were accurate 6 months ago may no longer reflect best practices. This hook reminds you to review each file and touch it to confirm its values are still current.
Config (.ctxrc):
freshness_files:\n - path: config/thresholds.yaml\n desc: Model token limits and batch sizes\n review_url: https://docs.example.com/limits # optional\n
Each entry has a path (relative to project root), desc (what constants live there), and optional review_url (where to check current values). When review_url is set, the nudge includes \"Review against: {url}\". When absent, just \"Touch the file to mark it as reviewed.\"
Output: VERBATIM relay listing stale files, silent otherwise.
┌─ Technology Constants Stale ──────────────────────\n│ config/thresholds.yaml (210 days ago)\n│ - Model token limits and batch sizes\n│ Review against: https://docs.example.com/limits\n│ Touch each file to mark it as reviewed.\n└───────────────────────────────────────────────────\n
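The staleness test itself is easy to reproduce by hand with find, approximating 6 months as 180 days. The file path here is the doc's example entry; substitute your own freshness_files paths:

```shell
# Flag a configured file as stale when it hasn't been modified in
# ~6 months (180 days), mirroring what check-freshness reports.
is_stale() {
  [ -n "$(find "$1" -mtime +180 -print 2>/dev/null)" ]
}

if is_stale config/thresholds.yaml; then
  echo "stale: config/thresholds.yaml (touch to mark as reviewed)"
fi
```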
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-map-staleness-architecture-map-drift","level":4,"title":"check-map-staleness: Architecture Map Drift","text":"
What: Checks whether map-tracking.json is older than 30 days and whether there have been commits touching internal/ since the last map refresh. Daily throttle prevents repeated nudges.
Why: Architecture documentation drifts silently as code evolves. This hook detects structural changes that the map hasn't caught up with and suggests running /ctx-architecture to refresh.
Output: VERBATIM relay when stale and modules changed, silent otherwise.
┌─ Architecture Map Stale ────────────────────────────\n│ ARCHITECTURE.md hasn't been refreshed since 2026-01-15\n│ and there are commits touching 12 modules.\n│ /ctx-architecture keeps architecture docs drift-free.\n│\n│ Want me to run /ctx-architecture to refresh?\n└─────────────────────────────────────────────────────\n
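A manual approximation of the drift test, run from the repo root. The internal/ path and map-tracking.json name come from the description above; treat this as a sketch rather than the hook's real logic:

```shell
# Stale when BOTH conditions hold: the map file is older than 30 days
# AND commits have touched internal/ since it was last refreshed.
map_is_stale() {
  local map=$1 since
  [ -n "$(find "$map" -mtime +30 -print 2>/dev/null)" ] || return 1
  since=$(date -r "$map" +%Y-%m-%d)   # GNU date: mtime of the map file
  [ -n "$(git log --oneline --since="$since" -- internal/ 2>/dev/null)" ]
}

# Usage: map_is_stale map-tracking.json && echo "run /ctx-architecture"
```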
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#heartbeat-session-heartbeat-webhook","level":4,"title":"heartbeat: Session Heartbeat Webhook","text":"
What: Fires on every prompt. Sends a webhook notification with prompt count, session ID, context modification status, and token usage telemetry. Never produces stdout.
Why: Other hooks only send webhooks when they \"speak\" (nudge/relay). When silent, you have no visibility into session activity. The heartbeat provides a continuous session-alive signal with token consumption data for observability dashboards or liveness monitoring.
Token fields (tokens, context_window, usage_pct) are included when usage data is available from the session JSONL file.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tool-time-hooks-pretooluse-posttooluse","level":3,"title":"Tool-Time Hooks (PreToolUse / PostToolUse)","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#block-non-path-ctx-path-enforcement-hard-gate","level":4,"title":"block-non-path-ctx: PATH Enforcement (Hard Gate)","text":"
What: Blocks any Bash command that invokes ./ctx, ./dist/ctx, go run ./cmd/ctx, or an absolute path to ctx. Only PATH invocations are allowed.
Why: Enforces CONSTITUTION.md's invocation invariant. Running a dev-built binary in production context causes version confusion and silent behavior drift.
Output: Block response (prevents the tool call):
{\"decision\": \"block\", \"reason\": \"Use 'ctx' from PATH, not './ctx'...\"}\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#qa-reminder-pre-commit-qa-gate","level":4,"title":"qa-reminder: Pre-Commit QA Gate","text":"
What: Fires on every Edit tool use. Reminds the agent to lint and test the entire project before committing.
Why: Agents tend to say \"I'll test later\" and then commit untested code. Repetition is intentional: the hook reinforces the habit on every edit, not just before commits.
Output: Agent directive with hard QA gate instructions.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#post-commit-context-capture-after-commit","level":4,"title":"post-commit: Context Capture After Commit","text":"
What: Fires after any git commit (excludes --amend). Prompts the agent to offer context capture (decision? learning?) and suggest running lints/tests before pushing.
Why: Commits are natural reflection points. The nudge converts mechanical git operations into context-capturing opportunities.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-the-local-event-log","level":2,"title":"Auditing Hooks via the Local Event Log","text":"
If you don't need an external audit trail, enable the local event log for a self-contained record of hook activity:
# .ctxrc\nevent_log: true\n
Once enabled, every hook that fires writes an entry to .context/state/events.jsonl. Query it with ctx system events:
ctx system events # last 50 events\nctx system events --hook qa-reminder # filter by hook\nctx system events --session <id> # filter by session\nctx system events --json | jq '.' # raw JSONL for processing\n
The event log is local, queryable, and doesn't require any external service. For a full diagnostic workflow combining event logs with structural health checks, see Troubleshooting.
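For aggregate questions the built-in filters don't answer, plain grep works on the JSONL. The "hook" field name is an assumption here; check one line of your events.jsonl first:

```shell
# Count fires per hook from the local event log (field name assumed).
hook_counts() {
  grep -o '"hook":"[^"]*"' "$1" | sort | uniq -c | sort -rn
}

# Usage: hook_counts .context/state/events.jsonl
```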
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-webhooks","level":2,"title":"Auditing Hooks via Webhooks","text":"
The most powerful audit setup pipes all hook output to a webhook, giving you a real-time external record of what your agent is being told.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-1-set-up-the-webhook","level":3,"title":"Step 1: Set Up the Webhook","text":"
ctx notify setup\n# Enter your webhook URL (Slack, Discord, ntfy.sh, IFTTT, etc.)\n
See Webhook Notifications for service-specific setup.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-2-subscribe-to-relay-events","level":3,"title":"Step 2: Subscribe to relay Events","text":"
# .ctxrc\nnotify:\n events:\n - relay # all hook output: VERBATIM relays, directives, blocks\n - nudge # just the user-facing VERBATIM relays\n
The relay event fires for every hook that produces output. This includes:
| Hook | Event sent |
| --- | --- |
| check-context-size | relay + nudge |
| check-persistence | relay + nudge |
| check-ceremonies | relay + nudge |
| check-journal | relay + nudge |
| check-resources | relay + nudge |
| check-knowledge | relay + nudge |
| check-version | relay + nudge |
| check-reminders | relay + nudge |
| check-freshness | relay + nudge |
| check-map-staleness | relay + nudge |
| heartbeat | heartbeat only |
| block-non-path-ctx | relay only |
| post-commit | relay only |
| qa-reminder | relay only |
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-3-cross-reference","level":3,"title":"Step 3: Cross-Reference","text":"
With relay enabled, your webhook receives a JSON payload every time a hook fires:
{\n \"event\": \"relay\",\n \"message\": \"check-persistence: No context updated in 20+ prompts\",\n \"session_id\": \"b854bd9c\",\n \"timestamp\": \"2026-02-22T14:30:00Z\",\n \"project\": \"my-project\"\n}\n
This creates an external audit trail independent of the agent. You can now cross-verify: did the agent actually relay the checkpoint the hook told it to relay?
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#verifying-hooks-actually-fire","level":2,"title":"Verifying Hooks Actually Fire","text":"
Hooks are invisible. An invisible thing that breaks is indistinguishable from an invisible thing that never existed. Three verification methods, from simplest to most robust:
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-1-ask-the-agent","level":3,"title":"Method 1: Ask the Agent","text":"
The simplest check. After a few prompts into a session:
\"Did you receive any hook output this session? Print the last\ncontext checkpoint or persistence nudge you saw.\"\n
The agent should be able to recall recent hook output from its context window. If it says \"I haven't received any hook output\", either:
The hooks aren't firing (check installation);
The session is too short (hooks throttle early);
The hooks fired but the agent absorbed them silently.
Limitation: You are trusting the agent to report accurately. Agents sometimes confabulate or miss context. Use this as a quick smoke test, not definitive proof.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-2-check-the-webhook-trail","level":3,"title":"Method 2: Check the Webhook Trail","text":"
If you have relay events enabled, check your webhook receiver. Every hook that fires sends a timestamped notification. No notification = no fire.
This is the ground truth. The webhook is called directly by the ctx binary, not by the agent. The agent cannot fake, suppress, or modify webhook deliveries.
Compare what the webhook received against what the agent claims to have relayed. Discrepancies mean the agent is absorbing nudges instead of surfacing them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-3-read-the-local-logs","level":3,"title":"Method 3: Read the Local Logs","text":"
Hooks that support logging write to .context/logs/.
Logs are append-only and written by the ctx binary, not the agent.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#detecting-silent-hook-failures","level":2,"title":"Detecting Silent Hook Failures","text":"
The hardest failure mode: hooks that stop firing without error. The plugin config changes, a binary update drops a hook, or a PATH issue silently breaks execution. Nothing errors: The hook just never runs.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-staleness-signal","level":3,"title":"The Staleness Signal","text":"
If .context/logs/check-context-size.log has no entries newer than 5 days but you've been running sessions daily, something is wrong. The absence of evidence is evidence of absence, but only if you control for inactivity.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#false-positive-protection","level":3,"title":"False Positive Protection","text":"
A naive \"hooks haven't fired in N days\" alert fires incorrectly when you simply haven't used ctx. The correct check needs two inputs:
Last hook fire time: from .context/logs/ or webhook history
Last session activity: from journal entries or ctx journal source
If sessions are happening but hooks aren't firing, that's a real problem. If neither sessions nor hooks are happening, that's a vacation.
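A minimal sketch of that two-input check. The .context/logs/ path comes from this page; the journal directory is an assumption, so adjust it to wherever your session evidence lives:

```shell
# Alert only when sessions are newer than the last hook fire.
newest_mtime() {
  # newest modification time (epoch seconds) of any file under $1
  find "$1" -type f -printf '%T@\n' 2>/dev/null | sort -rn | head -1 | cut -d. -f1
}

hooks=$(newest_mtime .context/logs)
sessions=$(newest_mtime .context/journal)   # journal path is an assumption

if [ "${sessions:-0}" -gt "${hooks:-0}" ]; then
  echo "Sessions newer than last hook fire: hooks may be silently broken"
else
  echo "No drift detected (or no recent activity on either side)"
fi
```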
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-to-check","level":3,"title":"What to Check","text":"
When you suspect hooks aren't firing:
# 1. Verify the plugin is installed\nls ~/.claude/plugins/\n\n# 2. Check hook registration\ncat ~/.claude/plugins/ctx/hooks.json | head -20\n\n# 3. Run a hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-context-size\n\n# 4. Check for PATH issues\nwhich ctx\nctx --version\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tips","level":2,"title":"Tips","text":"
Start with nudge, graduate to relay: The nudge event covers user-facing VERBATIM relays. Add relay when you want full visibility into agent directives and hard gates.
Webhooks are your trust anchor: The agent can ignore a nudge, but it can't suppress the webhook. If the webhook fired and the agent didn't relay, you have proof of a compliance gap.
Hooks are throttled by design: Most check hooks fire once per day or use adaptive frequency. Don't expect a notification every prompt: Silence usually means the throttle is working, not that the hook is broken.
Daily markers live in .context/state/: Throttle files are stored in .context/state/ alongside other project-scoped state. If you need to force a hook to re-fire during testing, delete the corresponding marker file.
The QA reminder is intentionally noisy: Unlike other hooks, qa-reminder fires on every Edit call with no throttle. This is deliberate: Commit quality degrades when the reminder fades from salience.
Log files are safe to commit: .context/logs/ contains only timestamps, session IDs, and status keywords. No secrets, no code.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#next-up","level":2,"title":"Next Up","text":"
Detecting and Fixing Drift →: Keep context files accurate as your codebase evolves.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Customizing Hook Messages: override what hooks say without changing what they do
Webhook Notifications: setting up and configuring the webhook system
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Detecting and Fixing Drift: structural checks that complement runtime hook auditing
CLI Reference: full ctx system command reference
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/task-management/","level":1,"title":"Tracking Work Across Sessions","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-problem","level":2,"title":"The Problem","text":"
You have work that spans multiple sessions. Tasks get added during one session, partially finished in another, and completed days later.
Without a system, follow-up items fall through the cracks, priorities drift, and you lose track of what was done versus what still needs doing. TASKS.md grows cluttered with completed checkboxes that obscure the remaining work.
How do you manage work items that span multiple sessions without losing context?
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tldr","level":2,"title":"TL;DR","text":"
Read on for the full workflow and conversational patterns.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| ctx add task | Command | Add a new task to TASKS.md |
| ctx task complete | Command | Mark a task as done by number or text |
| ctx task snapshot | Command | Create a point-in-time backup of TASKS.md |
| ctx task archive | Command | Move completed tasks to archive file |
| /ctx-add-task | Skill | AI-assisted task creation with validation |
| /ctx-archive | Skill | AI-guided archival with safety checks |
| /ctx-next | Skill | Pick what to work on based on priorities |
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-1-add-tasks-with-priorities","level":3,"title":"Step 1: Add Tasks with Priorities","text":"
Every piece of follow-up work gets a task. Use ctx add task from the terminal or /ctx-add-task from your AI assistant. Tasks should start with a verb and be specific enough that someone unfamiliar with the session could act on them.
# High-priority bug found during code review\nctx add task \"Fix race condition in session cooldown\" --priority high\n\n# Medium-priority feature work\nctx add task \"Add --format json flag to ctx status for CI integration\" --priority medium\n\n# Low-priority cleanup\nctx add task \"Remove deprecated --raw flag from ctx load\" --priority low\n
The /ctx-add-task skill validates your task before recording it. It checks that the description is actionable, not a duplicate, and specific enough for someone else to pick up.
If you say \"fix the bug,\" it will ask you to clarify which bug and where.
Tasks Are Often Created Proactively
In practice, many tasks are created proactively by the agent rather than by explicit CLI commands.
After completing a feature, the agent will often identify follow-up work: tests, docs, edge cases, error handling, and offer to add them as tasks.
You do not need to dictate ctx add task commands; the agent picks up on work context and suggests tasks naturally.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-2-organize-with-phase-sections","level":3,"title":"Step 2: Organize with Phase Sections","text":"
Tasks live in phase sections inside TASKS.md.
Phases provide logical groupings that preserve order and enable replay.
A task does not move between sections. It stays in its phase permanently, and status is tracked via checkboxes and inline tags.
## Phase 1: Core CLI\n\n- [x] Implement ctx add command `#done:2026-02-01-143022`\n- [x] Implement ctx task complete command `#done:2026-02-03-091544`\n- [ ] Add --section flag to ctx add task `#priority:medium`\n\n## Phase 2: AI Integration\n\n- [ ] Implement ctx agent cooldown `#priority:high` `#in-progress`\n- [ ] Add ctx watch XML parsing `#priority:medium`\n - Blocked by: Need to finalize agent output format\n\n## Backlog\n\n- [ ] Performance optimization for large TASKS.md files `#priority:low`\n- [ ] Add metrics dashboard to ctx status `#priority:deferred`\n
Use --section when adding a task to a specific phase:
ctx add task \"Add ctx watch XML parsing\" --priority medium --section \\\n \"Phase 2: AI Integration\"\n
Without --section, the task is inserted before the first unchecked task in TASKS.md.
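You can check where that default insert would land with a one-line scan that prints the line number of the first unchecked task:

```shell
# Print the line number of the first unchecked task in a TASKS.md.
first_unchecked() {
  awk '/^- \[ \]/ { print NR; exit }' "$1"
}

# Usage: first_unchecked TASKS.md
```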
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
At the start of a session, or after finishing a task, use /ctx-next to get prioritized recommendations.
The skill reads TASKS.md, checks recent sessions, and ranks candidates using explicit priority, blocking status, in-progress state, momentum from recent work, and phase order.
You can also ask naturally: \"what should we work on?\" or \"what's the highest priority right now?\"
/ctx-next\n
The output looks like this:
**1. Implement ctx agent cooldown** `#priority:high`\n\n Still in-progress from yesterday's session. The tombstone file approach is\n half-built. Finishing is cheaper than context-switching.\n\n**2. Add --section flag to ctx add task** `#priority:medium`\n\n Last Phase 1 item. Quick win that unblocks organized task entry.\n\n---\n\n*Based on 8 pending tasks across 3 phases.\n\nLast session: agent-cooldown (2026-02-06).*\n
In-progress tasks almost always come first:
Finishing existing work takes priority over starting new work.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-4-complete-tasks","level":3,"title":"Step 4: Complete Tasks","text":"
When a task is done, mark it complete by number or partial text match:
# By task number (as shown in TASKS.md)\nctx task complete 3\n\n# By partial text match\nctx task complete \"agent cooldown\"\n
The task's checkbox changes from [ ] to [x] and a #done timestamp is added. Tasks are never deleted: they stay in their phase section so history is preserved.
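Mechanically, the completion edit is roughly the following. This is a simplified sketch: the real command also resolves task numbers and validates matches, and the timestamp format is copied from the TASKS.md example above:

```shell
# Flip the first unchecked task matching $2 to [x] and stamp it #done.
complete_task() {
  local file=$1 match=$2 stamp tmp
  stamp=$(date +%Y-%m-%d-%H%M%S)
  tmp=$(mktemp)
  awk -v m="$match" -v s="$stamp" '
    !done && /^- \[ \]/ && index($0, m) {
      sub(/^- \[ \]/, "- [x]")
      $0 = $0 "  `#done:" s "`"
      done = 1
    }
    { print }
  ' "$file" > "$tmp" && mv "$tmp" "$file"
}
```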
Be Conversational
You rarely need to run ctx task complete yourself during an interactive session.
When you say something like \"the rate limiter is done\" or \"we finished that,\" the agent marks the task complete and moves on to suggesting what is next.
The CLI commands are most useful for manual housekeeping, scripted workflows, or when you want precision.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-5-snapshot-before-risky-changes","level":3,"title":"Step 5: Snapshot Before Risky Changes","text":"
Before a major refactor or any change that might break things, snapshot your current task state. This creates a copy of TASKS.md in .context/archive/ without modifying the original.
# Default snapshot\nctx task snapshot\n\n# Named snapshot (recommended before big changes)\nctx task snapshot \"before-refactor\"\n
This creates a file like .context/archive/tasks-before-refactor-2026-02-08-1430.md. If the refactor goes sideways and you need to confirm what the task state looked like before you started, the snapshot is there.
Snapshots are cheap: Take them before any change you might want to undo or review later.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-6-archive-when-tasksmd-gets-cluttered","level":3,"title":"Step 6: Archive When TASKS.md Gets Cluttered","text":"
After several sessions, TASKS.md accumulates completed tasks that make it hard to see what is still pending.
Use ctx task archive to move all [x] items to a timestamped archive file.
Start with a dry run to preview what will be moved:
ctx task archive --dry-run\n
Then archive:
ctx task archive\n
Completed tasks move to .context/archive/tasks-2026-02-08.md. Phase headers are preserved in the archive for traceability. Pending tasks ([ ]) remain in TASKS.md.
The /ctx-archive skill adds two safety checks before archiving: it verifies that completed tasks are genuinely cluttering the view and that nothing was marked [x] prematurely.
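The split itself can be pictured with this simplified sketch: completed tasks move to the archive under their phase headers, pending tasks stay. The real command also carries nested notes and writes a timestamped archive filename:

```shell
# Simplified sketch of the archive split. Headers stay in TASKS.md and
# are copied into the archive only when a phase has completed tasks.
archive_done() {
  local file=$1 archive=$2 tmp
  tmp=$(mktemp)
  awk -v arch="$archive" '
    /^## /     { header = $0; print; next }    # headers stay in place
    /^- \[x\]/ {
      if (header != printed) { print header >> arch; printed = header }
      print >> arch; next                      # completed -> archive
    }
    { print }                                  # everything else stays
  ' "$file" > "$tmp" && mv "$tmp" "$file"
}
```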
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#letting-the-agent-drive","level":2,"title":"Letting the Agent Drive","text":"
The steps above show the CLI commands for task management.
In practice, most task management happens conversationally:
An agent that has loaded the context files does not need you to type ctx add task. It tracks work naturally and offers the right operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#use-natural-language-instead-of-commands","level":3,"title":"Use Natural Language Instead of Commands","text":"
These conversational prompts replace explicit commands during interactive sessions:
| Instead of typing... | Just say... |
| --- | --- |
| ctx add task \"Write tests for X\" | \"We should add tests for this: track that?\" |
| /ctx-next | \"What should we work on?\" |
| ctx task complete \"rate limiting\" | \"The rate limiter is done, what's next?\" |
| ctx task archive | \"TASKS.md is getting long, can you clean it up?\" |
| ctx add task ... && ctx add task ... | \"Add follow-ups for what we just built.\" |
The agent translates these into the right ctx operations behind the scenes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-creates-tasks-proactively","level":3,"title":"The Agent Creates Tasks Proactively","text":"
After completing a piece of work, the agent does not wait for you to dictate follow-up items. It identifies them from the work context and offers:
**Agent**: \"I've finished implementing the rate limiter middleware. I noticed\nthree follow-up items:\n\n1. Write integration tests for the rate limiter (the unit tests pass but\n there's no end-to-end coverage)\n2. Add rate limit headers to API responses (X-RateLimit-Remaining, etc.)\n3. Document the rate limit configuration in the ops runbook\n\nWant me to add these as tasks?\"\n
This is how an agent behaves once it has read the Agent Playbook.
The playbook's \"Work, Reflect, Persist\" cycle means the agent asks itself after every milestone: Did I just complete a task? Did new work emerge?
When the answer is yes, it acts.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#asking-the-agent-what-to-capture","level":3,"title":"Asking the Agent What to Capture","text":"
You don't always know what's worth persisting.
Instead of reviewing the session yourself and typing specific commands, ask the agent to do the thinking:
| You say | What happens |
| --- | --- |
| \"What tasks came out of this session?\" | Agent reviews the work, identifies follow-ups, and offers to add them |
| \"Are there any tasks, learnings, or decisions worth adding?\" | Agent scans the full session for all three types and proposes a batch |
| \"What should we remember from today?\" | Agent surfaces everything persist-worthy: not just tasks |
| \"Anything we should track before we move on?\" | Agent checks for loose ends specific to the work just completed |
This is especially powerful after long or exploratory sessions where multiple threads were touched.
The agent has the full conversation in context and can spot follow-ups you might miss:
You: \"Are there any tasks, learnings, or decisions worth adding\n from this session?\"\n\nAgent: \"Yes. Here's what I'd suggest:\n\n Tasks:\n - Add retry logic to the webhook delivery (we discussed it when\n the timeout test failed but never tracked it)\n - Update the API docs to reflect the new rate limit headers\n\n Learnings:\n - net/http's default client has no timeout: we burned 15 minutes\n on that before adding an explicit 30s timeout\n\n Decisions:\n - We chose token-bucket over sliding-window for rate limiting\n because it handles bursts better and the library is simpler\n\n Want me to add all of these?\"\n\nYou: \"All of them.\"\n
The agent fills in structured fields from what it observed during the session. You get the same quality entries as if you typed verbose commands, without needing to remember what happened or how to format it.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-completes-tasks-and-moves-on","level":3,"title":"The Agent Completes Tasks and Moves On","text":"
When you confirm that work is finished, or when the agent can see that it is, it marks the task done and suggests the next step:
You: \n\"The cooldown logic is working now.\"\n\nAgent: \n\"Marked 'Implement ctx agent cooldown' as done. Your highest-\npriority remaining task is 'Add --section flag to ctx add task'.\n\nIt's the last item in Phase 1 and a quick win. Want to start on that?\"\n
The agent chains completion into next-task selection.
One sentence from you triggers two operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#end-of-session-capture","level":3,"title":"End-of-Session Capture","text":"
At the end of a session, the agent reviews what happened and catches loose ends:
You: \n\"I think we're good for today.\"\n\nAgent: \n\"Before we wrap up: during the middleware work you mentioned\nwanting to add metrics for rate limit hits. That never made it into TASKS.md.\n\nWant me to add it as a task so it's there next session?\"\n
This is the proactive persistence model at work.
The agent monitors the conversation for mentioned-but-not-tracked work and surfaces it before context is lost.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#a-conversational-session-example","level":3,"title":"A Conversational Session Example","text":"
Here is what task management looks like in a real session when the agent is driving:
You: What's left to do?\nAgent: You have 5 pending tasks across 2 phases. The highest priority is\n \"Implement ctx agent cooldown\": it's marked in-progress from\n yesterday and half-built. Want to pick that up?\n\nYou: Yeah, let's finish it.\n [... 20 minutes of implementation work ...]\n\nYou: That's working. Tests pass.\nAgent: Nice. I'll mark \"Implement ctx agent cooldown\" as done.\n I noticed two things from the implementation:\n 1. The tombstone file cleanup should happen on a timer, not just\n at startup. Want me to add that as a task?\n 2. We learned that file locks don't work reliably on NFS. I'll\n add that to LEARNINGS.md.\n\nYou: Yes to both. What's next?\nAgent: Added the cleanup task as high priority. Next up is \"Add --section\n flag to ctx add task\": last item in Phase 1. It should be\n straightforward since the flag parsing pattern already exists\n in the codebase. Ready?\n
It's All Conversational
Notice what did not happen: The user never typed a ctx command.
The agent handled task completion, follow-up creation, learning capture, and next-task selection: all from natural conversation.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
# Add a task\nctx add task \"Implement rate limiting for API endpoints\" --priority high\n\n# Add to a specific phase\nctx add task \"Write integration tests for rate limiter\" --section \"Phase 2\"\n\n# See what to work on\n# (from AI assistant) /ctx-next\n\n# Mark done by text\nctx task complete \"rate limiting\"\n\n# Mark done by number\nctx task complete 5\n\n# Snapshot before a risky refactor\nctx task snapshot \"before-middleware-rewrite\"\n\n# Archive completed tasks when the list gets long\nctx task archive --dry-run # preview first\nctx task archive # then archive\n
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tips","level":2,"title":"Tips","text":"
Start tasks with a verb ("Add," "Fix," "Implement," "Investigate"), not just a topic like "Authentication."
Include the why in the task description. Future sessions lack the context of why you added the task. \"Add rate limiting\" is worse than \"Add rate limiting to prevent abuse on the public API after the load test showed 10x traffic spikes.\"
Use #in-progress sparingly. Only one or two tasks should carry this tag at a time. If everything is in-progress, nothing is.
Snapshot before, not after. The point of a snapshot is to capture the state before a change, not to celebrate what you just finished.
Archive regularly. Once completed tasks outnumber pending ones, it is time to archive. A clean TASKS.md helps both you and your AI assistant focus.
Never delete tasks. Mark them [x] (completed) or [-] (skipped with a reason). Deletion breaks the audit trail.
Trust the agent's task instincts. When the agent suggests follow-up items after completing work, it is drawing on the full context of what just happened.
Conversational prompts beat commands in interactive sessions. Saying \"what should we work on?\" is faster and more natural than running /ctx-next. Save explicit commands for scripts, CI, and unattended runs.
Let the agent chain operations. A single statement like \"that's done, what's next?\" can trigger completion, follow-up identification, and next-task selection in one flow.
Review proactive task suggestions before moving on. The best follow-ups come from items spotted in-context right after the work completes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#next-up","level":2,"title":"Next Up","text":"
Using the Scratchpad →: Store short-lived sensitive notes in an encrypted scratchpad.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle including task management in context
Persisting Decisions, Learnings, and Conventions: capturing the \"why\" behind your work
Detecting and Fixing Drift: keeping TASKS.md accurate over time
CLI Reference: full documentation for ctx add, ctx task complete, ctx task
Context Files: TASKS.md: format and conventions for TASKS.md
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/troubleshooting/","level":1,"title":"Troubleshooting","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-problem","level":2,"title":"The Problem","text":"
Something isn't working: a hook isn't firing, nudges are too noisy, context seems stale, or the agent isn't following instructions. The information to diagnose it exists (across status, drift, event logs, hook config, and session history), but assembling it manually is tedious.
ctx doctor # structural health check\nctx system events --last 20 # recent hook activity\n# or ask: \"something seems off, can you diagnose?\"\n
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx doctor CLI command Structural health report ctx doctor --json CLI command Machine-readable health report ctx system events CLI command Query local event log /ctx-doctor Skill Agent-driven diagnosis with analysis","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#quick-check-ctx-doctor","level":3,"title":"Quick Check: ctx doctor","text":"
Run ctx doctor for an instant structural health report. It checks context initialization, required files, drift, hook configuration, event logging, webhooks, reminders, task completion ratio, and context token size: all in one pass:
ctx doctor\n
ctx doctor\n==========\n\nStructure\n ✓ Context initialized (.context/)\n ✓ Required files present (4/4)\n\nQuality\n ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)\n\nHooks\n ✓ hooks.json valid (14 hooks registered)\n ○ Event logging disabled (enable with event_log: true in .ctxrc)\n\nState\n ✓ No pending reminders\n ⚠ Task completion ratio high (18/22 = 82%): consider archiving\n\nSize\n ✓ Context size: ~4200 tokens (budget: 8000)\n\nSummary: 2 warnings, 0 errors\n
Warnings are non-critical but worth fixing. Errors need attention. Informational notes (○) flag optional features that aren't enabled.
For power users: ctx system events with filters gives direct access to the event log.
# Last 50 events (default)\nctx system events\n\n# Events from a specific session\nctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6\n\n# Only QA reminder events\nctx system events --hook qa-reminder\n\n# Raw JSONL for jq processing\nctx system events --json | jq '.message'\n\n# Include rotated (older) events\nctx system events --all --last 100\n
Filters use AND logic: --hook qa-reminder --session abc123 returns only QA reminder events from that specific session.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#common-problems","level":2,"title":"Common Problems","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#ctx-not-initialized","level":3,"title":"\"ctx: not initialized\"","text":"
Symptoms: Any ctx command fails with ctx: not initialized - run \"ctx init\" first.
Cause: You're running ctx in a directory without an initialized .context/ directory. This guard runs on all user-facing commands to prevent confusing downstream errors.
Fix:
ctx init # create .context/ with template files\nctx init --minimal # or just the essentials (CONSTITUTION, TASKS, DECISIONS)\n
Commands that work without initialization: ctx init, ctx setup, ctx doctor, and help-only grouping commands (ctx, ctx system).
Symptoms: No nudges appearing, webhook silent, event log shows no entries for the expected hook.
Diagnosis:
# 1. Check if ctx is installed and on PATH\nwhich ctx && ctx --version\n\n# 2. Check if the hook is registered\ngrep \"check-persistence\" ~/.claude/plugins/ctx/hooks.json\n\n# 3. Run the hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-persistence\n\n# 4. Check event log for the hook (if enabled)\nctx system events --hook check-persistence\n
Common causes:
Plugin is not installed: run ctx init --claude to reinstall
PATH issue: the hook invokes ctx from PATH; ensure it resolves
Throttle active: most hooks fire once per day: check .context/state/ for daily marker files
Hook silenced: a custom message override may be an empty file: check ctx system message list for overrides
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#too-many-nudges","level":3,"title":"\"Too many nudges\"","text":"
Symptoms: The agent is overwhelmed with hook output. Context checkpoints, persistence reminders, and QA gates fire constantly.
Diagnosis:
# Check how often hooks fired recently\nctx system events --last 50\n\n# Count fires per hook\nctx system events --json | jq -r '.detail.hook // \"unknown\"' \\\n | sort | uniq -c | sort -rn\n
Common causes:
QA reminder is noisy by design: it fires on every Edit call with no throttle. This is intentional. If it's too much, silence it with an empty override: ctx system message edit qa-reminder gate, then empty the file
Long session: context checkpoint fires with increasing frequency after prompt 15. This is the system telling you the session is getting long: consider wrapping up
Short throttle window: if you deleted marker files in .context/state/, daily-throttled hooks will re-fire
Outdated Claude Code plugin: Update the plugin using Claude Code → /plugin → \"Marketplace\"
ctx version mismatch: Build (or download) and install the latest ctx version.
Symptoms: The agent references outdated information, paths that don't exist, or decisions that were reversed.
Diagnosis:
# Structural drift check\nctx drift\n\n# Full doctor check (includes drift + more)\nctx doctor\n\n# Check when context files were last modified\nctx status --verbose\n
Common causes:
Drift accumulated: stale path references in ARCHITECTURE.md or CONVENTIONS.md. Fix with ctx drift --fix or ask the agent to clean up.
Task backlog: too many completed tasks diluting active context. Archive with ctx task archive or ctx compact --archive.
Large context files: LEARNINGS.md with 40+ entries competes for attention. Consolidate with /ctx-consolidate.
Missing session ceremonies: if /ctx-remember and /ctx-wrap-up aren't being used, context doesn't get refreshed. See Session Ceremonies.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-agent-isnt-following-instructions","level":3,"title":"\"The agent isn't following instructions\"","text":"
Symptoms: The agent ignores conventions, forgets decisions, or acts contrary to CONSTITUTION.md rules.
Diagnosis:
# Check context token size: Is it too large for the model?\nctx doctor --json | jq '.results[] | select(.name == \"context_size\")'\n\n# Check if context is actually being loaded\nctx system events --hook context-load-gate\n
Common causes:
Context too large: if total tokens exceed the model's effective attention, instructions get diluted. Check ctx doctor for the size check. Compact with ctx compact --archive.
Context not loading: if context-load-gate hasn't fired, the agent may not have received context. Verify the hook is registered.
Conflicting instructions: CONVENTIONS.md says one thing, AGENT_PLAYBOOK.md says another. Review both files for consistency.
Agent drift: the agent's behavior diverges from instructions over long sessions. This is normal. Use /ctx-reflect to re-anchor, or start a new session.
Event logging (optional but recommended): event_log: true in .ctxrc
ctx initialized: ctx init
Event logging is not required for ctx doctor or /ctx-doctor to work. Both degrade gracefully: structural checks run regardless, and the skill notes when event data is unavailable.
Start with ctx doctor: It's the fastest way to get a comprehensive health picture. Save event log inspection for when you need to understand when and how often something happened.
Enable event logging early: The log is opt-in and low-cost (~250 bytes per event, 1MB rotation cap). Enable it before you need it: Diagnosing a problem without historical data is much harder.
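Those two numbers imply the active log holds on the order of four thousand events before rotating; a back-of-the-envelope check:

```shell
#!/bin/sh
# Rough capacity of the active event log before rotation:
# 1 MiB cap divided by ~250 bytes per event.
echo $(( 1048576 / 250 ))   # ~4194 events
```

Even a busy multi-session day rarely approaches that, so enabling the log early costs effectively nothing.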
Use the skill for correlation: ctx doctor tells you what is wrong. /ctx-doctor tells you why by correlating structural findings with event patterns. The agent can spot connections that individual commands miss.
Event log is gitignored: It's machine-local diagnostic data, not project context. Different machines produce different event streams.
Auditing System Hooks: the complete hook catalog and webhook-based audit trails
Detecting and Fixing Drift: structural and semantic drift detection and repair
Webhook Notifications: push notifications for hook activity
ctx doctor CLI: full command reference
ctx system events CLI: event log query reference
/ctx-doctor skill: agent-driven diagnosis
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/webhook-notifications/","level":1,"title":"Webhook Notifications","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-problem","level":2,"title":"The Problem","text":"
Your agent runs autonomously (loops, implements, releases) while you are away from the terminal. You have no way to know when it finishes, hits a limit, or when a hook fires a nudge.
How do you get notified about agent activity without watching the terminal?
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tldr","level":2,"title":"TL;DR","text":"
ctx notify setup # configure webhook URL (encrypted)\nctx notify test # verify delivery\n# Hooks auto-notify on: session-end, loop-iteration, resource-danger\n
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx notify setup CLI command Configure and encrypt webhook URL ctx notify test CLI command Send a test notification ctx notify --event <name> \"msg\" CLI command Send a notification from scripts/skills .ctxrcnotify.events Configuration Filter which events reach your webhook","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-1-get-a-webhook-url","level":3,"title":"Step 1: Get a Webhook URL","text":"
Any service that accepts HTTP POST with JSON works. Common options:
Service How to get a URL IFTTT Create an applet with the \"Webhooks\" trigger Slack Create an Incoming Webhook Discord Channel Settings > Integrations > Webhooks ntfy.sh Use https://ntfy.sh/your-topic (no signup) Pushover Use API endpoint with your user key
The URL contains auth tokens. ctx encrypts it; it never appears in plaintext in your repo.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-2-configure-the-webhook","level":3,"title":"Step 2: Configure the Webhook","text":"
This encrypts the URL with AES-256-GCM using the same key as the scratchpad (~/.ctx/.ctx.key). The encrypted file (.context/.notify.enc) is safe to commit. The key lives outside the project and is never committed.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-3-test-it","level":3,"title":"Step 3: Test It","text":"
If you see No webhook configured, run ctx notify setup first.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-4-configure-events","level":3,"title":"Step 4: Configure Events","text":"
Notifications are opt-in: no events are sent unless you configure an event list in .ctxrc:
# .ctxrc\nnotify:\n events:\n - loop # loop completion or max-iteration hit\n - nudge # VERBATIM relay hooks (context checkpoint, persistence, etc.)\n - relay # all hook output (verbose, for debugging)\n - heartbeat # every-prompt session-alive signal with metadata\n
Only listed events fire. Omitting an event silently drops it.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-5-use-in-your-own-skills","level":3,"title":"Step 5: Use in Your Own Skills","text":"
Add ctx notify calls to any skill or script:
# In a release skill\nctx notify --event release \"v1.2.0 released successfully\" 2>/dev/null || true\n\n# In a backup script\nctx notify --event backup \"Nightly backup completed\" 2>/dev/null || true\n
The 2>/dev/null || true suffix ensures the notification never breaks your script: If there's no webhook or the HTTP call fails, it's a silent no-op.
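The fail-safe pattern generalizes to any optional side effect in a script. A standalone demonstration with a deliberately missing command (no ctx required):

```shell
#!/bin/sh
# Simulate a notification step whose command is absent or failing.
# stderr is suppressed; "|| true" swallows the nonzero exit status.
this-command-does-not-exist --event demo "hello" 2>/dev/null || true

# Execution continues regardless of the notification outcome.
echo "script continues"
```

Running it prints only `script continues` and exits 0, which is exactly the behavior you want from best-effort notifications in CI or unattended loops.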
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-types","level":2,"title":"Event Types","text":"
ctx fires these events automatically:
Event Source When loop Loop script Loop completes or hits max iterations nudge System hooks VERBATIM relay nudge is emitted (context checkpoint, persistence, ceremonies, journal, resources, knowledge, version) relay System hooks Any hook output (VERBATIM relays, agent directives, block responses) heartbeat System hook Every prompt: session-alive signal with prompt count and context modification status testctx notify test Manual test notification (custom) Your skills You wire ctx notify --event <name> in your own scripts
nudge vs relay: The nudge event fires only for VERBATIM relay hooks (the ones the agent is instructed to show verbatim). The relay event fires for all hook output: VERBATIM relays, agent directives, and hard gates. Subscribe to relay for debugging (\"did the agent get the post-commit nudge?\"), nudge for user-facing assurance (\"was the checkpoint emitted?\").
Webhooks as a Hook Audit Trail
Subscribe to relay events and you get an external record of every hook that fires, independent of the agent.
This lets you verify hooks are running and catch cases where the agent absorbs a nudge instead of surfacing it.
See Auditing System Hooks for the full workflow.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#payload-format","level":2,"title":"Payload Format","text":"
The detail field is a structured template reference containing the hook name, variant, and any template variables. This lets receivers filter by hook or variant without parsing rendered text. The field is omitted when no template reference applies (e.g. custom ctx notify calls).
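As an illustration, a relayed hook event might look like the following. The detail.hook and variant fields are described above; the other key names (event, message, vars) are assumptions for illustration, not a confirmed schema:

```json
{
  "event": "nudge",
  "message": "Context checkpoint: consider wrapping up this session",
  "detail": {
    "hook": "context-checkpoint",
    "variant": "gentle",
    "vars": { "prompt_count": "17" }
  }
}
```

A receiver can route on detail.hook without ever parsing the rendered message text.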
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#heartbeat-payload","level":3,"title":"Heartbeat Payload","text":"
The heartbeat event fires on every prompt with session metadata and token usage telemetry:
The tokens, context_window, and usage_pct fields are included when token data is available from the session JSONL file. They are omitted when no usage data has been recorded yet (e.g. first prompt).
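A heartbeat payload might therefore look like this. Only tokens, context_window, and usage_pct are field names documented above; the remaining keys are illustrative assumptions based on the described metadata (prompt count, context modification status):

```json
{
  "event": "heartbeat",
  "prompt_count": 12,
  "context_modified": false,
  "tokens": 48200,
  "context_window": 200000,
  "usage_pct": 24.1
}
```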
Unlike other events, heartbeat fires every prompt (not throttled). Use it for observability dashboards or liveness monitoring of long-running sessions.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#security-model","level":2,"title":"Security Model","text":"Component Location Committed? Permissions Encryption key ~/.ctx/.ctx.key No (user-level) 0600 Encrypted URL .context/.notify.enc Yes (safe) 0600 Webhook URL Never on disk in plaintext N/A N/A
The key is shared with the scratchpad. If you rotate the encryption key, re-run ctx notify setup to re-encrypt the webhook URL with the new key.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#key-rotation","level":2,"title":"Key Rotation","text":"
ctx checks the age of the encryption key once per day. If it's older than 90 days (configurable via key_rotation_days), a VERBATIM nudge is emitted suggesting rotation.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#worktrees","level":2,"title":"Worktrees","text":"
The webhook URL is encrypted with the same encryption key (~/.ctx/.ctx.key). Because the key lives at the user level, it is shared across all worktrees on the same machine, so notifications work in worktrees automatically.
If the key is missing or inaccessible to a worktree agent's environment, that agent cannot send webhook alerts. For autonomous runs where worktree agents are opaque, monitor them from the terminal rather than relying on webhooks. Enrich journals and review results on the main branch after merging.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-log-the-local-complement","level":2,"title":"Event Log: The Local Complement","text":"
Don't need a webhook but want diagnostic visibility? Enable event_log: true in .ctxrc. The event log writes the same payload as webhooks to a local JSONL file (.context/state/events.jsonl) that you can query without any external service:
ctx system events --last 20 # recent hook activity\nctx system events --hook qa-reminder # filter by hook\n
Webhooks and event logging are independent: you can use either, both, or neither. Webhooks give you push notifications and an external audit trail. The event log gives you local queryability and ctx doctor integration.
See Troubleshooting for how they work together.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tips","level":2,"title":"Tips","text":"
Fire-and-forget: Notifications never block. HTTP errors are silently ignored. No retry, no response parsing.
No webhook = no cost: When no webhook is configured, ctx notify exits immediately. System hooks that call notify.Send() add zero overhead.
Multiple projects: Each project has its own .notify.enc. You can point different projects at different webhooks.
Event filter is per-project: Configure notify.events in each project's .ctxrc independently.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#next-up","level":2,"title":"Next Up","text":"
Auditing System Hooks →: Verify your hooks are running, audit what they do, and get alerted when they go silent.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx notify: full command reference
Configuration: .ctxrc settings including notify options
Running an Unattended AI Agent: how loops work and how notifications fit in
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: using webhooks as an external audit trail for hook execution
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/","level":1,"title":"When to Use a Team of Agents","text":"","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-problem","level":2,"title":"The Problem","text":"
You have a task, and you are wondering: \"should I throw more agents at it?\"
More agents can mean faster results, but they also mean coordination overhead, merge conflicts, divergent mental models, and wasted tokens re-reading context.
The wrong setup costs more than it saves.
This recipe is a decision framework: It helps you choose between a single agent, parallel worktrees, and a full agent team, and explains what ctx provides at each level.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tldr","level":2,"title":"TL;DR","text":"
Single agent for most work;
Parallel worktrees when tasks touch disjoint file sets;
Agent teams only when tasks need real-time coordination. When in doubt, start with one agent.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-spectrum","level":2,"title":"The Spectrum","text":"
There are three modes, ordered by complexity:
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#1-single-agent-default","level":3,"title":"1. Single Agent (Default)","text":"
One agent, one session, one branch. This is correct for most work.
Use this when:
The task has linear dependencies (step 2 needs step 1's output);
Changes touch overlapping files;
You need tight feedback loops (review each change before the next);
The task requires deep understanding of a single area;
Total effort is less than a few hours of agent time.
ctx provides: Full .context/: tasks, decisions, learnings, conventions, all in one session.
The agent builds a coherent mental model and persists it as it goes.
Example tasks: Bug fixes, feature implementation, refactoring a module, writing documentation for one area, debugging.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#2-parallel-worktrees-independent-tracks","level":3,"title":"2. Parallel Worktrees (Independent Tracks)","text":"
2-4 agents, each in a separate git worktree on its own branch, working on non-overlapping parts of the codebase.
Use this when:
You have 5+ independent tasks in the backlog;
Tasks group cleanly by directory or package;
File overlap between groups is zero or near-zero;
Each track can be completed and merged independently;
You want parallelism without coordination complexity.
ctx provides: Shared .context/ via git (each worktree sees the same tasks, decisions, conventions). /ctx-worktree skill for setup and teardown. TASKS.md as a lightweight work queue.
Example tasks: Docs + new package + test coverage (three tracks that don't touch the same files). Parallel recipe writing. Independent module development.
See: Parallel Agent Development with Git Worktrees
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#3-agent-team-coordinated-swarm","level":3,"title":"3. Agent Team (Coordinated Swarm)","text":"
Multiple agents communicating via messages, sharing a task list, with a lead agent coordinating. Claude Code's team/swarm feature.
Use this when:
Tasks have dependencies but can still partially overlap;
You need research and implementation happening simultaneously;
The work requires different roles (researcher, implementer, tester);
A lead agent needs to review and integrate others' work;
The task is large enough that coordination cost is justified.
ctx provides: .context/ as shared state that all agents can read. Task tracking for work assignment. Decisions and learnings as team memory that survives individual agent turnover.
Example tasks: Large refactor across modules where a lead reviews merges. Research and implementation where one agent explores options while another builds. Multi-file feature that needs integration testing after parallel implementation.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-decision-framework","level":2,"title":"The Decision Framework","text":"
Ask these questions in order:
Can one agent do this in a reasonable time?\n YES → Single agent. Stop here.\n NO ↓\n\nCan the work be split into non-overlapping file sets?\n YES → Parallel worktrees (2-4 tracks)\n NO ↓\n\nDo the subtasks need to communicate during execution?\n YES → Agent team with lead coordination\n NO → Parallel worktrees with a merge step\n
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-file-overlap-test","level":3,"title":"The File Overlap Test","text":"
This is the critical decision point. Before choosing multi-agent, list the files each subtask would touch. If two subtasks modify the same file, they belong in the same track (or the same single-agent session).
You: \"I want to parallelize these tasks. Which files would each one touch?\"\n\nAgent: [reads `TASKS.md`, analyzes codebase]\n \"Task A touches internal/config/ and internal/cli/initialize/\n Task B touches docs/ and site/\n Task C touches internal/config/ and internal/cli/status/\n\n Tasks A and C overlap on internal/config/: they should be\n in the same track. Task B is independent.\"\n
When in doubt, keep things in one track. A merge conflict in a critical file costs more time than the parallelism saves.
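The overlap check itself is mechanical once you have the file lists. In a real repository they would come from git diff --name-only <base>...<branch>; the sketch below uses hypothetical hardcoded lists so it runs standalone:

```shell
#!/bin/sh
# Hypothetical per-task file lists (in practice: git diff --name-only main...track).
printf '%s\n' internal/config/config.go internal/cli/initialize/init.go | sort > task_a.txt
printf '%s\n' internal/config/config.go internal/cli/status/status.go   | sort > task_b.txt

# comm -12 prints lines common to both sorted inputs: the conflicting files.
overlap=$(comm -12 task_a.txt task_b.txt)

if [ -n "$overlap" ]; then
  echo "overlap detected: $overlap"   # same track or same single-agent session
else
  echo "no overlap: safe to parallelize"
fi
rm -f task_a.txt task_b.txt
```

Any non-empty overlap means the two tasks belong in the same track.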
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#when-teams-make-things-worse","level":2,"title":"When Teams Make Things Worse","text":"
\"More agents\" is not always better. Watch for these patterns:
Merge hell: If you are spending more time resolving conflicts than the parallel work saved, you split wrong: Re-group by file overlap.
Context divergence: Each agent builds its own mental model. After 30 minutes of independent work, agent A might make assumptions that contradict agent B's approach. Shorter tracks with frequent merges reduce this.
Coordination theater: A lead agent spending most of its time assigning tasks, checking status, and sending messages instead of doing work. If the task list is clear enough, worktrees with no communication are cheaper.
Re-reading overhead: Every agent reads .context/ on startup. A team of 4 agents each reading 4000 tokens of context = 16000 tokens before anyone does any work. For small tasks, that overhead dominates.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#what-ctx-gives-you-at-each-level","level":2,"title":"What ctx Gives You at Each Level","text":"ctx Feature Single Agent Worktrees Team .context/ files Full access Shared via git Shared via filesystem TASKS.md Work queue Split by track Assigned by lead Decisions/Learnings Persisted in session Persisted per branch Persisted by any agent /ctx-next Picks next task Picks within track Lead assigns /ctx-worktree N/A Setup + teardown Optional /ctx-commit Normal commits Per-branch commits Per-agent commits","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#team-composition-recipes","level":2,"title":"Team Composition Recipes","text":"
Four practical team compositions for common workflows.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#feature-development-3-agents","level":3,"title":"Feature Development (3 agents)","text":"Role Responsibility Architect Writes spec in specs/, breaks work into TASKS.md phases Implementer Picks tasks from TASKS.md, writes code, marks [x] done Reviewer Runs tests, ctx drift, lint; files issues as new tasks
Coordination: TASKS.md checkboxes. Architect writes tasks before implementer starts. Reviewer runs after each implementer commit.
Anti-pattern: All three agents editing the same file simultaneously. Sequence the work so only one agent touches a file at a time.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#consolidation-sprint-3-4-agents","level":3,"title":"Consolidation Sprint (3-4 agents)","text":"Role Responsibility Auditor Runs ctx drift, identifies stale paths and broken refs Code Fixer Updates source code to match context (or vice versa) Doc Writer Updates ARCHITECTURE.md, CONVENTIONS.md, and docs/ Test Fixer (Optional) Fixes tests broken by the fixer's changes
Coordination: Auditor's ctx drift output is the shared work queue. Each agent claims a subset of issues by adding #in-progress labels.
Anti-pattern: Fixer and doc writer both editing ARCHITECTURE.md. Assign file ownership explicitly.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#release-prep-2-agents","level":3,"title":"Release Prep (2 agents)","text":"Role Responsibility Release Notes Generates changelog from commits, writes release notes Validation Runs full test suite, lint, build across platforms
Coordination: Both read TASKS.md to identify what shipped. Release notes agent works from git log; validation agent works from make audit.
Anti-pattern: Release notes agent running tests \"to verify.\" Each agent stays in its lane.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#documentation-sprint-3-agents","level":3,"title":"Documentation Sprint (3 agents)","text":"Role Responsibility Content Writes new pages, expands existing docs Cross-linker Adds nav entries, cross-references, \"See Also\" sections Verifier Builds site, checks broken links, validates rendering
Coordination: Content agent writes files first. Cross-linker updates zensical.toml and index pages after content lands. Verifier builds after each batch.
Anti-pattern: Content and cross-linker both editing zensical.toml. Batch nav updates into the cross-linker's pass.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tips","level":2,"title":"Tips","text":"
Start with one agent: Only add parallelism when you have identified the bottleneck. \"This would go faster with more agents\" is usually wrong for tasks under 2 hours.
The 3-4 agent ceiling is real: Coordination overhead grows quadratically. 2 agents = 1 communication pair. 4 agents = 6 pairs. Beyond 4, you are managing agents more than doing work.
Worktrees > teams for most parallelism needs: If agents don't need to talk to each other during execution, worktrees give you parallelism with zero coordination overhead.
Use ctx as the shared brain: Whether it's one agent or four, the .context/ directory is the single source of truth. Decisions go in DECISIONS.md, not in chat messages between agents.
Merge early, merge often: Long-lived parallel branches diverge. Merge a track as soon as it's done rather than waiting for all tracks to finish.
TASKS.md conflicts are normal: Multiple agents completing different tasks will conflict on merge. The resolution is always additive: accept all [x] completions from both sides.
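The quadratic coordination overhead from the tips above is just the pair count n·(n-1)/2. A minimal, illustrative sketch (not part of ctx):

```go
package main

import "fmt"

// pairs returns the number of communication pairs among n agents,
// i.e. n choose 2 - the quadratic coordination overhead described
// in the tips above.
func pairs(n int) int { return n * (n - 1) / 2 }

func main() {
	for _, n := range []int{2, 3, 4, 5} {
		fmt.Printf("%d agents -> %d pairs\n", n, pairs(n))
	}
}
```

Two agents produce one pair, four agents produce six: each agent added past the second roughly doubles the coordination surface.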
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Run multiple agents on independent task tracks using git worktrees.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#go-deeper","level":2,"title":"Go Deeper","text":"
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
Session Journal: browse and search session history
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#see-also","level":2,"title":"See Also","text":"
Parallel Agent Development with Git Worktrees: the mechanical \"how\" for worktree-based parallelism
Running an Unattended AI Agent: serial autonomous loops: a different scaling strategy
Tracking Work Across Sessions: managing the task backlog that feeds into any multi-agent setup
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"reference/","level":1,"title":"Reference","text":"
Technical reference for ctx commands, skills, and internals.
","path":["Reference"],"tags":[]},{"location":"reference/#the-system-explains-itself","level":3,"title":"The System Explains Itself","text":"
The 12 properties that must hold for any valid ctx implementation. Not features: constraints. The system's contract with its users and contributors.
","path":["Reference"],"tags":[]},{"location":"reference/audit-conventions/","level":1,"title":"Audit Conventions: Common Patterns and Fixes","text":"
This guide documents the code conventions enforced by internal/audit/ AST tests. Each section shows the violation pattern, the fix, and the rationale. When a test fails, find the matching section below.
All tests skip _test.go files. The patterns apply only to production code under internal/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#variable-shadowing-bare-err-reuse","level":2,"title":"Variable Shadowing (bare err := reuse)","text":"
Test: TestNoVariableShadowing
When a function has multiple := assignments to err, each shadows the previous one. This makes it impossible to tell which error a later if err != nil is checking.
Rule: Use descriptive error names (readErr, writeErr, parseErr, walkErr, absErr, relErr) so each error site is independently identifiable.
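A minimal sketch of the rule; readConfig and parseConfig are hypothetical stand-ins for two independent operations that can each fail:

```go
package main

import (
	"errors"
	"fmt"
)

// errEmpty and the helpers below are hypothetical stand-ins for
// two independent operations that can each fail.
var errEmpty = errors.New("empty config")

func readConfig() (string, error) { return "key: value", nil }

func parseConfig(s string) error {
	if s == "" {
		return errEmpty
	}
	return nil
}

// load names each error site separately, so a later nil check can
// never silently refer to the wrong error - which is exactly what
// bare `err :=` reuse makes possible.
func load() error {
	data, readErr := readConfig()
	if readErr != nil {
		return readErr
	}
	if parseErr := parseConfig(data); parseErr != nil {
		return parseErr
	}
	return nil
}

func main() {
	fmt.Println(load() == nil)
}
```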
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#import-name-shadowing","level":2,"title":"Import Name Shadowing","text":"
Test: TestNoImportNameShadowing
When a local variable has the same name as an imported package, the import becomes inaccessible in that scope.
Before:
import \"github.com/ActiveMemory/ctx/internal/session\"\n\nfunc process(session *entity.Session) { // param shadows import\n // session package is now unreachable here\n}\n
After:
import \"github.com/ActiveMemory/ctx/internal/session\"\n\nfunc process(sess *entity.Session) {\n // session package still accessible\n}\n
Rule: Parameters, variables, and return values must not reuse imported package names. Common renames: session -> sess, token -> tok, config -> cfg, entry -> ent.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#magic-strings","level":2,"title":"Magic Strings","text":"
Test: TestNoMagicStrings
String literals in function bodies are invisible to refactoring tools and cause silent breakage when the value changes in one place but not another.
Before (string literals):
func loadContext() {\n data := filepath.Join(dir, \"TASKS.md\")\n if strings.HasSuffix(name, \".yaml\") {\n // ...\n }\n}\n
After:
func loadContext() {\n data := filepath.Join(dir, config.FilenameTask)\n if strings.HasSuffix(name, config.ExtYAML) {\n // ...\n }\n}\n
func EntryHash(text string) string {\n h := sha256.Sum256([]byte(text))\n return hex.EncodeToString(h[:cfgFmt.HashPrefixLen])\n}\n
Before (URL schemes — also caught):
if strings.HasPrefix(target, \"https://\") ||\n strings.HasPrefix(target, \"http://\") {\n return target\n}\n
After:
if strings.HasPrefix(target, cfgHTTP.PrefixHTTPS) ||\n strings.HasPrefix(target, cfgHTTP.PrefixHTTP) {\n return target\n}\n
Exempt from this check:
Empty string \"\", single space \" \", indentation strings
Regex capture references ($1, ${name})
const and var definition sites (that's where constants live)
Struct tags
Import paths
Packages under internal/config/, internal/assets/tpl/
Rule: If a string is used for comparison, path construction, or appears in 3+ files, it belongs in internal/config/ as a constant. Format strings belong in internal/config/ as named constants (e.g., cfgGit.FlagLastN, cfgTrace.RefFormat). User-facing prose belongs in internal/assets/ YAML files accessed via desc.Text().
Common fix for fmt.Sprintf with format verbs:
Pattern -> Fix: fmt.Sprintf(\"%d\", n) -> strconv.Itoa(n); fmt.Sprintf(\"%d\", int64Val) -> strconv.FormatInt(int64Val, 10); fmt.Sprintf(\"%x\", bytes) -> hex.EncodeToString(bytes); fmt.Sprintf(\"%q\", s) -> strconv.Quote(s); fmt.Sscanf(s, \"%d\", &n) -> strconv.Atoi(s); fmt.Sprintf(\"-%d\", n) -> fmt.Sprintf(cfgGit.FlagLastN, n); \"https://\" -> cfgHTTP.PrefixHTTPS; \"<\" -> config constant in config/html/","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#direct-printf-calls","level":2,"title":"Direct Printf Calls","text":"
Test: TestNoPrintfCalls
cmd.Printf and cmd.PrintErrf bypass the write-package formatting pipeline and scatter user-facing text across the codebase.
Rule: All formatted output goes through internal/write/ which uses cmd.Print/cmd.Println with pre-formatted strings from desc.Text().
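The shape of the fix can be sketched with plain fmt; the helper name below is illustrative, not the real internal/write API:

```go
package main

import "fmt"

// statusLine mimics the write-package pattern: formatting happens in
// one helper, and callers print only pre-formatted strings. The name
// is illustrative, not the real internal/write API.
func statusLine(name string, count int) string {
	return fmt.Sprintf("%s: %d entries", name, count)
}

func main() {
	// Instead of scattering cmd.Printf calls across commands, print
	// the string produced by the single formatting helper.
	fmt.Println(statusLine("TASKS.md", 12))
}
```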
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#raw-time-format-strings","level":2,"title":"Raw Time Format Strings","text":"
Test: TestNoRawTimeFormats
Inline time format strings (\"2006-01-02\", \"15:04:05\") drift when one call site is updated but others are missed.
Rule: All time format strings must use constants from internal/config/time/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#direct-flag-registration","level":2,"title":"Direct Flag Registration","text":"
Test: TestNoFlagBindOutsideFlagbind
Direct cobra flag calls (.Flags().StringVar(), etc.) scatter flag wiring across dozens of cmd.go files. Centralizing through internal/flagbind/ gives one place to audit flag names, defaults, and description key lookups.
Before:
func Cmd() *cobra.Command {\n var output string\n c := &cobra.Command{Use: cmd.UseStatus}\n c.Flags().StringVarP(&output, \"output\", \"o\", \"\",\n \"output format\")\n return c\n}\n
After:
func Cmd() *cobra.Command {\n var output string\n c := &cobra.Command{Use: cmd.UseStatus}\n flagbind.StringFlagShort(c, &output, flag.Output,\n flag.OutputShort, cmd.DescKeyOutput)\n return c\n}\n
Rule: All flag registration goes through internal/flagbind/. If the helper you need doesn't exist, add it to flagbind/flag.go before using it.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#todo-comments","level":2,"title":"TODO Comments","text":"
Test: TestNoTODOComments
TODO, FIXME, HACK, and XXX comments in production code are invisible to project tracking. They accumulate silently and never get addressed.
Remove the comment and add a task to .context/TASKS.md:
- [ ] Handle pagination in listEntries (internal/task/task.go)\n
Rule: Deferred work lives in TASKS.md, not in source comments.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#dead-exports","level":2,"title":"Dead Exports","text":"
Test: TestNoDeadExports
Exported symbols with zero references outside their definition file are dead weight. They increase API surface, confuse contributors, and cost maintenance.
Fix: Either delete the export (preferred) or demote it to unexported if it's still used within the file.
If the symbol existed for historical reasons and might be needed again, move it to quarantine/deadcode/ with a .dead extension. This preserves the code in git without polluting the live codebase:
// Dead exports quarantined from internal/config/flag/flag.go\n// Quarantined: 2026-04-02\n// Restore from git history if needed.\n
Rule: If a test-only allowlist entry is needed (the export exists only for test use), add the fully qualified symbol to testOnlyExports in dead_exports_test.go. Keep this list small — prefer eliminating the export.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#core-package-structure","level":2,"title":"Core Package Structure","text":"
Test: TestCoreStructure
core/ directories under internal/cli/ must contain only doc.go and test files at the top level. All domain logic lives in subpackages. This prevents core/ from becoming a god package.
Rule: Extract each logical unit into its own subpackage under core/. Each subpackage gets a doc.go. The subpackage name should match the domain concept (golang, check, fix, store), not a generic label (util, helper).
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#cross-package-types","level":2,"title":"Cross-Package Types","text":"
Test: TestCrossPackageTypes
When a type defined in one package is used from a different module (e.g., cli/doctor importing a type from cli/notify), the type has crossed its module boundary. Cross-cutting types belong in internal/entity/ for discoverability.
Exempt: Types inside entity/, proto/, core/ subpackages, and config/ packages. Same-module usage (e.g., cli/doctor/cmd/ using cli/doctor/core/) is not flagged.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#type-file-convention","level":2,"title":"Type File Convention","text":"
Exported types in core/ subpackages should live in types.go (the convention from CONVENTIONS.md), not scattered across implementation files. This makes type definitions discoverable. TestTypeFileConventionReport generates a diagnostic summary of all type placements for triage.
Exception: entity/ organizes by domain (task.go, session.go), proto/ uses schema.go, and err/ packages colocate error types with their domain context.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#desckey-yaml-linkage","level":2,"title":"DescKey / YAML Linkage","text":"
Test: TestDescKeyYAMLLinkage
Every DescKey constant must have a corresponding key in the YAML asset files, and every YAML key must have a corresponding DescKey constant. Orphans in either direction mean dead text or runtime panics.
Fix for orphan YAML key: Delete the YAML entry, or add the corresponding DescKey constant in config/embed/{text,cmd,flag}/.
Fix for orphan DescKey: Delete the constant, or add the corresponding entry in the YAML file under internal/assets/commands/text/, cmd/, or flag/.
If the orphan YAML entry was once valid but the feature was removed, move the YAML entry to a .dead file in quarantine/deadcode/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#package-doc-quality","level":2,"title":"Package Doc Quality","text":"
Test: TestPackageDocQuality
Every package under internal/ must have a doc.go with a meaningful package doc comment (at least 8 lines of real content). One-liners and file-list patterns (// - foo.go, // Source files:) are flagged because they drift as files change.
Template:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\n// Package mypackage does X.\n//\n// It handles Y by doing Z. The main entry point is [FunctionName]\n// which accepts A and returns B.\n//\n// Configuration is read from [config.SomeConstant]. Output is\n// written through [write.SomeHelper].\n//\n// This package is used by [parentpackage] during the W lifecycle\n// phase.\npackage mypackage\n
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#inline-regex-compilation","level":2,"title":"Inline Regex Compilation","text":"
Test: TestNoInlineRegexpCompile
regexp.MustCompile and regexp.Compile inside function bodies recompile the pattern on every call. Compiled patterns belong at package level.
Before:
func parse(s string) bool {\n re := regexp.MustCompile(`\\d{4}-\\d{2}-\\d{2}`)\n return re.MatchString(s)\n}\n
After:
// In internal/config/regex/regex.go:\n// DatePattern matches ISO date format (YYYY-MM-DD).\nvar DatePattern = regexp.MustCompile(`\\d{4}-\\d{2}-\\d{2}`)\n\n// In calling package:\nfunc parse(s string) bool {\n return regex.DatePattern.MatchString(s)\n}\n
Rule: All compiled regexes live in internal/config/regex/ as package-level var declarations. Two tests enforce this: TestNoInlineRegexpCompile catches function-body compilation, and TestNoRegexpOutsideRegexPkg catches package-level compilation outside config/regex/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#doc-comments","level":2,"title":"Doc Comments","text":"
Test: TestDocComments
All functions (exported and unexported), structs, and package-level variables must have a doc comment. Config packages allow group doc comments for const blocks.
// buildIndex maps entry names to their position in the\n// ordered slice for O(1) lookup during reconciliation.\n//\n// Parameters:\n// - entries: ordered slice of entries to index\n//\n// Returns:\n// - map[string]int: name-to-position mapping\nfunc buildIndex(entries []Entry) map[string]int {\n
Rule: Every function, struct, and package-level var gets a doc comment in godoc format. Functions include Parameters: and Returns: sections. Structs with 2+ fields document every field. See CONVENTIONS.md for the full template.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#line-length","level":2,"title":"Line Length","text":"
Test: TestLineLength
Lines in non-test Go files must not exceed 80 characters. This is a hard check, not a suggestion.
Rule: Break at natural points: function arguments, struct fields, chained calls. Long strings (URLs, struct tags) are the rare acceptable exception.
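Breaking at argument boundaries looks like this; render and its arguments are hypothetical, the point is the call-site layout:

```go
package main

import "fmt"

// render is a hypothetical helper; the point is the call-site layout.
func render(title, body, footer string) string {
	return title + "\n" + body + "\n" + footer
}

func main() {
	// A call that would overflow 80 columns on one line is broken at
	// argument boundaries instead, one argument per line.
	page := render(
		"Audit Conventions",
		"Each section shows the violation and the fix.",
		"See CONVENTIONS.md",
	)
	fmt.Println(len(page) > 0)
}
```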
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#literal-whitespace","level":2,"title":"Literal Whitespace","text":"
Test: TestNoLiteralWhitespace
Bare whitespace string and byte literals (\"\\n\", \"\\r\\n\", \"\\t\") must not appear outside internal/config/token/. All other packages use the token constants.
Before:
output := strings.Join(lines, \"\\n\")\n
After:
output := strings.Join(lines, token.Newline)\n
Rule: Whitespace literals are defined once in internal/config/token/. Use token.Newline, token.Tab, token.CRLF, etc.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#magic-numeric-values","level":2,"title":"Magic Numeric Values","text":"
Test: TestNoMagicValues
Numeric literals in function bodies need constants, with narrow exceptions.
Before:
if len(entries) > 100 {\n entries = entries[:100]\n}\n
After:
if len(entries) > config.MaxEntries {\n entries = entries[:config.MaxEntries]\n}\n
Exempt: 0, 1, -1, 2–10, strconv radix/bitsize args (10, 32, 64 in strconv.Parse*/Format*), octal permissions (caught separately by TestNoRawPermissions), and const/var definition sites.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#inline-separators","level":2,"title":"Inline Separators","text":"
Test: TestNoInlineSeparators
strings.Join calls must use token constants for their separator argument, not string literals.
Before:
result := strings.Join(parts, \", \")\n
After:
result := strings.Join(parts, token.CommaSep)\n
Rule: Separator strings live in internal/config/token/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#stuttery-function-names","level":2,"title":"Stuttery Function Names","text":"
Test: TestNoStutteryFunctions
Function names must not redundantly include their package name as a PascalCase word boundary. Go callers already write pkg.Function, so pkg.PkgFunction stutters.
Before:
// In package write\nfunc WriteJournal(cmd *cobra.Command, ...) {\n
After:
// In package write\nfunc Journal(cmd *cobra.Command, ...) {\n
Exempt: Identity functions like write.Write / write.write.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#mixed-visibility","level":2,"title":"Mixed Visibility","text":"
Test: TestNoMixedVisibility
Files with exported functions must not also contain unexported functions. Public API and private helpers live in separate files.
Exempt: Files with exactly one function, doc.go, test files.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#stray-errgo-files","level":2,"title":"Stray err.go Files","text":"
Test: TestNoStrayErrFiles
err.go files must only exist under internal/err/. Error constructors anywhere else create a broken-window pattern where contributors add local error definitions when they see a local err.go.
Fix: Move the error constructor to internal/err/<domain>/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#cli-cmd-structure","level":2,"title":"CLI Cmd Structure","text":"
Test: TestCLICmdStructure
Each cmd/$sub/ directory under internal/cli/ may contain only cmd.go, run.go, doc.go, and test files. Extra .go files (helpers, output formatters, types) belong in the corresponding core/ subpackage.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#desckey-namespace","level":2,"title":"DescKey Namespace","text":"
Three tests enforce DescKey/Use constant discipline:
Use* constants appear only in cobra Use: struct field assignments — never as arguments to desc.Text() or elsewhere.
DescKey* constants are passed only to assets.CommandDesc(), assets.FlagDesc(), or desc.Text() — never to cobra Use:.
No cross-namespace lookups — TextDescKey must not be passed to CommandDesc(), FlagDescKey must not be passed to Text(), etc.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#yaml-examples-registry-linkage","level":2,"title":"YAML Examples / Registry Linkage","text":"
Every key in examples.yaml and registry.yaml must match a known entry type constant. Prevents orphan entries that are never rendered.
Fix: Delete the orphan YAML entry, or add the corresponding constant in config/entry/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#other-enforced-patterns","level":2,"title":"Other Enforced Patterns","text":"
These tests follow the same fix approach — extract the operation to its designated package:
TestNoNakedErrors: fmt.Errorf/errors.New outside internal/err/ -> add error constructor to internal/err/<domain>/; TestNoRawFileIO: direct os.ReadFile, os.Create, etc. -> use io.SafeReadFile, io.SafeWriteFile, etc.; TestNoRawLogging: direct fmt.Fprintf(os.Stderr, ...) -> use log/warn.Warn() or log/event.Append(); TestNoExecOutsideExecPkg: exec.Command outside internal/exec/ -> add command to internal/exec/<domain>/; TestNoCmdPrintOutsideWrite: cmd.Print* outside internal/write/ -> add output helper to internal/write/<domain>/; TestNoRawPermissions: octal literals (0644, 0755) -> use config/fs.PermFile, config/fs.PermExec, etc.; TestNoErrorsAs: errors.As() -> use errors.AsType() (generic, Go 1.23+); TestNoStringConcatPaths: dir + \"/\" + file -> use filepath.Join(dir, file)","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#general-fix-workflow","level":2,"title":"General Fix Workflow","text":"
When an audit test fails:
Read the error message. It includes file:line and a description of the violation.
Find the matching section above. The test name maps directly to a section.
Apply the pattern. Most fixes are mechanical: extract to the right package, rename a variable, or replace a literal with a constant.
Run make test before committing. Audit tests run as part of go test ./internal/audit/.
Don't add allowlist entries as a first resort. Fix the code. Allowlists exist only for genuinely unfixable cases (test-only exports, config packages that are definitionally exempt).
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/comparison/","level":1,"title":"Tool Ecosystem","text":"","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#high-level-mental-model","level":2,"title":"High-Level Mental Model","text":"
Many tools help AI think.
ctx helps AI remember.
Not by storing thoughts,
but by preserving intent.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#how-ctx-differs-from-similar-tools","level":2,"title":"How ctx Differs from Similar Tools","text":"
There are many tools in the AI ecosystem that touch parts of the context problem:
Some manage prompts.
Some retrieve data.
Some provide runtime context objects.
Some offer enterprise platforms.
ctx focuses on a different layer entirely.
This page explains where ctx fits, and where it intentionally does not.
That single difference (context lives in durable files, not in ephemeral prompts) explains nearly all of ctx's design choices.
Question / Most tools / ctx: Where does context live? In prompts or APIs vs. in files. How long does it last? One request or one session vs. across time. Who can read it? The model vs. humans and tools. How is it updated? Implicitly vs. explicitly. Is it inspectable? Rarely vs. always.","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#prompt-management-tools","level":2,"title":"Prompt Management Tools","text":"
Examples include:
prompt templates;
reusable system prompts;
prompt libraries;
prompt versioning tools.
These tools help you start a session.
They do not help you continue one.
Prompt tools:
inject text at session start;
are ephemeral by design;
do not evolve with the project.
ctx:
persists knowledge over time;
accumulates decisions and learnings;
makes the context part of the repository itself.
Prompt tooling and ctx are complementary, not competing: they operate at different layers.
Users often evaluate ctx against specific tools they already use. These comparisons clarify where responsibilities overlap, where they diverge, and where the tools are genuinely complementary.
Anthropic's auto-memory is tool-managed memory (L2): the model decides what to remember, stores it automatically, and retrieves it implicitly. ctx is system memory (L3): humans and agents explicitly curate decisions, learnings, and tasks in inspectable files.
Auto-memory is convenient - you do not configure anything. But it is also opaque: you cannot see what was stored, edit it precisely, or share it across tools. ctx files are plain Markdown in your repository, visible in diffs and code review.
The two are complementary. ctx can absorb auto-memory as an input source (importing what the model remembered into structured context files) while providing the durable, inspectable layer that auto-memory lacks.
Static rule files (.cursorrules, .claude/rules/) declare conventions: coding style, forbidden patterns, preferred libraries. They are effective for what to do and load automatically at session start.
ctx adds dimensions that rule files do not cover: architectural decisions with rationale, learnings discovered during development, active tasks, and a constitution that governs agent behavior. Critically, ctx context accumulates - each session can add to it, and token budgeting ensures only the most relevant context is injected.
Use rule files for static conventions. Use ctx for evolving project memory.
Aider's --read flag injects file contents at session start; --watch reloads them on change. The concept is similar to ctx's \"load\" step: make the agent aware of specific files.
The differences emerge beyond loading. Aider has no persistence model: nothing the agent learns during a session is written back. There is no token budgeting (large files consume the full context window), no priority ordering across file types, and no structured format for decisions or learnings. ctx provides the full lifecycle: load, accumulate, persist, and budget.
GitHub Copilot's @workspace performs workspace-wide code search. It answers \"what code exists?\" - finding function definitions, usages, and file structure across the repository.
ctx answers a different question: \"what did we decide?\" It stores architectural intent, not code indices. Copilot's workspace search and ctx's project memory are orthogonal; one finds code, the other preserves the reasoning behind it.
Cline's memory bank stores session context within the Cline extension. The motivation is similar to ctx: help the agent remember across sessions.
The key difference is portability. Cline memory is tied to Cline - it does not transfer to Claude Code, Cursor, Aider, or any other tool. ctx is tool-agnostic: context lives in plain files that any editor, agent, or script can read. Switching tools does not mean losing memory.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-a-good-fit","level":2,"title":"When ctx Is a Good Fit","text":"
ctx works best when:
you want AI work to compound over time;
architectural decisions matter;
context must be inspectable;
humans and AI must share the same source of truth;
Git history should include why, not just what.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-not-the-right-tool","level":2,"title":"When ctx Is Not the Right Tool","text":"
ctx is probably not what you want if:
you only need one-off prompts;
you rely exclusively on RAG;
you want autonomous agents without a human-readable state;
You Can't Import Expertise: why project-specific context matters more than generic best practices
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/design-invariants/","level":1,"title":"Invariants","text":"","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#the-system-explains-itself","level":1,"title":"The System Explains Itself","text":"
These are the properties that must hold for any valid ctx implementation.
These are not features.
These are constraints.
A change that violates an invariant is a category error, not an improvement.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#cognitive-state-tiers","level":2,"title":"Cognitive State Tiers","text":"
ctx distinguishes between three forms of state:
Authoritative state: Versioned, inspectable artifacts that define intent and survive time.
Delivery views: Deterministic assemblies of the authoritative state for a specific budget or workflow.
Ephemeral working state: Local, transient, or sensitive data that assists interaction but does not define system truth.
The invariants below apply primarily to the authoritative cognitive state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#1-cognitive-state-is-explicit","level":2,"title":"1. Cognitive State Is Explicit","text":"
All authoritative context lives in artifacts that can be inspected, reviewed, and versioned.
If something is important, it must exist as a file: Not only in a prompt, a chat, or a model's hidden memory.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#2-assembly-is-reproducible","level":2,"title":"2. Assembly Is Reproducible","text":"
Given the same:
repository state,
configuration,
and inputs,
context assembly produces the same result.
Heuristics may rank or filter for delivery under constraints.
They do not alter the authoritative state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#3-the-authoritative-state-is-human-readable","level":2,"title":"3. The Authoritative State Is Human-Readable","text":"
The authoritative cognitive state must be stored in formats that a human can:
read,
diff,
review,
and edit directly.
Sensitive working memory may be encrypted at rest. However, encryption must not become the only representation of authoritative knowledge.
Reasoning, decisions, and outcomes must remain available after the interaction that produced them has ended.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#5-authority-is-user-defined","level":2,"title":"5. Authority Is User-Defined","text":"
What enters the authoritative context is an explicit human decision.
Models may suggest.
Automation may assist.
Selection is never implicit.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#6-operation-is-local-first","level":2,"title":"6. Operation Is Local-First","text":"
The core system must function without requiring network access or a remote service.
External systems may extend ctx.
They must not be required for its operation.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#7-versioning-is-the-memory-model","level":2,"title":"7. Versioning Is the Memory Model","text":"
The evolution of the authoritative cognitive state must be:
preserved,
inspectable,
and branchable.
Ephemeral and sensitive working state may use different retention and diff strategies by design.
Understanding includes understanding how we arrived here.
Authoritative cognitive state must have a defined layout that:
communicates intent,
supports navigation,
and prevents drift.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#9-verification-is-the-scoreboard","level":2,"title":"9. Verification Is the Scoreboard","text":"
Claims without recorded outcomes are noise.
Reality (observed and captured) is the only signal that compounds.
This invariant defines a required direction:
The authoritative state must be able to record expectation and result.
Work that has already produced understanding must not be re-derived from scratch.
Explored paths, rejected options, and validated conclusions are permanent assets.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#11-policies-are-encoded-not-remembered","level":2,"title":"11. Policies Are Encoded, not Remembered","text":"
Alignment must not depend on recall or goodwill.
Constraints that matter must exist in machine-readable form and participate in context assembly.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#12-the-system-explains-itself","level":2,"title":"12. The System Explains Itself","text":"
From the repository state alone it must be possible to determine:
To avoid category errors, ctx does not attempt to be:
a skill,
a prompt management tool,
a chat history viewer,
an autonomous agent runtime,
a vector database,
a hosted memory service.
Such systems may integrate with ctx.
They do not define it.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#implications-for-contributions","level":1,"title":"Implications for Contributions","text":"
Valid contributions:
strengthen an invariant,
reduce the cost of maintaining an invariant,
or extend the system without violating invariants.
Invalid contributions:
introduce hidden authoritative state,
replace reproducible assembly with non-reproducible behavior,
make core operation depend on external services,
reduce human inspectability of authoritative state,
or bypass explicit user authority over what becomes authoritative.
Everything else (commands, skills, layouts, integrations, optimizations) is an implementation detail.
These invariants are the system.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/scratchpad/","level":1,"title":"Scratchpad","text":"","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#what-is-ctx-scratchpad","level":2,"title":"What Is ctx Scratchpad?","text":"
A one-liner scratchpad, encrypted at rest, synced via git.
Quick notes that don't fit decisions, learnings, or tasks: reminders, intermediate values, sensitive tokens, working memory during debugging. Entries are numbered, reorderable, and persist across sessions.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#encrypted-by-default","level":2,"title":"Encrypted by Default","text":"
Scratchpad entries are encrypted with AES-256-GCM before touching the disk.
Component Path Git status Encryption key ~/.ctx/.ctx.key User-level, 0600 permissions Encrypted data .context/scratchpad.enc Committed
The key is generated automatically during ctx init (256-bit via crypto/rand) and stored at ~/.ctx/.ctx.key. One key per machine, shared across all projects.
The ciphertext format is [12-byte nonce][ciphertext+tag]. No external dependencies: Go stdlib only.
Because the key is .gitignored and the data is committed, you get:
At-rest encryption: the .enc file is opaque without the key
Git sync: push/pull the encrypted file like any other tracked file
Key separation: the key never leaves the machine unless you copy it
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#commands","level":2,"title":"Commands","text":"Command Purpose ctx pad List all entries (numbered 1-based) ctx pad show N Output raw text of entry N (no prefix, pipe-friendly) ctx pad add \"text\" Append a new entry ctx pad rm N Remove entry at position N ctx pad edit N \"text\" Replace entry N with new text ctx pad edit N --append \"text\" Append text to the end of entry N ctx pad edit N --prepend \"text\" Prepend text to the beginning of entry N ctx pad add TEXT --file PATH Ingest a file as a blob entry (TEXT is the label) ctx pad show N --out PATH Write decoded blob content to a file ctx pad mv N M Move entry from position N to position M ctx pad resolve Show both sides of a merge conflict for resolution ctx pad import FILE Bulk-import lines from a file (or stdin with -) ctx pad import --blob DIR Import directory files as blob entries ctx pad export [DIR] Export all blob entries to a directory as files ctx pad merge FILE... Merge entries from other scratchpad files into current
All commands decrypt on read, operate on plaintext in memory, and re-encrypt on write. The key file is never printed to stdout.
# Add a note\nctx pad add \"check DNS propagation after deploy\"\n\n# List everything\nctx pad\n# 1. check DNS propagation after deploy\n# 2. staging API key: sk-test-abc123\n\n# Show raw text (for piping)\nctx pad show 2\n# sk-test-abc123\n\n# Compose entries\nctx pad edit 1 --append \"$(ctx pad show 2)\"\n\n# Reorder\nctx pad mv 2 1\n\n# Clean up\nctx pad rm 2\n
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#bulk-import-and-export","level":2,"title":"Bulk Import and Export","text":"
Import lines from a file in bulk (each non-empty line becomes an entry):
# Import from a file\nctx pad import notes.txt\n\n# Import from stdin\ngrep TODO *.go | ctx pad import -\n
Export all blob entries to a directory as files:
# Export to a directory\nctx pad export ./ideas\n\n# Preview without writing\nctx pad export --dry-run\n\n# Overwrite existing files\nctx pad export --force ./backup\n
Combine entries from other scratchpad files into your current pad. Useful when merging work from parallel worktrees, other machines, or teammates:
# Merge from a worktree's encrypted scratchpad\nctx pad merge worktree/.context/scratchpad.enc\n\n# Merge from multiple sources (encrypted and plaintext)\nctx pad merge pad-a.enc notes.md\n\n# Merge a foreign encrypted pad using its key\nctx pad merge --key /other/.ctx.key foreign.enc\n\n# Preview without writing\nctx pad merge --dry-run pad-a.enc pad-b.md\n
Each input file is auto-detected as encrypted or plaintext: decryption is attempted first, and on failure the file is parsed as plain text. Entries are deduplicated by exact content, so running merge twice with the same file is safe.
The scratchpad can store small files (up to 64 KB) as blob entries. Files are base64-encoded and stored with a human-readable label.
# Ingest a file: first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# Listing shows label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n\n# Extract to a file\nctx pad show 2 --out ./recovered.yaml\n\n# Or print decoded content to stdout\nctx pad show 2\n
Blob entries are encrypted identically to text entries. The internal format is label:::base64data; you never need to construct it manually.
Constraint Value Max file size (pre-encoding) 64 KB Storage format label:::base64(content) Display label [BLOB] in listings
When Should You Use Blobs
Blobs are for small files you want encrypted and portable: config snippets, key fragments, deployment manifests, test fixtures. For anything larger than 64 KB, use the filesystem directly.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#using-with-ai","level":2,"title":"Using with AI","text":"
Use Natural Language
As with many ctx features, the scratchpad can also be driven with natural language. You don't have to memorize the CLI commands.
The CLI gives you precision; natural language gives you flow.
The /ctx-pad skill maps natural language to ctx pad commands. You don't need to remember the syntax:
You say What happens \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"show my scratchpad\" ctx pad \"delete the third entry\" ctx pad rm 3 \"update entry 2 to include the new endpoint\" ctx pad edit 2 \"...\" \"move entry 4 to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./backup\" ctx pad export ./backup \"merge the scratchpad from the worktree\" ctx pad merge worktree/.context/scratchpad.enc
The skill handles the translation. You describe what you want in plain English; the agent picks the right command.
The encryption key lives at ~/.ctx/.ctx.key (outside the project directory). Because all worktrees on the same machine share this path, ctx pad works in worktrees automatically - no special setup needed.
For projects where encryption is unnecessary, disable it in .ctxrc:
scratchpad_encrypt: false\n
In plaintext mode:
Entries are stored in .context/scratchpad.md instead of .enc.
No key is generated or required.
All ctx pad commands work identically.
The file is human-readable and diffable.
When Should You Use Plaintext
Plaintext mode is useful for non-sensitive projects, solo work where encryption adds friction, or when you want scratchpad entries visible in git diff.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#when-should-you-use-scratchpad-versus-context-files","level":2,"title":"When Should You Use Scratchpad versus Context Files","text":"Use case Where it goes Temporary reminders (\"check X after deploy\") Scratchpad Working values during debugging Scratchpad Sensitive tokens or API keys (short-term) Scratchpad Quick notes that don't fit anywhere else Scratchpad Items that are not directly relevant to the project Scratchpad Things that you want to keep near, but also hidden Scratchpad Work items with completion tracking TASKS.md Trade-offs with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Rule of thumb:
If it needs structure or will be referenced months later, use a context file (e.g. DECISIONS.md, LEARNINGS.md, TASKS.md).
If it is working memory for the current session or week, use the scratchpad.
Session journals contain sensitive data such as file contents, commands, API keys, internal discussions, error messages with stack traces, and more.
The .context/journal-site/ and .context/journal-obsidian/ directories MUST be .gitignored.
DO NOT host your journal publicly.
DO NOT commit your journal files to version control.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#browse-your-session-history","level":2,"title":"Browse Your Session History","text":"
ctx's Session Journal turns your AI coding sessions into a browsable, searchable, and editable archive.
After using ctx for a couple of sessions, you can generate a journal site with:
# Import all sessions to markdown\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Then open http://localhost:8000 to browse your sessions.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#what-you-get","level":2,"title":"What You Get","text":"
The Session Journal gives you:
Browsable history: Navigate through all your AI sessions by date
Full conversations: See every message, tool use, and result
Token usage: Track how many tokens each session consumed
Search: Find sessions by content, project, or date
Dark mode: Easy on the eyes for late-night archaeology
Each session page includes the following sections:
Section Content Metadata Date, time, duration, model, project, git branch Summary Space for your notes (editable) Tool Usage Which tools were used and how often Conversation Full transcript with timestamps","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#1-import-sessions","level":3,"title":"1. Import Sessions","text":"
# Import all sessions from current project (only new files)\nctx journal import --all\n\n# Import sessions from all projects\nctx journal import --all --all-projects\n\n# Import a specific session by ID (always writes)\nctx journal import abc123\n\n# Preview what would be imported\nctx journal import --all --dry-run\n\n# Re-import existing (regenerates conversation, preserves YAML frontmatter)\nctx journal import --all --regenerate\n\n# Discard frontmatter during regeneration\nctx journal import --all --regenerate --keep-frontmatter=false -y\n
Imported sessions go to .context/journal/ as editable Markdown files.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#2-generate-the-site","level":3,"title":"2. Generate the Site","text":"
# Generate site structure\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#3-browse-and-search","level":3,"title":"3. Browse and Search","text":"
Imported sessions are plain Markdown in .context/journal/. You can:
Add summaries: Fill in the ## Summary section
Add notes: Insert your own commentary anywhere
Highlight key moments: Use Markdown formatting
Delete noise: Remove irrelevant tool outputs
After editing, regenerate the site:
ctx journal site --serve\n
Safe by Default
Running ctx journal import --all only imports new sessions. Existing files are skipped entirely (your edits and enrichments are never touched).
Use --regenerate to re-import existing files. Conversation content is regenerated, but YAML frontmatter (topics, type, outcome, etc.) is preserved. You'll be prompted before any existing files are overwritten; add -y to skip the prompt.
Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags. If you prefer to add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
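The skip, regenerate, and lock rules above amount to a small decision table. A hedged Go sketch of the per-file decision (the struct and action strings are illustrative, not ctx's internals):

```go
package main

import "fmt"

// entry models the flags that decide what import does with one file.
type importState struct {
	exists     bool // file already present in .context/journal/
	locked     bool // locked via ctx journal lock
	regenerate bool // --regenerate passed
}

// action returns what the import stage does, in priority order:
// locks always win, then the safe skip-if-exists default.
func action(s importState) string {
	switch {
	case s.locked:
		return "skip (locked)"
	case s.exists && !s.regenerate:
		return "skip (safe default)"
	case s.exists && s.regenerate:
		return "regenerate conversation, preserve frontmatter"
	default:
		return "import"
	}
}

func main() {
	fmt.Println(action(importState{exists: true}))
	fmt.Println(action(importState{exists: true, regenerate: true}))
	fmt.Println(action(importState{locked: true, exists: true, regenerate: true}))
	fmt.Println(action(importState{}))
}
```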
Claude Code generates \"suggestion\" sessions for auto-complete prompts. These are separated in the index under a \"Suggestions\" section to keep your main session list focused.
Raw imported sessions contain basic metadata (date, time, project) but lack the structured information needed for effective search, filtering, and analysis. Journal enrichment adds semantic metadata that transforms a flat archive into a searchable knowledge base.
Field Required Description title Yes Descriptive title (not the session slug) date Yes Session date (YYYY-MM-DD) type Yes Session type (see below) outcome Yes How the session ended (see below) topics No Subject areas discussed technologies No Languages, databases, frameworks libraries No Specific packages or libraries used key_files No Important files created or modified
Type values:
Type When to use feature Building new functionality bugfix Fixing broken behavior refactor Restructuring without behavior change exploration Research, learning, experimentation debugging Investigating issues documentation Writing docs, comments, README
Outcome values:
Outcome Meaning completed Goal achieved partial Some progress, work continues abandoned Stopped pursuing this approach blocked Waiting on external dependency","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-ctx-journal-enrich","level":3,"title":"Using /ctx-journal-enrich","text":"
The /ctx-journal-enrich skill automates enrichment by analyzing conversation content and proposing metadata.
It extracts decisions, learnings, and tasks mentioned in the session, then shows a diff and asks for confirmation before writing.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#before-and-after","level":3,"title":"Before and After","text":"
Before enrichment:
# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\n[Add your summary of this session]\n\n## Conversation\n...\n
After enrichment:
---\ntitle: \"Add Redis caching to API endpoints\"\ndate: 2026-01-24\ntype: feature\noutcome: completed\ntopics:\n - caching\n - api-performance\ntechnologies:\n - go\n - redis\nkey_files:\n - internal/api/middleware/cache.go\n - internal/cache/redis.go\n---\n\n# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\nImplemented Redis-based caching middleware for frequently accessed API endpoints.\nAdded cache invalidation on writes and configurable TTL per route. Reduced\n the average response time from 200ms to 15ms for cached routes.\n\n## Decisions\n\n* Used Redis over in-memory cache for horizontal scaling\n* Chose per-route TTL configuration over global setting\n\n## Learnings\n\n* Redis WATCH command prevents race conditions during cache invalidation\n\n## Conversation\n...\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#enrichment-and-site-generation","level":3,"title":"Enrichment and Site Generation","text":"
The journal site generator uses enriched metadata for better organization:
Titles appear in navigation instead of slugs
Summaries provide context in the index
Topics enable filtering (when using search)
Types allow grouping by work category
Future improvements will add topic-based navigation and outcome filtering to the generated site.
Use ctx journal site when you want a web-browsable archive with search and dark mode. Use ctx journal obsidian when you want graph view, backlinks, and tag-based navigation inside Obsidian. Both use the same enriched source entries: you can generate both.
The complete journal workflow has four stages. Each is idempotent: safe to re-run, and stages skip already-processed entries.
import → enrich → rebuild (site and/or obsidian)\n
Stage Command / Skill What it does Skips if Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) Enrich /ctx-journal-enrich Adds frontmatter, summaries, topics Frontmatter already present Rebuild ctx journal site --build Generates static HTML site -- Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks --
One-command pipeline
/ctx-journal-enrich-all handles import automatically - it detects unimported sessions and imports them before enriching. You only need to run ctx journal site --build afterward.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-make-journal","level":3,"title":"Using make journal","text":"
If your project includes Makefile.ctx (deployed by ctx init), the first and last stages are combined:
make journal # import + rebuild\n
After it runs, it reminds you to enrich in Claude Code:
Next steps (in Claude Code):\n /ctx-journal-enrich-all # imports if needed + adds metadata per entry\n\nThen re-run: make journal\n
Rendering Issues?
If individual entries have rendering problems (broken fences, malformed lists), check the programmatic normalization in the import pipeline. Most cases are handled automatically during ctx journal import.
# Import, browse, then enrich in Claude Code\nmake journal && make journal-serve\n# Then in Claude Code: /ctx-journal-enrich <session>\n
After a productive session:
# Import just that session and add notes\nctx journal import <session-id>\n# Edit .context/journal/<session>.md\n# Regenerate: ctx journal site\n
Searching across all sessions:
# Use grep on the journal directory\ngrep -r \"authentication\" .context/journal/\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#requirements","level":2,"title":"Requirements","text":"Use pipx for zensical
pip install zensical may install a non-functional stub on system Python, and a manually managed venv has its own pitfalls.
These issues are especially common on macOS.
Use pipx install zensical, which creates an isolated environment and handles Python version management automatically.
The journal site uses zensical for static site generation.
Skills are slash commands that run inside your AI assistant (e.g., /ctx-next), as opposed to CLI commands that run in your terminal (e.g., ctx status).
Skills give your agent structured workflows: It knows what to read, what to run, and when to ask. Most wrap one or more ctx CLI commands with opinionated behavior on top.
Skills Are Best Used Conversationally
The beauty of ctx is that it's designed to be intuitive and conversational, allowing you to interact with your AI assistant naturally. That's why you don't have to memorize many of these skills.
See the Prompting Guide for natural-language triggers that invoke these skills conversationally.
However, when you need more precise control, you can invoke the relevant skills directly.
","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#all-skills","level":2,"title":"All Skills","text":"Skill Description Type /ctx-remember Recall project context and present structured readback user-invocable /ctx-wrap-up End-of-session context persistence ceremony user-invocable /ctx-status Show context summary with interpretation user-invocable /ctx-agent Load full context packet for AI consumption user-invocable /ctx-next Suggest 1-3 concrete next actions with rationale user-invocable /ctx-commit Commit with integrated context persistence user-invocable /ctx-reflect Pause and reflect on session progress user-invocable /ctx-add-task Add actionable task to TASKS.md user-invocable /ctx-add-decision Record architectural decision with rationale user-invocable /ctx-add-learning Record gotchas and lessons learned user-invocable /ctx-add-convention Record coding convention for consistency user-invocable /ctx-archive Archive completed tasks from TASKS.md user-invocable /ctx-pad Manage encrypted scratchpad entries user-invocable /ctx-history Browse and import AI session history user-invocable /ctx-journal-enrich Enrich single journal entry with metadata user-invocable /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich user-invocable /ctx-blog Generate blog post draft from project activity user-invocable /ctx-blog-changelog Generate themed blog post from a commit range user-invocable /ctx-consolidate Consolidate redundant learnings or decisions user-invocable /ctx-drift Detect and fix context drift user-invocable /ctx-prompt Apply, list, and manage saved prompt templates user-invocable /ctx-prompt-audit Analyze prompting patterns for improvement user-invocable /ctx-check-links Audit docs for dead internal and external links user-invocable /ctx-sanitize-permissions Audit Claude Code permissions for security risks user-invocable /ctx-brainstorm Structured design dialogue before implementation user-invocable /ctx-spec 
Scaffold a feature spec from a project template user-invocable /ctx-import-plans Import Claude Code plan files into project specs user-invocable /ctx-implement Execute a plan step-by-step with verification user-invocable /ctx-loop Generate autonomous loop script user-invocable /ctx-worktree Manage git worktrees for parallel agents user-invocable /ctx-architecture Build and maintain architecture maps user-invocable /ctx-remind Manage session-scoped reminders user-invocable /ctx-doctor Troubleshoot ctx behavior with health checks and event analysis user-invocable /ctx-skill-audit Audit skills against Anthropic prompting best practices user-invocable /ctx-skill-creator Create, improve, and test skills user-invocable /ctx-pause Pause context hooks for this session user-invocable /ctx-resume Resume context hooks after a pause user-invocable","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#session-lifecycle","level":2,"title":"Session Lifecycle","text":"
Skills for starting, running, and ending a productive session.
Session Ceremonies
Two skills in this group are ceremony skills: /ctx-remember (session start) and /ctx-wrap-up (session end). Unlike other skills that work conversationally, these should be invoked as explicit slash commands for completeness. See Session Ceremonies.
Commit code with integrated context persistence: pre-commit checks, staged files, Co-Authored-By trailer, and a post-commit prompt to capture decisions and learnings.
Wraps: git add, git commit, optionally chains to /ctx-add-decision and /ctx-add-learning
End-of-session context persistence ceremony. Gathers signal from git diff, recent commits, and conversation themes. Proposes candidates (learnings, decisions, conventions, tasks) with complete structured fields for user approval, then persists via ctx add. Offers /ctx-commit if uncommitted changes remain. Ceremony skill: invoke explicitly at session end.
Record a project-specific gotcha, bug, or unexpected behavior. Filters for insights that are searchable, project-specific, and required real effort to discover.
Full journal pipeline: imports unimported sessions first, then batch-enriches all unenriched entries. Filters out short sessions and continuations. Can spawn subagents for large backlogs.
Generate a blog post draft from recent project activity: git history, decisions, learnings, tasks, and journal entries. Requires a narrative arc (problem, approach, outcome).
Consolidate redundant entries in LEARNINGS.md or DECISIONS.md. Groups overlapping entries by keyword similarity, presents candidates, and (with user approval) merges groups into denser combined entries. Originals are archived, not deleted.
Detect and fix context drift: stale paths, missing files, file age staleness, task accumulation, entry count warnings, and constitution violations via ctx drift. Also detects skill drift against canonical templates.
Analyze recent prompting patterns to identify vague or ineffective prompts. Reviews 3-5 journal entries and suggests rewrites with positive observations.
Troubleshoot ctx behavior. Runs structural health checks via ctx doctor, analyzes event log patterns via ctx system events, and presents findings with suggested actions. The CLI provides the structural baseline; the agent adds semantic analysis of event patterns and correlations.
Wraps: ctx doctor --json, ctx system events --json --last 100, ctx remind list, ctx system message list, reads .ctxrc
Graceful degradation: If event_log is not enabled, the skill still works but with reduced capability. It runs structural checks and notes: \"Enable event_log: true in .ctxrc for hook-level diagnostics.\"
See also: Troubleshooting, ctx doctor CLI, ctx system events CLI
Scan all markdown files under docs/ for broken links. Three passes: internal links (verify file targets exist on disk), external links (HTTP HEAD with timeout, report failures as warnings), and image references. Resolves relative paths, strips anchors before checking, and skips localhost/example URLs.
Wraps: Glob + Grep to scan, curl for external checks
Audit .claude/settings.local.json for dangerous permissions across four risk categories: hook bypass (Critical), destructive commands (High), config injection vectors (High), and overly broad patterns (Medium). Reports findings by severity and offers specific fix actions with user confirmation.
Wraps: reads .claude/settings.local.json, edits with confirmation
Transform raw ideas into clear, validated designs through structured dialogue before any implementation begins. Follows a gated process: understand context, clarify the idea (one question at a time), surface non-functional requirements, lock understanding with user confirmation, explore 2-3 design approaches with trade-offs, stress-test the chosen approach, and present the detailed design.
Wraps: reads DECISIONS.md, relevant source files; chains to /ctx-add-decision for recording design choices
Trigger phrases: \"let's brainstorm\", \"design this\", \"think through\", \"before we build\", \"what approach should we take?\"
Scaffold a feature spec from the project template and walk through each section with the user. Covers: problem, approach, happy path, edge cases, validation rules, error handling, interface, implementation, configuration, testing, and non-goals. Spends extra time on edge cases and error handling.
Wraps: reads specs/tpl/spec-template.md, writes to specs/, optionally chains to /ctx-add-task
Trigger phrases: \"spec this out\", \"write a spec\", \"create a spec\", \"design document\"
Import Claude Code plan files (~/.claude/plans/*.md) into the project's specs/ directory. Lists plans with dates and H1 titles, supports filtering (--today, --since, --all), slugifies headings for filenames, and optionally creates tasks referencing each imported spec.
Wraps: reads ~/.claude/plans/*.md, writes to specs/, optionally chains to /ctx-add-task
See also: Importing Claude Code Plans, Tracking Work Across Sessions
Execute a multi-step plan with build and test verification at each step. Loads a plan from a file or conversation context, breaks it into atomic steps, and checkpoints after every 3-5 steps.
Wraps: reads plan file, runs verification commands (go build, go test, etc.)
Generate a ready-to-run shell script for autonomous AI iteration. Supports Claude Code, Aider, and generic tool templates with configurable completion signals.
Manage git worktrees for parallel agent development. Create sibling worktrees on dedicated branches, analyze task blast radius for grouping, and tear down with merge.
Build and maintain architecture maps incrementally. Creates or refreshes ARCHITECTURE.md (succinct project map, loaded at session start) and DETAILED_DESIGN.md (deep per-module reference, consulted on-demand). Coverage is tracked in map-tracking.json so each run extends the map rather than re-analyzing everything.
Manage session-scoped reminders via natural language. Translates user intent (\"remind me to refactor swagger\") into the corresponding ctx remind command. Handles date conversion for --after flags.
Audit one or more skills against Anthropic prompting best practices. Checks audit dimensions: positive framing, motivation, phantom references, examples, subagent guards, scope, and descriptions. Reports findings by severity with concrete fix suggestions.
Wraps: reads internal/assets/claude/skills/*/SKILL.md or .claude/skills/*/SKILL.md, references anthropic-best-practices.md
Trigger phrases: \"audit this skill\", \"check skill quality\", \"review the skills\", \"are our skills any good?\"
Create, improve, and test skills. Guides the full lifecycle: capture intent, interview for edge cases, draft the SKILL.md, test with realistic prompts, review results with the user, and iterate. Applies core principles: the agent is already smart (only add what it does not know), the description is the trigger (make it specific and \"pushy\"), and explain the why instead of rigid directives.
Wraps: reads/writes .claude/skills/ and internal/assets/claude/skills/
Trigger phrases: \"create a skill\", \"turn this into a skill\", \"make a slash command\", \"this should be a skill\", \"improve this skill\", \"the skill isn't triggering\"
Pause all context nudge and reminder hooks for the current session. Security hooks still fire. Use for quick investigations or tasks that don't need ceremony overhead.
The ctx plugin ships the skills listed above. Teams can add their own project-specific skills to .claude/skills/ in the project root: These are separate from plugin-shipped skills and are scoped to the project.
Project-specific skills follow the same format and are invoked the same way.
MCP server for tool-agnostic AI integration. Memory bridge connecting Claude Code auto-memory to .context/. Complete CLI restructuring into cmd/ + core/ taxonomy. All user-facing strings externalized to YAML. fatih/color removed; two direct dependencies remain.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v060-the-integration-release","level":3,"title":"v0.6.0: The Integration Release","text":"
Plugin architecture: hooks and skills converted from shell scripts to Go subcommands, shipped as a Claude Code marketplace plugin. Multi-tool hook generation for Cursor, Aider, Copilot, and Windsurf. Webhook notifications with encrypted URL storage.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v030-the-discipline-release","level":3,"title":"v0.3.0: The Discipline Release","text":"
Journal static site generation via zensical. 49-skill audit and fix pass (positive framing, phantom reference removal, scope tightening). Context consolidation skill. golangci-lint v2 migration.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v020-the-archaeology-release","level":3,"title":"v0.2.0: The Archaeology Release","text":"
Session journal system: ctx journal import converts Claude Code JSONL transcripts to browsable Markdown. Constants refactor with semantic prefixes (Dir*, File*, Filename*). CRLF handling for Windows compatibility.
Trust model, vulnerability reporting, permission hygiene, and security design principles.
","path":["Security"],"tags":[]},{"location":"security/agent-security/","level":1,"title":"Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#defense-in-depth-securing-ai-agents","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-problem","level":2,"title":"The Problem","text":"
An unattended AI agent with unrestricted access to your machine is an unattended shell with unrestricted access to your machine.
This is not a theoretical concern. AI coding agents execute shell commands, write files, make network requests, and modify project configuration. When running autonomously (overnight, in a loop, without a human watching), the attack surface is the full capability set of the operating system user account.
The risk is not that the AI is malicious. The risk is that the AI is controllable: it follows instructions from context, and context can be poisoned.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#threat-model","level":2,"title":"Threat Model","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#how-agents-get-compromised","level":3,"title":"How Agents Get Compromised","text":"
AI agents follow instructions from multiple sources: system prompts, project files, conversation history, and tool outputs. An attacker who can inject content into any of these sources can redirect the agent's behavior.
| Vector | How it works |
| --- | --- |
| Prompt injection via dependencies | A malicious package includes instructions in its README, changelog, or error output. The agent reads these during installation or debugging and follows them. |
| Prompt injection via fetched content | The agent fetches a URL (documentation, API response, Stack Overflow answer) containing embedded instructions. |
| Poisoned project files | A contributor adds adversarial instructions to CLAUDE.md, .cursorrules, or .context/ files. The agent loads these at session start. |
| Self-modification between iterations | In an autonomous loop, the agent modifies its own configuration files. The next iteration loads the modified config with no human review. |
| Tool output injection | A command's output (error messages, log lines, file contents) contains instructions the agent interprets and follows. |
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#what-can-a-compromised-agent-do","level":3,"title":"What Can a Compromised Agent Do","text":"
The impact depends entirely on what permissions and access the agent has:
| Access level | Potential impact |
| --- | --- |
| Unrestricted shell | Execute any command, install software, modify system files |
| Network access | Exfiltrate source code, credentials, or context files to external servers |
| Docker socket | Escape container isolation by spawning privileged sibling containers |
| SSH keys | Pivot to other machines, push to remote repositories, access production systems |
| Write access to own config | Disable its own guardrails for the next iteration |
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-defense-layers","level":2,"title":"The Defense Layers","text":"
No single layer is sufficient. Each layer catches what the others miss.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
Markdown files like CONSTITUTION.md and the Agent Playbook tell the agent what to do and what not to do. These are probabilistic: the agent usually follows them, but there is no enforcement mechanism.
What it catches: Most common mistakes. An agent that has been told \"never delete production data\" will usually not delete production data.
What it misses: Prompt injection. A sufficiently crafted injection can override soft instructions. Long context windows dilute attention on rules stated early. Edge cases where instructions are ambiguous.
Verdict: Necessary but not sufficient. Good for the common case. Do not rely on it for security boundaries.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
AI tool runtimes (Claude Code, Cursor, etc.) provide permission systems: tool allowlists, command restrictions, confirmation prompts.
For Claude Code, ctx init writes both an allowlist and an explicit deny list into .claude/settings.local.json. The golden images live in internal/assets/permissions/:
Allowlist (allow.txt): only these tools run without confirmation:
Bash(ctx:*)
Skill(ctx-add-convention)
Skill(ctx-add-decision)
... # all bundled ctx-* skills
Deny list (deny.txt): these are blocked even if the agent requests them:
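The shipped deny list is not reproduced here; an illustrative sketch, limited to the entries this page mentions (sudo, curl, wget) and following the same rule syntax as the allowlist, might be:

```
Bash(sudo:*)
Bash(curl:*)
Bash(wget:*)
```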
What it catches: The agent cannot run commands outside the allowlist, and the deny list blocks dangerous operations even if a future allowlist change were to widen access. If rm, curl, sudo, or docker are not allowed and sudo/curl/wget are explicitly denied, the agent cannot invoke them regardless of what any prompt says.
What it misses: The agent can modify the allowlist itself. In an autonomous loop, if the agent writes to .claude/settings.local.json, and the next iteration loads the modified config, then the protection is effectively lost. The application enforces the rules, but the application reads the rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention (Layer 3).
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-3-os-level-isolation-deterministic-and-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Deterministic and Unbypassable)","text":"
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
| Control | Purpose |
| --- | --- |
| Dedicated user account | No sudo, no privileged group membership (docker, wheel, adm). The agent cannot escalate privileges. |
| Filesystem permissions | Project directory writable; everything else read-only or inaccessible. Agent cannot reach other projects, home directories, or system config. |
| Immutable config files | CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md owned by a different user or marked immutable (chattr +i on Linux). The agent cannot modify its own guardrails. |
What it catches: Privilege escalation, self-modification, lateral movement to other projects or users.
What it misses: Actions within the agent's legitimate scope. If the agent has write access to source code (which it needs to do its job), it can introduce vulnerabilities in the code itself.
Verdict: Essential. This is the layer that makes the other layers trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
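As a concrete sketch (Linux, run as root once per host; the user name agent and the project path are illustrative assumptions):

```
# dedicated unprivileged user: no sudo, no docker/wheel/adm membership
useradd --create-home --shell /bin/bash agent

# only the project directory is writable by the agent
chown -R agent:agent /srv/agent/project

# guardrail files owned by root and marked immutable
chown root:root /srv/agent/project/CLAUDE.md
chattr +i /srv/agent/project/CLAUDE.md \
          /srv/agent/project/.context/CONSTITUTION.md
```

chattr +i requires a filesystem that supports immutable attributes (ext4, xfs); elsewhere, ownership by a different user achieves a similar effect.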
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data. It also cannot ingest new instructions mid-loop from external documents, API responses, or hostile content.
| Scenario | Recommended control |
| --- | --- |
| Agent does not need the internet | --network=none (container) or outbound firewall drop-all |
| Agent needs to fetch dependencies | Allow specific registries (npmjs.com, proxy.golang.org, pypi.org) via firewall rules. Block everything else. |
| Agent needs API access | Allow specific API endpoints only. Use an HTTP proxy with allowlisting. |
What it catches: Data exfiltration, phone-home payloads, downloading additional tools, and instruction injection via fetched content.
What it misses: Nothing, if the agent genuinely does not need the network. The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
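For the container scenarios, a minimal sketch (the network name, subnet, registry address, and image name are placeholder assumptions):

```
# no network at all: nothing in, nothing out
docker run --rm --network=none agent-sandbox:latest

# dependency fetch only: a dedicated bridge whose forwarded traffic
# is restricted on the host to specific registry addresses
docker network create --subnet 172.30.0.0/24 deps-only
docker run --rm --network=deps-only agent-sandbox:latest
iptables -A FORWARD -s 172.30.0.0/24 -d <registry-ip> -j ACCEPT
iptables -A FORWARD -s 172.30.0.0/24 -j DROP
```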
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine (or something that behaves like one).
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
Critical: never mount the Docker socket (/var/run/docker.sock).
An agent with socket access can spawn sibling containers with full host access, effectively escaping the sandbox.
Use rootless Docker or Podman to eliminate this escalation path.
Virtual machines: The strongest isolation. The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
Resource limits: CPU, memory, and disk quotas prevent a runaway agent from consuming all resources. Use ulimit, cgroup limits, or container resource constraints.
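Combining these container-level controls into one invocation (the image name and host path are illustrative; the flags are standard docker run options):

```
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network=none \
  --memory=4g --cpus=2 --pids-limit=256 \
  --read-only --tmpfs /tmp \
  -v /srv/agent/project:/work -w /work \
  agent-sandbox:latest
```

Run this under rootless Docker or Podman so that even a container escape lands in an unprivileged user, and never add a socket mount.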
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A defense-in-depth setup for overnight autonomous runs:
| Layer | Implementation | Stops |
| --- | --- | --- |
| Soft instructions | CONSTITUTION.md with \"never delete tests\", \"always run tests before committing\" | Common mistakes (probabilistic) |
| Application allowlist | .claude/settings.local.json with explicit tool permissions | Unauthorized commands (deterministic within runtime) |
| Immutable config | chattr +i on CLAUDE.md, .claude/, CONSTITUTION.md | Self-modification between iterations |
| Unprivileged user | Dedicated user, no sudo, no docker group | Privilege escalation |
| Container | --cap-drop=ALL --network=none, rootless, no socket mount | Host escape, network exfiltration |
| Resource limits | --memory=4g --cpus=2, disk quotas | Resource exhaustion |
Each layer is straightforward: The strength is in the combination.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#common-mistakes","level":2,"title":"Common Mistakes","text":"
\"I'll just use --dangerously-skip-permissions\": This disables Layer 2 entirely. Without Layers 3-5, you have no protection at all. Only use this flag inside a properly isolated container or VM.
\"The agent is sandboxed in Docker\": A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"CONSTITUTION.md says not to do that\": Markdown is a suggestion. It works most of the time. It is not a security boundary. Do not use it as one.
\"I reviewed the CLAUDE.md, it's fine\": The agent can modify CLAUDE.md during iteration N. Iteration N+1 loads the modified version. Unless the file is immutable, your review is stale.
\"The agent only has access to this one project\": Does the project directory contain .env files, SSH keys, API tokens, or credentials? Does it have a .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-security-considerations","level":2,"title":"Team Security Considerations","text":"
When multiple developers share a .context/ directory, security considerations extend beyond single-agent hardening.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#code-review-for-context-files","level":3,"title":"Code Review for Context Files","text":"
Treat .context/ changes like code changes. Context files influence agent behavior (a modified CONSTITUTION.md or CONVENTIONS.md changes what every agent on the team will do next session). Review them in PRs with the same scrutiny you apply to production code. In review, watch for:
New decisions that contradict existing ones without acknowledging it
Learnings that encode incorrect assumptions
Task additions that bypass the team's prioritization process
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#gitignore-patterns","level":3,"title":"Gitignore Patterns","text":"
ctx init configures .gitignore automatically, but verify these patterns are in place:
Team decision: scratchpad.enc (encrypted, safe to commit for shared scratchpad state); add it to .gitignore if scratchpads are personal
Never committed: .env, credentials, API keys (enforced by drift secret detection)
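A sketch of the resulting patterns (exact paths depend on your setup; sessions/ and journal/ hold raw conversation data, and .context/state/ holds machine-local hook state):

```
# raw conversation data: never commit
.context/sessions/
.context/journal/

# hook state (project-scoped, machine-local)
.context/state/

# secrets
.env
```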
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#multi-developer-context-sharing","level":3,"title":"Multi-Developer Context Sharing","text":"
CONSTITUTION.md is the shared contract. All team members and their agents inherit it. Changes require team consensus, not unilateral edits.
When multiple agents write to the same context files concurrently (e.g., two developers adding learnings simultaneously), git merge conflicts are expected. Resolution is typically additive: accept both additions. Destructive resolution (dropping one side) loses context.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-conventions-for-context-management","level":3,"title":"Team Conventions for Context Management","text":"
Establish and document:
Who reviews context changes: Same reviewers as code, or a designated context owner?
How to resolve conflicting decisions: If two sessions record contradictory decisions, which wins? Default: the later one must explicitly supersede the earlier one with rationale.
Frequency of context maintenance: Weekly ctx drift checks, monthly consolidation passes, archival after each milestone.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#checklist","level":2,"title":"Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible to the agent
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
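Several of these checks can be automated before each run. A preflight sketch in POSIX shell; the helper and the specific checks are illustrative, not exhaustive:

```shell
#!/bin/sh
# Preflight checks before an unattended agent run.

# in_group GROUPS NAME: succeed if NAME appears in the space-separated GROUPS list
in_group() { echo "$1" | tr ' ' '\n' | grep -qx "$2"; }

groups_now=$(id -nG)
for g in sudo docker wheel adm; do
  if in_group "$groups_now" "$g"; then
    echo "FAIL: agent user is in privileged group '$g'"
  fi
done

# the Docker socket must not be reachable from inside the sandbox
[ -S /var/run/docker.sock ] && echo "FAIL: docker socket is mounted"

# no credentials in the project directory
find . -maxdepth 2 -name '.env' 2>/dev/null | grep -q . && echo "FAIL: .env present"
```

Wire a non-empty FAIL output into your launcher so the loop refuses to start when any check trips.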
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#further-reading","level":2,"title":"Further Reading","text":"
Running an Unattended AI Agent: the ctx recipe for autonomous loops, including step-by-step permissions and isolation setup
Security: ctx's own trust model and vulnerability reporting
Autonomous Loops: full documentation of the loop pattern, prompt templates, and troubleshooting
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/reporting/","level":1,"title":"Security Policy","text":"","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#reporting-vulnerabilities","level":2,"title":"Reporting Vulnerabilities","text":"
At ctx we take security very seriously.
If you discover a security vulnerability in ctx, please report it responsibly.
Do NOT open a public issue for security vulnerabilities.
If your report contains sensitive details (proof-of-concept exploits, credentials, or internal system information), you can encrypt your message with our PGP key:
In-repo: SECURITY_KEY.asc
Keybase: keybase.io/alekhinejose
# Import the key
gpg --import SECURITY_KEY.asc

# Encrypt your report
gpg --armor --encrypt --recipient security@ctx.ist report.txt
Encryption is optional. Unencrypted reports to security@ctx.ist or via GitHub Private Reporting are perfectly fine.
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#what-to-include","level":3,"title":"What to Include","text":"
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities (unless they prefer to remain anonymous).
ctx is a volunteer-maintained open source project.
The timelines below are guidelines, not guarantees, and depend on contributor availability.
We will address security reports on a best-effort basis and prioritize them by severity.
| Stage | Timeframe |
| --- | --- |
| Acknowledgment | Within 48 hours |
| Initial assessment | Within 7 days |
| Resolution target | Within 30 days (depending on severity) |
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#trust-model","level":2,"title":"Trust Model","text":"
ctx operates within a single trust boundary: the local filesystem.
The person who authors .context/ files is the same person who runs the agent that reads them. There is no remote input, no shared state, and no server component.
This means:
ctx does not sanitize context files for prompt injection. This is a deliberate design choice, not an oversight. The files are authored by the developer who owns the machine: Sanitizing their own instructions back to them would be counterproductive.
If you place adversarial instructions in your own .context/ files, your agent will follow them. This is expected behavior. You control the context; the agent trusts it.
Shared Repositories
In shared repositories, .context/ files should be reviewed in code review (the same way you would review CI/CD config or Makefiles). A malicious contributor could add harmful instructions to CONSTITUTION.md or TASKS.md.
No secrets in context: The constitution explicitly forbids storing secrets, tokens, API keys, or credentials in .context/ files
Local only: ctx runs entirely locally with no external network calls
No code execution: ctx reads and writes Markdown files only; it does not execute arbitrary code
Git-tracked: Core context files are meant to be committed, so they should never contain sensitive data. Exception: sessions/ and journal/ contain raw conversation data and should be gitignored
Claude Code evaluates permissions in deny → ask → allow order. ctx init automatically populates permissions.deny with rules that block dangerous operations before the allow list is ever consulted.
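For orientation, a trimmed sketch of what such a settings file can look like; the specific rules here are illustrative, not the shipped golden image:

```json
{
  "permissions": {
    "deny": [
      "Bash(sudo:*)",
      "Bash(curl:*)",
      "Bash(wget:*)"
    ],
    "allow": [
      "Bash(ctx:*)"
    ]
  }
}
```

Because deny is evaluated first, a rule that appears in both lists is still blocked.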
Hook state files (throttle markers, prompt counters, pause markers) are stored in .context/state/, which is project-scoped and gitignored. State files are automatically managed by the hooks that create them; no manual cleanup is needed.
Review before committing: Always review .context/ files before committing
Use .gitignore: If you must store sensitive notes locally, add them to .gitignore
Drift detection: Run ctx drift to check for potential issues
Permission audit: Review .claude/settings.local.json after busy sessions
","path":["Security","Security Policy"],"tags":[]},{"location":"thesis/","level":1,"title":"Context as State","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#a-persistence-layer-for-human-ai-cognition","level":2,"title":"A Persistence Layer for Human-AI Cognition","text":"
As AI tools evolve from code-completion utilities into reasoning collaborators, the knowledge that governs their behavior becomes as important as the code they produce. Yet that knowledge is routinely discarded at the end of every session.
AI-assisted development systems assemble context at prompt time using heuristic retrieval from mutable sources: recent files, semantic search results, session history. These approaches optimize relevance at the moment of generation but do not persist the cognitive state that produced decisions. Reasoning is not reproducible, intent is lost across sessions, and teams cannot audit the knowledge that constrains automated behavior.
This paper argues that context should be treated as deterministic, version-controlled state rather than as a transient query result. We ground this argument in three sources of evidence: a landscape analysis of 17 systems spanning AI coding assistants, agent frameworks, and knowledge stores; a taxonomy of five primitive categories that reveals irrecoverable architectural trade-offs; and an experience report from ctx, a persistence layer for AI-assisted development, which developed itself using its own persistence model across 389 sessions over 33 days. We define a three-tier model for cognitive state: authoritative knowledge, delivery views, and ephemeral state. Then we present six design invariants empirically validated by 56 independent rejection decisions observed across the analyzed landscape. We show that context determinism applies to assembly, not to model output, and that the curation cost this model requires is offset by compounding returns in reproducibility, auditability, and team cognition.
The introduction of large language models into software development has shifted the primary interface from code execution to interactive reasoning. In this environment, the correctness of an output depends not only on source code but on the context supplied to the model: the conventions, decisions, architectural constraints, and domain knowledge that bound the space of acceptable responses.
Current systems treat context as a query result assembled at the moment of interaction. A developer begins a session; the tool retrieves what it estimates to be relevant from chat history, recent files, and vector stores; the model generates output conditioned on this transient assembly; the session ends, and the context evaporates. The next session begins the cycle again.
This model has improved substantially over the past year. CLAUDE.md files, Cursor rules, Copilot's memory system, and tools such as Mem0, Letta, and Kindex each address aspects of the persistence problem. Yet across 17 systems we analyzed spanning AI coding assistants, agent frameworks, autonomous coding agents, and purpose-built knowledge stores, no system provides all five of the following properties simultaneously: deterministic context assembly, human-readable file-based persistence, token-budgeted delivery, zero runtime dependencies, and local-first operation.
This paper does not propose a universal replacement for retrieval-centric workflows. It defines a persistence layer (embodied in ctx (https://ctx.ist)) whose advantages emerge under specific operational conditions: when reproducibility is a requirement, when knowledge must outlive sessions and individuals, when teams require shared cognitive authority, or when offline operation is necessary.
The trade-offs (manual curation cost, reduced automatic recall, coarser granularity) are intentional and mirror the trade-offs accepted by systems that favor reproducibility over convenience, such as reproducible builds and immutable infrastructure [16].
The contribution is threefold: a three-tier model for cognitive state that resolves the ambiguity between authoritative knowledge and ephemeral session artifacts; six design invariants empirically grounded in a cross-system landscape analysis; and an experience report demonstrating that the model produces compounding returns when applied to its own development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#2-the-limits-of-prompt-time-context","level":2,"title":"2. The Limits of Prompt-Time Context","text":"
Prompt-time assembly pipelines typically consist of corpus selection, retrieval, ranking, and truncation. These pipelines are probabilistic and time-dependent, producing three failure modes that compound over the lifetime of a project.
If context is derived from mutable sources using heuristic ranking, identical requests at different times receive different inputs. A developer who asks \"What is our authentication strategy?\" on Tuesday may receive a different context window than the same question on Thursday: Not because the strategy changed, but because the retrieval heuristic surfaced different fragments.
Reproducibility (the ability to reconstruct the exact inputs that produced a given output) is a foundational property of reliable systems. Its loss in AI-assisted development mirrors the historical evolution from ad-hoc builds to deterministic build systems [12]. The build community learned that when outputs depend on implicit state (environment variables, system clocks, network-fetched dependencies), debugging becomes archaeology. The same principle applies when AI outputs depend on non-deterministic context retrieval.
Embedding-based memory increases recall but reduces inspectability. When a vector store determines that a code snippet is \"similar\" to the current query, the ranking function is opaque: the developer cannot inspect why that snippet was chosen, whether a more relevant artifact was excluded, or whether the ranking will remain stable. This prevents deterministic debugging, policy auditing, and causal attribution (properties that information retrieval theory identifies as fundamental trade-offs of probabilistic ranking) [3].
In practice, this opacity manifests as a compliance ceiling. In our experience developing a context management system (detailed in Section 7), soft instructions (directives that ask an AI agent to read specific files or follow specific procedures) achieve approximately 75-85% compliance. The remaining 15-25% represents cases where the agent exercises judgment about whether the instruction applies, effectively applying a second ranking function on top of the explicit directive. When 100% compliance is required, instruction is insufficient; the content must be injected directly, removing the agent's option to skip it.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#23-loss-of-intent","level":3,"title":"2.3 Loss of Intent","text":"
Session transcripts record interaction but not cognition. A transcript captures what was said but not which assumptions were accepted, which alternatives were rejected, or which constraints governed the decision. The distinction matters: a decision to use PostgreSQL recorded as a one-line note (\"Use PostgreSQL\") teaches a model what was decided; a structured record with context, rationale, and consequences teaches it why (and why is what prevents the model from unknowingly reversing the decision in a future session) [4].
Session transcripts provide history. Cognitive state requires something more: the persistent, structured representation of the knowledge required for correct decision-making.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#3-cognitive-state-a-three-tier-model","level":2,"title":"3. Cognitive State: A Three-Tier Model","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#31-definitions","level":3,"title":"3.1 Definitions","text":"
We define cognitive state as the authoritative, persistent representation of the knowledge required for correct decision-making within a project. It is human-authored or human-ratified, versioned, inspectable, and reproducible. It is distinct from logs, transcripts, retrieval results, and model-generated summaries.
Previous formulations of this idea have treated cognitive state as a monolithic concept. In practice, a three-tier model better captures the operational reality:
Tier 1: Authoritative State: The canonical knowledge that the system treats as ground truth. In a concrete implementation, this corresponds to a set of human-curated files with defined schemas: a constitution (inviolable rules), conventions (code patterns), an architecture document (system structure), decision records (choices with rationale), learnings (captured experience), a task list (current work), a glossary (domain terminology), and an agent playbook (operating instructions). Each file has a single purpose, a defined lifecycle, and a distinct update frequency. Authoritative state is version-controlled alongside code and reviewed through the same mechanisms (diffs, pull requests, blame annotations).
Tier 2: Delivery Views: Derived representations of authoritative state, assembled for consumption by a model. A delivery view is produced by a deterministic assembly function that takes the authoritative state, a token budget, and an inclusion policy as inputs and produces a context window as output. The same authoritative state, budget, and policy must always produce the same delivery view. Delivery views are ephemeral (they exist only for the duration of a session), but their construction is reproducible.
Tier 3: Ephemeral State: Session transcripts, scratchpad notes, draft journal entries, and other artifacts that exist during or immediately after a session but are not authoritative. Ephemeral state is the raw material from which authoritative state may be extracted through human review, but it is never consumed directly by the assembly function.
This three-tier model resolves confusion present in earlier formulations: the claim that AI output is a deterministic function of the repository state. The corrected claim is that context selection is deterministic (the delivery view is a function of authoritative state), but model output remains stochastic, conditioned on the deterministic context. Formally: view = assemble(state, budget, policy) is a deterministic function, while output ~ model(view, prompt) remains a stochastic draw conditioned on that view.
The persistence layer's contribution is making assemble reproducible, not making model deterministic.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#32-separation-of-concerns","level":3,"title":"3.2 Separation of Concerns","text":"
The decision to separate authoritative state into distinct files with distinct purposes is not cosmetic. Different types of knowledge have different lifecycles:
| Knowledge Type | Update Frequency | Read Frequency | Load Priority | Example |
| --- | --- | --- | --- | --- |
| Constitution | Rarely | Every session | Always | \"Never commit secrets to git\" |
| Tasks | Every session | Session start | Always | \"Implement token budget CLI flag\" |
| Conventions | Weekly | Before coding | High | \"All errors use structured logging with severity levels\" |
| Decisions | When decided | When questioning | Medium | \"Use PostgreSQL over MySQL (see ADR-003)\" |
| Learnings | When learned | When stuck | Medium | \"Hook scripts >50ms degrade interactive UX\" |
| Architecture | When changed | When designing | On demand | \"Three-layer pipeline: ingest → enrich → assemble\" |
| Journal | Every session | Rarely | Never auto | \"Session 247: Removed dead-end session copy layer\" |
A monolithic context file would force the assembly function to load everything or nothing. Separation enables progressive disclosure: the minimum context that matters for the current moment, with the option to load more when needed. A normal session loads the constitution, tasks, and conventions; a deep investigation loads decision history and journal entries from specific dates.
The budget mechanism is the constraint that makes separation valuable. Without a budget, the default behavior is to load everything, which destroys the attention density that makes loaded context useful. With a budget, the assembly function must prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings (scored by recency). Entries that do not fit receive title-only summaries rather than being silently dropped (an application of the \"tell me what you don't know\" pattern identified independently by four systems in our landscape analysis).
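The priority-and-budget behavior described above can be sketched as a deterministic shell function. This is an illustration, not the ctx implementation; file names are examples, and a byte count stands in for a token budget:

```shell
# assemble BUDGET FILE... : concatenate files in fixed priority order until the
# byte budget is exhausted; files that do not fit contribute a title-only line.
assemble() {
  budget=$1; shift
  used=0
  for f in "$@"; do
    [ -f "$f" ] || continue
    size=$(wc -c < "$f")
    if [ $((used + size)) -le "$budget" ]; then
      cat "$f"
      used=$((used + size))
    else
      head -n 1 "$f"   # title-only summary instead of a silent drop
    fi
  done
}

# same files + same budget => the same delivery view, byte for byte
assemble 16000 CONSTITUTION.md TASKS.md CONVENTIONS.md
```

Because the order, the budget, and the fallback are all fixed, re-running the function over unchanged files reproduces the delivery view exactly.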
The following six invariants define the constraints that a cognitive state persistence layer must satisfy. They are not axioms chosen a priori; they are empirically grounded properties whose violation was independently identified as producing complexity costs across the 17 systems we analyzed.
Context files must be human-readable, git-diffable, and editable with any text editor. No database. No binary storage.
Validation: 11 independent rejection decisions across the analyzed landscape protected this property. Systems that adopted embedded records, binary serialization, or knowledge graphs as their core primitive consistently traded away the ability for a developer to run `cat DECISIONS.md` and understand the system's knowledge. The inspection cost of opaque storage compounds over the lifetime of a project: every debugging session, every audit, every onboarding conversation requires specialized tooling to access knowledge that could have been a text file.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-2-zero-runtime-dependencies","level":3,"title":"Invariant 2: Zero Runtime Dependencies","text":"
The tool must work with no installed runtimes, no running services, and no API keys for core functionality.
Validation: 13 independent rejection decisions protected this property (the most frequently defended invariant). Systems that required databases (PostgreSQL, SQLite, Redis), embedding models, server daemons, container runtimes, or cloud APIs for core operation introduced failure modes proportional to their dependency count. A persistence layer that depends on infrastructure is not a persistence layer; it is a service. Services have uptime requirements, version compatibility matrices, and operational costs that simple file operations do not.
The same files plus the same budget must produce the same output. No embedding-based retrieval, no LLM-driven selection, no wall-clock-dependent scoring in the assembly path.
Validation: 6 independent rejection decisions protected this property. Non-deterministic assembly (whether from embedding variance, LLM-based selection, or time-dependent scoring) destroys the ability to reproduce a context window and therefore to diagnose why a model produced a given output. Determinism in the assembly path is what makes the persistence layer auditable.
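One consequence of deterministic assembly is that reproducibility can be verified cheaply: because the delivery view is a pure function of its inputs, fingerprinting the inputs guarantees a byte-identical output. `view_fingerprint` below is a hypothetical helper, not part of any described system.

```python
import hashlib

def view_fingerprint(files: dict[str, str], budget: int) -> str:
    """Fingerprint the assembly inputs. Given a deterministic assembly
    function, equal fingerprints imply byte-identical delivery views,
    which is what makes a past context window reproducible and auditable."""
    h = hashlib.sha256()
    h.update(str(budget).encode())
    for path in sorted(files):  # fixed iteration order, independent of dict order
        h.update(path.encode())
        h.update(files[path].encode())
    return h.hexdigest()
```

Any change to a file or to the budget changes the fingerprint, flagging that a past view can no longer be reproduced from current state.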
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-4-human-authority-over-persistent-state","level":3,"title":"Invariant 4: Human Authority Over Persistent State","text":"
The agent may propose changes to context files but must not unilaterally modify them. All persistent changes go through human-reviewable git commits.
Validation: 6 independent rejection decisions protected this property. Systems that allowed agents to self-modify their memory (writing freeform notes, auto-pruning old entries, generating summaries as ground truth) consistently produced lower-quality persistent context than systems that enforced human review. Structure is a feature, not a limitation: across the landscape, the pattern "structured beats freeform" was independently discovered by four systems that evolved from freeform LLM summaries to typed schemas with required fields.
Core functionality must work offline with no network access. Cloud services may be used for optional features but never for core context management.
Validation: 7 independent rejection decisions protected this property. Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios. A filesystem-native model continues to function under all conditions where the repository is accessible.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-6-no-default-telemetry","level":3,"title":"Invariant 6: No Default Telemetry","text":"
Any analytics, if ever added, must be strictly opt-in.
Validation: 4 independent rejection decisions protected this property. Default telemetry erodes the trust model that a persistence layer depends on. If developers must trust the system with their architectural decisions, operational learnings, and project constraints, the system cannot simultaneously be reporting usage data to external services.
These six invariants collectively define a design space. Each feature proposal can be evaluated against them: a feature that violates any invariant is rejected regardless of how many other systems implement it. The discipline of constraint (refusing to add capabilities that compromise foundational properties) is itself an architectural contribution. Across the 17 analyzed systems, 56 patterns were explicitly rejected for violating these invariants. The rejection count per invariant (11, 13, 6, 6, 7, 4) provides a rough measure of each property's vulnerability to architectural erosion. A representative sample of these rejections is provided in Appendix A.1.
The 17 systems were selected to cover the architectural design space rather than to achieve completeness. Each included system satisfies three criteria: it represents a distinct architectural primitive for AI-assisted development, it is actively maintained or widely referenced, and it provides sufficient public documentation or source code for architectural inspection. The goal was to ensure that every major category of primitive (document, embedded record, state snapshot, event/message, construction/derivation) was represented by multiple systems, enabling cross-system pattern detection.
The resulting set spans six categories: AI coding assistants (Continue, Sourcegraph/Cody, Aider, Claude Code), AI agent frameworks (CrewAI, AutoGen, LangGraph, LlamaIndex, Letta/MemGPT), autonomous coding agents (OpenHands, Sweep), session provenance tools (Entire), data versioning systems (Dolt, Pachyderm), pipeline/build systems (Dagger), and purpose-built knowledge stores (QubicDB, Kindex). Each system was analyzed from its source code and documentation, producing 34 individual analysis artifacts (an architectural profile and a set of insights per system) that yielded 87 adopt/adapt recommendations, 56 explicit rejection decisions, and 52 watch items.
Every system in the AI-assisted development landscape operates on a core primitive: an atomic unit around which the entire architecture revolves. Our analysis of 17 systems reveals five categories of primitives, each making irrecoverable trade-offs:
Group A: Document/File Primitives: Human-readable documents as the primary unit. Documents are authored by humans, version-controlled in git, and consumed by AI tools. The invariant of this group is that the primitive is always human-readable and version-controllable with standard tools. Three systems participate in this pattern: the system described in this paper as a pure expression, plus Continue (via its rules directory) and Claude Code (via CLAUDE.md files) as partial participants; both use document-based context as an input but organize around different core primitives.
Group B: Embedded Record Primitives: Vector-embedded records stored with numerical embeddings for similarity search, metadata for filtering, and scoring mechanisms for ranking. Five systems use this approach (LlamaIndex, CrewAI, Letta/MemGPT, QubicDB, Kindex). The invariant is that the primitive requires an embedding model or vector database for core operations: a dependency that precludes offline and air-gapped use.
Group C: State Snapshot Primitives: Point-in-time captures of the complete system state. The invariant is that any past state can be reconstructed at any historical point. Three systems use this approach (LangGraph, Entire, Dolt).
Group D: Event/Message Primitives: Sequential events or messages forming an append-only log with causal relationships. Four systems use this approach (OpenHands, AutoGen, Claude Code, Sweep). The invariant is temporal ordering and append-only semantics.
Group E: Construction/Derivation Primitives: Derived or constructed values that encode how they were produced. The invariant is that the primitive is a function of its inputs; re-executing the same inputs produces the same primitive. Three systems use this approach (Dagger, Pachyderm, Aider).
The five primitive categories differ along seven dimensions:
| Property | Document | Embedded Record | State Snapshot | Event/Message | Construction |
|---|---|---|---|---|---|
| Human-readable | Yes | No | Varies | Partially | No |
| Version-controllable | Yes | No | Varies | Yes | Yes |
| Queryable by meaning | No | Yes | No | No | No |
| Rewindable | Via git | No | Yes | Yes (replay) | Yes |
| Deterministic | Yes | No | Yes | Yes | Yes |
| Zero-dependency | Yes | No | Varies | Varies | Varies |
| Offline-capable | Yes | No | Varies | Varies | Yes |
The document primitive is the only one that simultaneously satisfies human-readability, version-controllability, determinism, zero dependencies, and offline capability. This is not because documents are superior in general (embedded records provide semantic queryability that documents lack) but because the combination of all five properties is what the persistence layer requires. The choice between primitive categories is not a matter of capability but of which properties are considered invariant.
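The comparison can be read mechanically. Treating "Varies" and "Partially" as not guaranteed, a simple set filter over the table's property profiles confirms that only the document primitive carries all five required properties:

```python
# Property profiles per primitive category, transcribed from the
# comparison table ("Varies"/"Partially" treated as not guaranteed).
PRIMITIVES = {
    "document":        {"human_readable", "version_controllable", "deterministic",
                        "zero_dependency", "offline"},
    "embedded_record": {"queryable_by_meaning"},
    "state_snapshot":  {"rewindable", "deterministic"},
    "event_message":   {"version_controllable", "rewindable", "deterministic"},
    "construction":    {"version_controllable", "rewindable", "deterministic", "offline"},
}

# The five properties the persistence layer requires.
REQUIRED = {"human_readable", "version_controllable", "deterministic",
            "zero_dependency", "offline"}

satisfying = [name for name, props in PRIMITIVES.items() if REQUIRED <= props]
```

The filter leaves only `"document"`, which is the claim of this section: the choice is about which properties are invariant, not about raw capability (no primitive here offers semantic queryability except the embedded record).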
Across the 17 analyzed systems, six design patterns were independently discovered. These convergent patterns carry extra validation weight because they emerged from different problem spaces:
Pattern 1: "Tell me what you don't know": When context is incomplete, explicitly communicate to the model what information is missing and what confidence level the provided context represents. Four systems independently converged on this pattern: inserting skip markers, tracking evidence gaps, annotating provenance, or naming output quality tiers.
Pattern 2: "Freshness matters": Information relevance decreases over time. Three systems independently chose exponential decay with different half-lives (30 days, 90 days, and LRU ordering). Static priority ordering with no time dimension leaves relevant recent knowledge at the same priority as stale entries. This pattern is in productive tension with the persistence model's emphasis on determinism: the claim is not that time-dependence is irrelevant, but that it belongs in the curation step (a human deciding to consolidate or archive stale entries) rather than in the assembly function (an algorithm silently down-ranking entries based on age).
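Exponential decay with a half-life is a one-line formula. The sketch below shows it as a curation aid (flagging consolidation candidates), consistent with this model's rule that time-dependent scoring stays out of the assembly path; the 30-day default mirrors one of the half-lives reported above.

```python
def staleness_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Exponential decay: the weight halves every half_life_days.
    In the persistence model this belongs in human curation (e.g.
    flagging low-weight entries for consolidation), not in assembly."""
    return 0.5 ** (age_days / half_life_days)
```

A curation pass might, for example, list every entry whose weight falls below 0.25 (two half-lives old) for human review.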
Pattern 3: "Content-address everything": Compute a hash of content at creation time for deduplication, cache invalidation, integrity verification, and change detection. Five systems independently implement content hashing, each discovering it solves different problems 5.
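Content addressing in its minimal form is a hash of the bytes, computed once at creation time. This sketch uses SHA-256 purely as an illustration of the pattern:

```python
import hashlib

def content_address(text: str) -> str:
    """Content hash computed at creation time. Equal hashes mean
    identical content (deduplication); a changed hash signals that
    the entry changed (cache invalidation, change detection)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()
```

The same address doubles as an integrity check: recomputing the hash over stored content and comparing it to the recorded address detects corruption or silent edits.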
Pattern 4: "Structured beats freeform": When capturing knowledge or session state, a structured schema with required fields produces more useful data than freeform text. Four systems evolved from freeform summaries to typed schemas: one moving from LLM-generated prose to a structured condenser with explicit fields for completed tasks, pending tasks, and files modified.
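The shape of such a schema can be sketched with a dataclass. The field names echo the condenser fields mentioned above; the type itself is hypothetical, not any system's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SessionSummary:
    """Typed session condenser: required fields fail loudly at
    construction time instead of surfacing as silent gaps in later
    sessions, unlike a freeform prose summary."""
    completed_tasks: list[str]
    pending_tasks: list[str]
    files_modified: list[str]
    notes: str = ""  # optional freeform remainder
```

The enforcement is the point: a summary that omits `files_modified` is rejected when it is written, not discovered missing when it is read.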
Pattern 5: "Protocol convergence": The Model Context Protocol (MCP) is emerging as a standard tool integration layer. Nine of 17 systems support it, spanning every category in the analysis. MCP's significance for the persistence model is that it provides a transport mechanism for context delivery without dictating how context is stored or assembled. This makes the approach compatible with both retrieval-centric and persistence-centric architectures.
Pattern 6: "Human-in-the-loop for memory": Critical memory decisions should involve human judgment. Fully automated memory management produces lower-quality persistent context than human-reviewed systems. Four systems independently converged on variants of this pattern: ceremony-based consolidation, interrupt/resume for human input, confirmation mode for high-risk actions, and separated "think fast" vs. "think slow" processing paths.
Pattern 6 directly validates the ceremony model described in this paper. The persistence layer requires human curation not because automation is impossible, but because the quality of persistent knowledge degrades when the curation step is removed. The improvement opportunity is to make curation easier, not to automate it away.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#6-worked-example-architectural-decision-under-two-models","level":2,"title":"6. Worked Example: Architectural Decision Under Two Models","text":"
We now instantiate the three-tier model in a concrete system (ctx) and illustrate the difference between prompt-time retrieval and cognitive state persistence using a real scenario from its development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#61-the-problem","level":3,"title":"6.1 The Problem","text":"
During development, the system accumulated three overlapping storage layers for session data: raw transcripts (owned by the AI tool), session copies (JSONL copies plus context snapshots), and enriched journal entries (Markdown summaries). The middle layer (session copies) was a dead-end write sink. An auto-save hook copied transcripts to a directory that nothing read from, because the journal pipeline already read directly from the raw transcripts. Approximately 15 source files, a shell hook, 20 configuration constants, and 30 documentation references supported infrastructure with no consumers.
In a retrieval-based system, the decision to remove the middle layer depends on whether the retrieval function surfaces the relevant context:
The developer asks: "Should we simplify the session storage?" The retrieval system must find and rank the original discussion thread where the three layers were designed, the usage statistics showing zero reads from the middle layer, the journal pipeline documentation showing it reads from raw transcripts directly, and the dependency analysis showing 15 files, a hook, and 30 doc references. If any of these fragments are not retrieved (because they are in old chat history, because the embedding similarity score is low, or because the token budget was consumed by more recent but less relevant context), the model may recommend preserving the middle layer, or may not realize it exists.
Six months later, a new team member asks the same question. The retrieval results will differ: the original discussion has aged out of recency scoring, the usage statistics are no longer in recent history, and the model may re-derive the answer or arrive at a different conclusion.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#63-cognitive-state-model","level":3,"title":"6.3 Cognitive State Model","text":"
In the persistence model, the decision is recorded as a structured artifact at write time:
```markdown
## [2026-02-11] Remove .context/sessions/ storage layer

**Status**: Accepted

**Context**: The session/recall/journal system had three overlapping
storage layers. The recall pipeline reads directly from raw transcripts,
making .context/sessions/ a dead-end write sink that nothing reads from.

**Decision**: Remove .context/sessions/ entirely. Two stores remain:
raw transcripts (global, tool-owned) and enriched journal
(project-local).

**Rationale**: Dead-end write sinks waste code surface, maintenance
effort, and user attention. The recall pipeline already proved that
reading directly from raw transcripts is sufficient. Context snapshots
are redundant with git history.

**Consequence**: Deleted internal/cli/session/ (15 files), removed
auto-save hook, removed --auto-save from watch, removed pre-compact
auto-save, removed /ctx-save skill, updated ~45 documentation files.
Four earlier decisions superseded.
```
This artifact is:
Deterministically included in every subsequent session's delivery view (budget permitting, with title-only fallback if budget is exceeded)
Human-readable and reviewable as a diff in the commit that introduced it
Permanent: it persists in version control regardless of retrieval heuristics
Causally linked: it explicitly supersedes four earlier decisions, creating an auditable chain
When the new team member asks "Why don't we store session copies?" six months later, the answer is the same artifact, at the same revision, with the same rationale. The reasoning is reconstructible because it was persisted at write time, not discovered at query time.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#64-the-diff-when-policy-changes","level":3,"title":"6.4 The Diff When Policy Changes","text":"
If a future requirement re-introduces session storage (for example, to support multi-agent session correlation), the change appears as a diff to the decision record:
```diff
- **Status**: Accepted
+ **Status**: Superseded by [2026-08-15] Reintroduce session storage
+ for multi-agent correlation
```
The new decision record references the old one, creating a chain of reasoning visible in git log. In the retrieval model, the old decision would simply be ranked lower over time and eventually forgotten.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#7-experience-report-a-system-that-designed-itself","level":2,"title":"7. Experience Report: A System That Designed Itself","text":"
The persistence model described in this paper was developed and tested by using it on its own development. Over 33 days and 389 sessions, the system's context files accumulated a detailed record of decisions made, reversed, and consolidated: providing quantitative and qualitative evidence for the model's properties.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#71-scale-and-structure","level":3,"title":"7.1 Scale and Structure","text":"
The development produced the following authoritative state artifacts:
8 consolidated decision records covering 24 original decisions spanning context injection architecture, hook design, task management, security, agent autonomy, and webhook systems
18 consolidated learning records covering 75 original observations spanning agent compliance, hook behavior, testing patterns, documentation drift, and tool integration
A constitution with 13 inviolable rules across 4 categories (security, quality, process, context preservation)
389 enriched journal entries providing a complete session-level audit trail
The consolidation ratio (24 decisions compressed to 8 records, 75 learnings compressed to 18) illustrates the curation cost and its return: authoritative state becomes denser and more useful over time as related entries are merged, contradictions are resolved, and superseded decisions are marked.
Three architectural reversals during development provide evidence that the persistence model captures and communicates reasoning effectively:
Reversal 1: The two-tier persistence model: The original design included a middle storage tier for session copies. After 21 days of development, the middle tier was identified as a dead-end write sink (described in Section 6). The decision record captured the full context, and the removal was executed cleanly: 15 source files, a shell hook, and 45 documentation references. The pattern of a "dead-end write sink" was subsequently observed in 7 of 17 systems in our landscape analysis that store raw transcripts alongside structured context.
Reversal 2: The prompt-coach hook: An early design included a hook that analyzed user prompts and offered improvement suggestions. After deployment, the hook produced zero useful tips, its output channel was invisible to users, and it accumulated orphan temporary files. The hook was removed, and the decision record captured the failure mode for future reference.
Reversal 3: The soft-instruction compliance model: The original context injection strategy relied on soft instructions: directives asking the AI agent to read specific files. After measuring compliance across multiple sessions, we found a consistent 75-85% compliance ceiling. The revised strategy injects content directly, bypassing the agent's judgment about whether to comply. The learning record captures the ceiling measurement and the rationale for the architectural change.
Each reversal was captured as a structured decision record with context, rationale, and consequences. In a retrieval-based system, these reversals would exist only in chat history, discoverable only if the retrieval function happens to surface them. In the persistence model, they are permanent, indexable artifacts that inform future decisions.
The 75-85% compliance ceiling for soft instructions is the most operationally significant finding from the experience report. It means that any context management strategy relying on agent compliance with instructions ("read this file," "follow this convention," "check this list") has a hard ceiling on reliability.
The root cause is structural: the instruction "don't apply judgment" is itself evaluated by judgment. When an agent receives a directive to read a file, it first assesses whether the directive is relevant to the current task (and that assessment is the judgment the directive was trying to prevent).
The architectural response maps directly to the formal model defined in Section 3.1. Content requiring 100% compliance is included in authoritative_state and injected by the deterministic assemble function, bypassing the agent entirely. Content where 80% compliance is acceptable is delivered as instructions within the delivery view. The three-tier architecture makes this distinction explicit: authoritative state is injected; delivery views are assembled deterministically; ephemeral state is available but not pushed.
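This routing rule can be made concrete as a sketch. The threshold and tags are illustrative only: 0.85 is chosen here to sit just above the observed 75-85% compliance ceiling, and the `route` function is not part of any described system.

```python
def route(entry_text: str, required_compliance: float) -> str:
    """Route content by the compliance level it must achieve: content
    needing guaranteed delivery is injected verbatim by deterministic
    assembly, bypassing agent judgment; best-effort content is delivered
    as an instruction the agent may or may not follow."""
    if required_compliance > 0.85:  # above the soft-instruction ceiling
        return f"[injected]\n{entry_text}"
    return f"[instruction] Please consult: {entry_text}"
```

The decision is binary by design: there is no middle tier in which "mostly followed" instructions carry constitution-level constraints.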
Over 33 days, we observed a qualitative shift in the development experience. Early sessions (days 1-7) spent significant time re-establishing context: explaining conventions, re-stating constraints, re-deriving past decisions. Later sessions (days 25-33) began with the agent loading curated context and immediately operating within established constraints, because the constraints were in files rather than in chat history.
This compounding effect (where each session's context curation improves all subsequent sessions) is the primary return on the curation investment. The cost is borne once (writing a decision record, capturing a learning, updating the task list); the benefit is collected on every subsequent session load.
The effect is analogous to compound interest in financial systems: the knowledge base grows not linearly with effort but with increasing marginal returns as new knowledge interacts with existing context. A learning captured on day 5 prevents a mistake on day 12, which avoids a debugging session that would otherwise have consumed that day's session, freeing it for productive work that generates new learnings. The growth is not literally exponential (it is bounded by project scope and subject to diminishing returns as the knowledge base matures), but within the observed 33-day window, the returns were consistently accelerating.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#75-scope-and-generalizability","level":3,"title":"7.5 Scope and Generalizability","text":"
This experience report is self-referential by design: the system was developed using its own persistence model. This circularity strengthens the internal validity of the findings (the model was stress-tested under authentic conditions) but limits external generalizability. The two-week crossover point was observed on a single project of moderate complexity with a small team already familiar with the model's assumptions. Whether the same crossover holds for larger teams, for codebases with different characteristics, or for teams adopting the model without having designed it remains an open empirical question. The quantitative claims in this section should be read as existence proofs (demonstrating that the model can produce compounding returns) rather than as predictions about specific adoption scenarios.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#8-situating-the-persistence-layer","level":2,"title":"8. Situating the Persistence Layer","text":"
The persistence layer occupies a specific position in the stack of AI-assisted development:
```
Application Logic
AI Interaction / Agents
Context Retrieval Systems
Cognitive State Persistence Layer
Version Control / Storage
```
Current systems innovate primarily in the retrieval layer (improving how context is discovered, ranked, and delivered at query time). The persistence layer sits beneath retrieval and above version control. Its role is to maintain the authoritative state that retrieval systems may query but do not own. The relationship is complementary: retrieval answers "What in the corpus might be relevant?"; cognitive state answers "What must be true for this system to operate correctly?" A mature system uses both: retrieval for discovery, persistence for authority.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#9-applicability-and-trade-offs","level":2,"title":"9. Applicability and Trade-Offs","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#91-when-to-use-this-model","level":3,"title":"9.1 When to Use This Model","text":"
A cognitive state persistence layer is most appropriate when:
Reproducibility is a requirement: If a system must be able to answer "Why did this output occur, and can it be produced again?" then deterministic, version-controlled context becomes necessary. This is relevant in regulated environments, safety-critical systems, long-lived infrastructure, and security-sensitive deployments.
Knowledge must outlive sessions and individuals: Projects with multi-year lifetimes accumulate architectural decisions, domain interpretations, and operational policy. If this knowledge is stored only in chat history, issue trackers, and institutional memory, it decays. The persistence model converts implicit knowledge into branchable, reviewable artifacts.
Teams require shared cognitive authority: In collaborative environments, correctness depends on a stable answer to "What does the system believe to be true?" When this answer is derived from retrieval heuristics, authority shifts to ranking algorithms. When it is versioned and human-readable, authority remains with the team.
Offline or air-gapped operation is required: Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#92-when-not-to-use-this-model","level":3,"title":"9.2 When Not to Use This Model","text":"
Zero-configuration personal workflows: For short-lived or exploratory tasks, the cost of explicit knowledge curation outweighs its benefits. Heuristic retrieval is sufficient when correctness is non-critical, outputs are disposable, and historical reconstruction is unnecessary.
Maximum automatic recall from large corpora: Vector retrieval systems provide superior performance when the primary task is searching vast, weakly structured information spaces. The persistence model assumes that what matters can be decided and that this decision is valuable to record.
Fully autonomous agent architectures: Agent runtimes that generate and discard state continuously, optimizing for local goal completion, do not benefit from a model that centers human ratification of knowledge.
The transition does not require full system replacement. An incremental path:
Step 1: Record decisions as versioned artifacts: Instead of allowing conclusions to remain in discussion threads, persist them in reviewable form with context, rationale, and consequences 4. This alone converts ephemeral reasoning into persistent cognitive state.
Step 2: Make inclusion deterministic: Define explicit assembly rules. Retrieval may still exist, but it is no longer authoritative.
Step 3: Move policy into cognitive state: When system behavior depends on stable constraints, encode those constraints as versioned knowledge. Behavior becomes reproducible.
Step 4: Optimize assembly, not retrieval: Once the authoritative layer exists, performance improvements come from budgeting, caching, and structural refinement rather than from improving ranking heuristics.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#94-the-curation-cost","level":3,"title":"9.4 The Curation Cost","text":"
The primary objection to this model is the cost of explicit knowledge curation. This cost is real. Writing a structured decision record takes longer than letting a chatbot auto-summarize a conversation. Maintaining a glossary requires discipline. Consolidating 75 learnings into 18 records requires judgment.
The response is not that the cost is negligible but that it is amortized. A decision record written once is loaded hundreds of times. A learning captured today prevents repeated mistakes across all future sessions. The curation cost is paid once; the benefit compounds.
The experience report provides rough order-of-magnitude numbers. Across 389 sessions over 33 days, curation activities (writing decision records, capturing learnings, updating the task list, consolidating entries) averaged approximately 3-5 minutes per session. In early sessions (days 1-7), before curated context existed, re-establishing context consumed approximately 10-15 minutes per session: re-explaining conventions, re-stating architectural constraints, re-deriving decisions that had been made but not persisted. By the final week (days 25-33), the re-explanation overhead had dropped to near zero: the agent loaded curated context and began productive work immediately.
At ~12 sessions per day, the curation cost was roughly 35-60 minutes daily. The re-explanation cost in the first week was roughly 120-180 minutes daily. By the third week, that cost had fallen to under 15 minutes daily while the curation cost remained stable. The crossover (where cumulative curation cost was exceeded by cumulative time saved) occurred around day 10. These figures are approximate and derived from a single project with a small team already familiar with the model; the crossover point will vary with project complexity, team size, and curation discipline.
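The crossover arithmetic can be expressed as a toy model. Everything here is an assumption layered on the reported ranges: the curation cost uses the midpoint of 35-60 minutes per day, the re-explanation overhead is assumed to fall linearly from its week-1 level to its week-3 floor, and the decay window is a guess. The exact crossover day this model produces is sensitive to those assumptions and is not a claim about the reported figure.

```python
def crossover_day(curation_min_per_day: float = 48.0,
                  baseline_reexplain: float = 150.0,
                  floor_reexplain: float = 15.0,
                  decay_days: int = 20) -> int:
    """First day on which cumulative time saved exceeds cumulative
    curation cost. Re-explanation overhead is assumed to fall linearly
    from baseline to floor over decay_days (an assumption; the report
    gives only endpoint ranges). Returns -1 if no crossover in 33 days."""
    cum_curation = cum_saved = 0.0
    for day in range(1, 34):  # the observed 33-day window
        frac = min(1.0, (day - 1) / decay_days)
        reexplain = baseline_reexplain - frac * (baseline_reexplain - floor_reexplain)
        cum_curation += curation_min_per_day
        cum_saved += baseline_reexplain - reexplain  # time no longer spent
        if cum_saved > cum_curation:
            return day
    return -1
```

With a faster assumed decay or a lower curation cost the crossover moves earlier, which is the main sensitivity the report's caveat points at.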
Several directions are compatible with the model described here:
Section-level deterministic budgeting: Current assembly operates at file granularity. Section-level budgeting would allow finer-grained control (including specific decision records while excluding others within the same file) without sacrificing determinism.
Causal links between decisions: The experience report shows that decisions frequently reference earlier decisions (superseding, extending, or qualifying them). Formal causal links would enable traversal of the decision graph and automatic detection of orphaned or contradictory constraints.
Content-addressed context caches: Five systems in our landscape analysis independently discovered that content hashing provides cache invalidation, integrity verification, and change detection. Applying content addressing to the assembly output would enable efficient cache reuse when the authoritative state has not changed.
Conditional context inclusion: Five systems independently suggest that context entries could carry activation conditions (file patterns, task keywords, or explicit triggers) that control whether they are included in a given assembly. This would reduce the per-session budget cost of large knowledge bases without sacrificing determinism.
Provenance metadata: Linking context entries to the sessions, decisions, or learnings that motivated them would strengthen the audit trail. Optional provenance fields on Markdown entries (session identifier, cause reference, motivation) would be lightweight and compatible with the existing file-based model.
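The conditional-inclusion direction is compatible with determinism precisely because an activation check can be a pure predicate over declared inputs. The sketch below is hypothetical: the condition fields (`file_patterns`, `task_keywords`) are invented for illustration.

```python
from fnmatch import fnmatch

def is_active(conditions: dict, changed_files: list[str], task_text: str) -> bool:
    """Deterministic activation check for a context entry. The entry's
    conditions (file glob patterns, task keywords; hypothetical fields)
    are matched against the session's declared inputs only, so inclusion
    remains a pure function and assembly stays deterministic."""
    patterns = conditions.get("file_patterns", [])
    keywords = conditions.get("task_keywords", [])
    if patterns and not any(fnmatch(f, p) for f in changed_files for p in patterns):
        return False
    if keywords and not any(k.lower() in task_text.lower() for k in keywords):
        return False
    return True  # an entry with no conditions is always active
```

An entry tagged with `{"file_patterns": ["*.go"]}` would then be loaded only in sessions touching Go files, trimming the per-session budget cost without introducing any retrieval heuristic.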
AI-assisted development has treated context as a \"query result\" assembled at the moment of interaction, discarded at the session end. This paper identifies a complementary layer: the persistence of authoritative cognitive state as deterministic, version-controlled artifacts.
The contribution is grounded in three sources of evidence. A landscape analysis of 17 systems reveals five categories of primitives and shows that no existing system provides the combination of human-readability, determinism, zero dependencies, and offline capability that the persistence layer requires. Six design invariants, validated by 56 independent rejection decisions, define the constraints of the design space. An experience report over 389 sessions and 33 days demonstrates compounding returns: later sessions start faster, decisions are not re-derived, and architectural reversals are captured with full context.
The core claim is this: persistent cognitive state enables causal reasoning across time. A system built on this model can explain not only what is true, but why it became true and when it changed.
When context is the state:
Reasoning is reproducible: the same authoritative state, budget, and policy produce the same delivery view.
Knowledge is auditable: decisions are traceable to explicit artifacts with context, rationale, and consequences.
Understanding compounds: each session's curation improves all subsequent sessions.
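The reproducibility property in the first bullet can be made concrete with a small sketch of conditional, deterministic assembly. The `entry` struct and `assemble` function below are illustrative assumptions, not the system's real schema: entries carry an optional glob activation pattern, and the result is sorted so the same inputs always produce the same view.

```go
package main

import (
	"fmt"
	"path/filepath"
	"sort"
)

// entry is a hypothetical context entry with an optional
// activation pattern; an empty pattern means always active.
type entry struct {
	Name    string
	Pattern string
}

// assemble returns the names of the entries active for the given
// touched files, in a fixed order. No wall-clock or randomness is
// involved, so identical inputs yield an identical delivery view.
func assemble(entries []entry, touched []string) []string {
	var active []string
	for _, e := range entries {
		if e.Pattern == "" {
			active = append(active, e.Name)
			continue
		}
		for _, f := range touched {
			// Match against the base name; errors only occur for
			// malformed patterns, which we treat as non-matching.
			if ok, _ := filepath.Match(e.Pattern, filepath.Base(f)); ok {
				active = append(active, e.Name)
				break
			}
		}
	}
	sort.Strings(active)
	return active
}

func main() {
	entries := []entry{
		{Name: "CONVENTIONS.md"},
		{Name: "sql-notes.md", Pattern: "*.sql"},
	}
	fmt.Println(assemble(entries, []string{"migrations/001_init.sql"}))
}
```

Activation conditions like this would trim the per-session budget cost of a large knowledge base while keeping selection fully deterministic and human-inspectable.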
The choice between retrieval-centric workflows and a persistence layer is not a matter of capability but of time horizon. Retrieval optimizes for relevance at the moment of interaction. Persistence optimizes for the durability of understanding across the lifetime of a project.
🐸🖤 "Gooood... let the deterministic context flow through the repository..." - Kermit the Sidious, probably
The 56 rejection decisions referenced in Section 4 were cataloged across all 17 system analyses, grouped by the invariant they would violate. This appendix provides a representative sample (two per invariant) to illustrate the methodology.
Invariant 1: Markdown-on-Filesystem (11 rejections): CrewAI's vector embedding storage was rejected because embeddings are not human-readable, not git-diff-friendly, and require external services. Kindex's knowledge graph as core primitive was rejected because it requires specialized commands to inspect content that could be a text file (`kin show <id>` vs. `cat DECISIONS.md`).
Invariant 2: Zero Runtime Dependencies (13 rejections): Letta/MemGPT's PostgreSQL-backed architecture was rejected because it conflicts with local-first, no-database, single-binary operation. Pachyderm's Kubernetes-based distributed architecture was rejected as the antithesis of a single-binary design for a tool that manages text files.
Invariant 3: Deterministic Assembly (6 rejections): LlamaIndex's embedding-based retrieval as the primary selection mechanism was rejected because it destroys determinism, requires an embedding model, and removes human judgment from the selection process. QubicDB's wall-clock-dependent scoring was rejected because it directly conflicts with the "same inputs produce same output" property.
Invariant 4: Human Authority (6 rejections): Letta/MemGPT's agent self-modification of memory was rejected as fundamentally opposed to human-curated persistence. Claude Code's unstructured auto-memory (where the agent writes freeform notes) was rejected because structured files with defined schemas produce higher-quality persistent context than unconstrained agent output.
Invariant 5: Local-First / Air-Gap Capable (7 rejections): Sweep's cloud-dependent architecture was rejected as fundamentally incompatible with the local-first, offline-capable model. LangGraph's managed cloud deployment was rejected because cloud dependencies for core functionality violate air-gap capability.
Invariant 6: No Default Telemetry (4 rejections): Continue's telemetry-by-default (PostHog) was rejected because it contradicts the local-first, privacy-respecting trust model. CrewAI's global telemetry on import (Scarf tracking pixel) was rejected because it violates user trust and breaks air-gap capability.
The remaining 9 rejections did not map to a specific invariant but were rejected on other architectural grounds: for example, Aider's full-file-content-in-context approach (which defeats token budgeting), AutoGen's multi-agent orchestration as core primitive (scope creep), and Claude Code's 30-day transcript retention limit (institutional knowledge should have no automatic expiration).
Reproducible Builds Project, "Reproducible Builds: Increasing the Integrity of Software Supply Chains", 2017. https://reproducible-builds.org/docs/definition/
S. McIntosh et al., "The Impact of Build System Evolution on Software Quality", ICSE, 2015. https://doi.org/10.1109/ICSE.2015.70
C. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. https://nlp.stanford.edu/IR-book/
M. Nygard, "Documenting Architecture Decisions", Cognitect Blog, 2011. https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions
L. Torvalds et al., Git Internals - Git Objects (content-addressed storage concepts). https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
Kief Morris, Infrastructure as Code, O'Reilly, 2016.
J. Kreps, "The Log: What every software engineer should know about real-time data's unifying abstraction", 2013. https://engineering.linkedin.com/distributed-systems/log
P. Hunt et al., "ZooKeeper: Wait-free coordination for Internet-scale systems", USENIX ATC, 2010. https://www.usenix.org/legacy/event/atc10/tech/full_papers/Hunt.pdf
\ No newline at end of file
diff --git a/specs/ast-audit-contributor-guide.md b/specs/ast-audit-contributor-guide.md
new file mode 100644
index 000000000..b8002510d
--- /dev/null
+++ b/specs/ast-audit-contributor-guide.md
@@ -0,0 +1,30 @@
+---
+title: AST audit contributor guide
+date: 2026-04-03
+status: ready
+---
+
+# AST Audit Contributor Guide
+
+## Problem
+
+Contributors (human and AI) routinely introduce code that violates
+project conventions enforced by `internal/audit/` AST tests. The
+violations are caught at `go test` time, but the fix patterns are
+not documented — contributors must reverse-engineer the convention
+from the test name and error message.
+
+## Solution
+
+A contributor-facing document in `docs/reference/` that catalogs
+every common violation pattern, shows a before/after code example,
+and explains the rationale. Organized by convention category, not
+by test name (contributors think "I have a magic string" not
+"TestNoMagicStrings is failing").
+
+## Scope
+
+- Document only patterns enforced by `internal/audit/` tests
+- Before/after examples drawn from actual commits
+- Link each pattern to its test and the CONVENTIONS.md entry
+- No changes to tests or source code
From 836c3baa0cbad5b78b4d244e000b283f8e45d0ce Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Fri, 3 Apr 2026 22:04:11 -0700
Subject: [PATCH 02/13] fix: migrate 42 magic strings from PR #55 to YAML
assets
All user-facing format strings in write/steering, write/skill,
write/trigger, write/setup, mcp/handler, trigger/, and drift/
now go through desc.Text() with YAML-backed DescKeys.
Added 42 YAML entries in ui.yaml and 42 DescKey constants across
6 config/embed/text files (steering.go, trigger.go new; skill.go,
setup.go, mcp_tool.go, drift.go updated). Also adds predicate
naming convention section to audit-conventions.md.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
docs/reference/audit-conventions.md | 35 ++++++++++
internal/assets/commands/text/ui.yaml | 91 ++++++++++++++++++++++++++
internal/config/embed/text/drift.go | 1 +
internal/config/embed/text/mcp_tool.go | 7 ++
internal/config/embed/text/setup.go | 11 +++-
internal/config/embed/text/skill.go | 9 ++-
internal/config/embed/text/steering.go | 23 +++++++
internal/config/embed/text/trigger.go | 29 ++++++++
internal/drift/check_ext.go | 4 +-
internal/mcp/handler/steering.go | 14 +++-
internal/trigger/discover.go | 6 +-
internal/trigger/runner.go | 10 ++-
internal/write/setup/hook.go | 23 +++++--
internal/write/skill/skill.go | 20 ++++--
internal/write/steering/steering.go | 45 +++++++++----
internal/write/trigger/trigger.go | 50 ++++++++++----
16 files changed, 332 insertions(+), 46 deletions(-)
create mode 100644 internal/config/embed/text/steering.go
create mode 100644 internal/config/embed/text/trigger.go
diff --git a/docs/reference/audit-conventions.md b/docs/reference/audit-conventions.md
index 47b5b9d79..a591a6b91 100644
--- a/docs/reference/audit-conventions.md
+++ b/docs/reference/audit-conventions.md
@@ -702,6 +702,41 @@ func Journal(cmd *cobra.Command, ...) {
---
+## Predicate Naming (no `Is`/`Has`/`Can` prefix)
+
+**Test:** None (manual review convention)
+
+Exported methods that return `bool` must not use `Is`, `Has`, or
+`Can` prefixes. The predicate reads more naturally without them,
+especially at call sites where the package name provides context.
+
+**Before:**
+
+```go
+func IsCompleted(t *Task) bool { ... }
+func HasChildren(n *Node) bool { ... }
+func IsExemptPackage(path string) bool { ... }
+```
+
+**After:**
+
+```go
+func Completed(t *Task) bool { ... }
+func Children(n *Node) bool { ... } // or: ChildCount > 0
+func ExemptPackage(path string) bool { ... }
+```
+
+**Rule:** Drop the prefix. Private helpers may use prefixes when it
+reads more naturally (`isValid` in a local context is fine). This
+convention applies to exported methods and package-level functions.
+See CONVENTIONS.md "Predicates" section.
+
+This is not yet enforced by an AST test — it requires semantic
+understanding of return types and naming intent that makes automated
+detection fragile. Apply during code review.
+
+---
+
## Mixed Visibility
**Test:** `TestNoMixedVisibility`
diff --git a/internal/assets/commands/text/ui.yaml b/internal/assets/commands/text/ui.yaml
index 8d411ffde..b36285a2f 100644
--- a/internal/assets/commands/text/ui.yaml
+++ b/internal/assets/commands/text/ui.yaml
@@ -526,3 +526,94 @@ write.why-label-invariants:
short: "Design Invariants"
help.community-footer:
short: "\nHave a question, bug, or feature request? Join us on Discord:\n https://ctx.ist/discord\n"
+
+write.steering-created:
+ short: 'Created %s'
+write.steering-skipped:
+ short: 'Skipped %s (already exists)'
+write.steering-init-summary:
+ short: "\n%d created, %d skipped"
+write.steering-file-entry:
+ short: '%-20s inclusion=%-7s priority=%-3d tools=%s'
+write.steering-file-count:
+ short: "\n%d steering file(s)"
+write.steering-preview-head:
+ short: 'Steering files matching prompt %q:'
+write.steering-preview-entry:
+ short: ' %-20s inclusion=%-7s priority=%-3d tools=%s'
+write.steering-preview-count:
+ short: "\n%d file(s) would be included"
+write.steering-sync-written:
+ short: 'Written: %s'
+write.steering-sync-skipped:
+ short: 'Skipped: %s'
+write.steering-sync-error:
+ short: 'Error: %s'
+write.steering-sync-summary:
+ short: "\n%d written, %d skipped, %d errors"
+
+write.skill-installed:
+ short: 'Installed %s → %s'
+write.skill-entry-desc:
+ short: ' %-20s %s'
+write.skill-entry:
+ short: ' %s'
+write.skill-count:
+ short: "\n%d skill(s)"
+write.skill-removed:
+ short: 'Removed %s'
+
+write.setup-deploy-complete:
+ short: '%s setup complete.'
+write.setup-deploy-mcp:
+ short: ' MCP server: %s'
+write.setup-deploy-steering:
+ short: ' Steering: %s'
+write.setup-deploy-exists:
+ short: "\u2713 %s already exists (skipped)"
+write.setup-deploy-created:
+ short: "\u2713 Created %s"
+write.setup-deploy-synced:
+ short: "\u2713 Synced steering: %s"
+write.setup-deploy-skip-steer:
+ short: ' Skipped steering: %s (unchanged)'
+
+mcp.steering-section:
+ short: "## %s\n\n%s\n\n"
+mcp.search-hit-line:
+ short: "%s:%d: %s\n"
+mcp.search-no-match:
+ short: 'No matches for %q in %s.'
+
+trigger.warn:
+ short: 'hook %s: %v'
+trigger.error-item:
+ short: '%s: %s'
+trigger.skip-warn:
+ short: 'hook skip %s: %v'
+
+drift.tool-suffix:
+ short: '%s (tool: %s)'
+
+write.trigger-created:
+ short: 'Created %s'
+write.trigger-disabled:
+ short: 'Disabled %s (%s)'
+write.trigger-enabled:
+ short: 'Enabled %s (%s)'
+write.trigger-type-hdr:
+ short: '[%s]'
+write.trigger-entry:
+ short: ' %-20s %-8s %s'
+write.trigger-count:
+ short: '%d hook(s)'
+write.trigger-test-hdr:
+ short: 'Testing %s hooks...'
+write.trigger-test-input:
+ short: "Input:\n%s"
+write.trigger-cancelled:
+ short: 'Cancelled: %s'
+write.trigger-context:
+ short: "Context:\n%s"
+write.trigger-err-line:
+ short: ' %s'
diff --git a/internal/config/embed/text/drift.go b/internal/config/embed/text/drift.go
index 331f0a4f3..32c5069b6 100644
--- a/internal/config/embed/text/drift.go
+++ b/internal/config/embed/text/drift.go
@@ -58,6 +58,7 @@ const (
DescKeyDriftInvalidTool = "drift.invalid-tool"
DescKeyDriftHookNoExec = "drift.hook-no-exec"
DescKeyDriftStaleSyncFile = "drift.stale-sync-file"
+ DescKeyDriftToolSuffix = "drift.tool-suffix"
DescKeyVersionDriftRelayMessage = "version-drift.relay-message"
DescKeyWriteVersionDriftFallback = "write.version-drift-fallback"
)
diff --git a/internal/config/embed/text/mcp_tool.go b/internal/config/embed/text/mcp_tool.go
index 4019df4ec..c98e9c511 100644
--- a/internal/config/embed/text/mcp_tool.go
+++ b/internal/config/embed/text/mcp_tool.go
@@ -45,3 +45,10 @@ const (
DescKeyMCPToolPropSearchQuery = "mcp.tool-prop-search-query"
DescKeyMCPToolPropSummary = "mcp.tool-prop-summary"
)
+
+// DescKeys for MCP handler steering/search output.
+const (
+ DescKeyMCPSteeringSection = "mcp.steering-section"
+ DescKeyMCPSearchHitLine = "mcp.search-hit-line"
+ DescKeyMCPSearchNoMatch = "mcp.search-no-match"
+)
diff --git a/internal/config/embed/text/setup.go b/internal/config/embed/text/setup.go
index 3c3612b10..bc21b5589 100644
--- a/internal/config/embed/text/setup.go
+++ b/internal/config/embed/text/setup.go
@@ -8,6 +8,13 @@ package text
// DescKeys for setup wizard write output.
const (
- DescKeyWriteSetupDone = "write.setup-done"
- DescKeyWriteSetupPrompt = "write.setup-prompt"
+ DescKeyWriteSetupDone = "write.setup-done"
+ DescKeyWriteSetupPrompt = "write.setup-prompt"
+ DescKeyWriteSetupDeployComplete = "write.setup-deploy-complete"
+ DescKeyWriteSetupDeployMCP = "write.setup-deploy-mcp"
+ DescKeyWriteSetupDeploySteering = "write.setup-deploy-steering"
+ DescKeyWriteSetupDeployExists = "write.setup-deploy-exists"
+ DescKeyWriteSetupDeployCreated = "write.setup-deploy-created"
+ DescKeyWriteSetupDeploySynced = "write.setup-deploy-synced"
+ DescKeyWriteSetupDeploySkipSteer = "write.setup-deploy-skip-steer"
)
diff --git a/internal/config/embed/text/skill.go b/internal/config/embed/text/skill.go
index a9a7043db..103338cbe 100644
--- a/internal/config/embed/text/skill.go
+++ b/internal/config/embed/text/skill.go
@@ -8,6 +8,11 @@ package text
// DescKeys for skill display write output.
const (
- DescKeyWriteSkillLine = "write.skill-line"
- DescKeyWriteSkillsHeader = "write.skills-header"
+ DescKeyWriteSkillLine = "write.skill-line"
+ DescKeyWriteSkillsHeader = "write.skills-header"
+ DescKeyWriteSkillInstalled = "write.skill-installed"
+ DescKeyWriteSkillEntryDesc = "write.skill-entry-desc"
+ DescKeyWriteSkillEntry = "write.skill-entry"
+ DescKeyWriteSkillCount = "write.skill-count"
+ DescKeyWriteSkillRemoved = "write.skill-removed"
)
diff --git a/internal/config/embed/text/steering.go b/internal/config/embed/text/steering.go
new file mode 100644
index 000000000..9e583d48a
--- /dev/null
+++ b/internal/config/embed/text/steering.go
@@ -0,0 +1,23 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package text
+
+// DescKeys for steering write output.
+const (
+ DescKeyWriteSteeringCreated = "write.steering-created"
+ DescKeyWriteSteeringSkipped = "write.steering-skipped"
+ DescKeyWriteSteeringInitSummary = "write.steering-init-summary"
+ DescKeyWriteSteeringFileEntry = "write.steering-file-entry"
+ DescKeyWriteSteeringFileCount = "write.steering-file-count"
+ DescKeyWriteSteeringPreviewHead = "write.steering-preview-head"
+ DescKeyWriteSteeringPreviewEntry = "write.steering-preview-entry"
+ DescKeyWriteSteeringPreviewCount = "write.steering-preview-count"
+ DescKeyWriteSteeringSyncWritten = "write.steering-sync-written"
+ DescKeyWriteSteeringSyncSkipped = "write.steering-sync-skipped"
+ DescKeyWriteSteeringSyncError = "write.steering-sync-error"
+ DescKeyWriteSteeringSyncSummary = "write.steering-sync-summary"
+)
diff --git a/internal/config/embed/text/trigger.go b/internal/config/embed/text/trigger.go
new file mode 100644
index 000000000..efe7025a8
--- /dev/null
+++ b/internal/config/embed/text/trigger.go
@@ -0,0 +1,29 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package text
+
+// DescKeys for trigger (hook runner) output.
+const (
+ DescKeyTriggerWarn = "trigger.warn"
+ DescKeyTriggerErrorItem = "trigger.error-item"
+ DescKeyTriggerSkipWarn = "trigger.skip-warn"
+)
+
+// DescKeys for write/trigger display output.
+const (
+ DescKeyWriteTriggerCreated = "write.trigger-created"
+ DescKeyWriteTriggerDisabled = "write.trigger-disabled"
+ DescKeyWriteTriggerEnabled = "write.trigger-enabled"
+ DescKeyWriteTriggerTypeHdr = "write.trigger-type-hdr"
+ DescKeyWriteTriggerEntry = "write.trigger-entry"
+ DescKeyWriteTriggerCount = "write.trigger-count"
+ DescKeyWriteTriggerTestHdr = "write.trigger-test-hdr"
+ DescKeyWriteTriggerTestInput = "write.trigger-test-input"
+ DescKeyWriteTriggerCancelled = "write.trigger-cancelled"
+ DescKeyWriteTriggerContext = "write.trigger-context"
+ DescKeyWriteTriggerErrLine = "write.trigger-err-line"
+)
diff --git a/internal/drift/check_ext.go b/internal/drift/check_ext.go
index d0e115a7c..7ff0bab89 100644
--- a/internal/drift/check_ext.go
+++ b/internal/drift/check_ext.go
@@ -143,7 +143,9 @@ func checkSyncStaleness(report *Report) {
File: name,
Type: IssueStaleSyncFile,
Message: desc.Text(text.DescKeyDriftStaleSyncFile),
- Path: fmt.Sprintf("%s (tool: %s)", name, tool),
+ Path: fmt.Sprintf(
+ desc.Text(text.DescKeyDriftToolSuffix),
+ name, tool),
})
found = true
}
diff --git a/internal/mcp/handler/steering.go b/internal/mcp/handler/steering.go
index a6c7a82ef..1d04586b6 100644
--- a/internal/mcp/handler/steering.go
+++ b/internal/mcp/handler/steering.go
@@ -14,6 +14,8 @@ import (
"path/filepath"
"strings"
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
errMcp "github.com/ActiveMemory/ctx/internal/err/mcp"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
"github.com/ActiveMemory/ctx/internal/rc"
@@ -60,7 +62,9 @@ func (h *Handler) SteeringGet(prompt string) (string, error) {
var sb strings.Builder
for _, sf := range filtered {
- fmt.Fprintf(&sb, "## %s\n\n%s\n\n", sf.Name, sf.Body)
+ fmt.Fprintf(&sb,
+ desc.Text(text.DescKeyMCPSteeringSection),
+ sf.Name, sf.Body)
}
return sb.String(), nil
@@ -105,14 +109,18 @@ func (h *Handler) Search(query string) (string, error) {
lineNum++
line := scanner.Text()
if strings.Contains(strings.ToLower(line), queryLower) {
- fmt.Fprintf(&sb, "%s:%d: %s\n", e.Name(), lineNum, line)
+ fmt.Fprintf(&sb,
+ desc.Text(text.DescKeyMCPSearchHitLine),
+ e.Name(), lineNum, line)
matches++
}
}
}
if matches == 0 {
- return fmt.Sprintf("No matches for %q in %s.", query, h.ContextDir), nil
+ return fmt.Sprintf(
+ desc.Text(text.DescKeyMCPSearchNoMatch),
+ query, h.ContextDir), nil
}
return sb.String(), nil
diff --git a/internal/trigger/discover.go b/internal/trigger/discover.go
index a210c4e01..f5b238cc2 100644
--- a/internal/trigger/discover.go
+++ b/internal/trigger/discover.go
@@ -11,6 +11,8 @@ import (
"path/filepath"
"sort"
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/fs"
"github.com/ActiveMemory/ctx/internal/config/warn"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
@@ -59,7 +61,9 @@ func Discover(hooksDir string) (map[HookType][]HookInfo, error) {
validateErr := ValidatePath(hooksDir, path)
if validateErr != nil {
- ctxLog.Warn("hook skip %s: %v", path, validateErr)
+ ctxLog.Warn(
+ desc.Text(text.DescKeyTriggerSkipWarn),
+ path, validateErr)
continue
}
diff --git a/internal/trigger/runner.go b/internal/trigger/runner.go
index 6722a7e1b..d630a1aca 100644
--- a/internal/trigger/runner.go
+++ b/internal/trigger/runner.go
@@ -11,6 +11,8 @@ import (
"fmt"
"time"
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/token"
errTrigger "github.com/ActiveMemory/ctx/internal/err/trigger"
ctxLog "github.com/ActiveMemory/ctx/internal/log/warn"
@@ -72,8 +74,12 @@ func RunAll(
out, runErr := runOne(h, inputJSON, timeout)
if runErr != nil {
- ctxLog.Warn("hook %s: %v", h.Path, runErr)
- agg.Errors = append(agg.Errors, fmt.Sprintf("%s: %s", h.Path, runErr))
+ ctxLog.Warn(
+ desc.Text(text.DescKeyTriggerWarn),
+ h.Path, runErr)
+ agg.Errors = append(agg.Errors, fmt.Sprintf(
+ desc.Text(text.DescKeyTriggerErrorItem),
+ h.Path, runErr))
continue
}
diff --git a/internal/write/setup/hook.go b/internal/write/setup/hook.go
index 5b26fd46d..c9deac621 100644
--- a/internal/write/setup/hook.go
+++ b/internal/write/setup/hook.go
@@ -320,9 +320,13 @@ func InfoClineIntegration(cmd *cobra.Command) {
// - steeringPath: Path to the steering directory
func DeployComplete(cmd *cobra.Command, tool, mcpPath, steeringPath string) {
cmd.Println()
- cmd.Println(fmt.Sprintf("%s setup complete.", tool))
- cmd.Println(fmt.Sprintf(" MCP server: %s", mcpPath))
- cmd.Println(fmt.Sprintf(" Steering: %s", steeringPath))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeployComplete), tool))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeployMCP), mcpPath))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeploySteering),
+ steeringPath))
}
// DeployFileExists prints that a file already exists and was skipped.
@@ -331,7 +335,8 @@ func DeployComplete(cmd *cobra.Command, tool, mcpPath, steeringPath string) {
// - cmd: Cobra command for output
// - path: Path to the existing file
func DeployFileExists(cmd *cobra.Command, path string) {
- cmd.Println(fmt.Sprintf("\u2713 %s already exists (skipped)", path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeployExists), path))
}
// DeployFileCreated prints that a file was created.
@@ -340,7 +345,8 @@ func DeployFileExists(cmd *cobra.Command, path string) {
// - cmd: Cobra command for output
// - path: Path to the created file
func DeployFileCreated(cmd *cobra.Command, path string) {
- cmd.Println(fmt.Sprintf("\u2713 Created %s", path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeployCreated), path))
}
// DeploySteeringSynced prints that a steering file was synced.
@@ -349,7 +355,8 @@ func DeployFileCreated(cmd *cobra.Command, path string) {
// - cmd: Cobra command for output
// - name: Name of the synced file
func DeploySteeringSynced(cmd *cobra.Command, name string) {
- cmd.Println(fmt.Sprintf("\u2713 Synced steering: %s", name))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeploySynced), name))
}
// DeploySteeringSkipped prints that a steering file was skipped.
@@ -358,7 +365,9 @@ func DeploySteeringSynced(cmd *cobra.Command, name string) {
// - cmd: Cobra command for output
// - name: Name of the skipped file
func DeploySteeringSkipped(cmd *cobra.Command, name string) {
- cmd.Println(fmt.Sprintf(" Skipped steering: %s (unchanged)", name))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSetupDeploySkipSteer),
+ name))
}
// msgNoSteeringToSync is the message when no steering files
diff --git a/internal/write/skill/skill.go b/internal/write/skill/skill.go
index cbe922922..db2d110da 100644
--- a/internal/write/skill/skill.go
+++ b/internal/write/skill/skill.go
@@ -10,6 +10,9 @@ import (
"fmt"
"github.com/spf13/cobra"
+
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
)
// Installed prints confirmation that a skill was installed.
@@ -19,7 +22,9 @@ import (
// - name: Skill name
// - dir: Installation directory
func Installed(cmd *cobra.Command, name, dir string) {
- cmd.Println(fmt.Sprintf("Installed %s → %s", name, dir))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSkillInstalled),
+ name, dir))
}
// msgNoSkills is shown when no skills are installed.
@@ -40,7 +45,9 @@ func NoSkillsFound(cmd *cobra.Command) {
// - name: Skill name
// - description: Skill description
func EntryWithDesc(cmd *cobra.Command, name, description string) {
- cmd.Println(fmt.Sprintf(" %-20s %s", name, description))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSkillEntryDesc),
+ name, description))
}
// Entry prints a skill entry with name only.
@@ -49,7 +56,8 @@ func EntryWithDesc(cmd *cobra.Command, name, description string) {
// - cmd: Cobra command for output
// - name: Skill name
func Entry(cmd *cobra.Command, name string) {
- cmd.Println(fmt.Sprintf(" %s", name))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSkillEntry), name))
}
// Count prints the total skill count.
@@ -58,7 +66,8 @@ func Entry(cmd *cobra.Command, name string) {
// - cmd: Cobra command for output
// - count: Number of skills
func Count(cmd *cobra.Command, count int) {
- cmd.Println(fmt.Sprintf("\n%d skill(s)", count))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSkillCount), count))
}
// Removed prints confirmation that a skill was removed.
@@ -67,5 +76,6 @@ func Count(cmd *cobra.Command, count int) {
// - cmd: Cobra command for output
// - name: Skill name
func Removed(cmd *cobra.Command, name string) {
- cmd.Println(fmt.Sprintf("Removed %s", name))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSkillRemoved), name))
}
diff --git a/internal/write/steering/steering.go b/internal/write/steering/steering.go
index e42c23b76..03da6e019 100644
--- a/internal/write/steering/steering.go
+++ b/internal/write/steering/steering.go
@@ -10,6 +10,9 @@ import (
"fmt"
"github.com/spf13/cobra"
+
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
)
// User-facing messages for steering commands.
@@ -26,7 +29,8 @@ const (
// - cmd: Cobra command for output
// - path: Path to the created file
func Created(cmd *cobra.Command, path string) {
- cmd.Println(fmt.Sprintf("Created %s", path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringCreated), path))
}
// Skipped prints that a steering file was skipped because it exists.
@@ -35,7 +39,8 @@ func Created(cmd *cobra.Command, path string) {
// - cmd: Cobra command for output
// - path: Path to the existing file
func Skipped(cmd *cobra.Command, path string) {
- cmd.Println(fmt.Sprintf("Skipped %s (already exists)", path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringSkipped), path))
}
// InitSummary prints the summary after steering init.
@@ -45,7 +50,9 @@ func Skipped(cmd *cobra.Command, path string) {
// - created: Number of files created
// - skipped: Number of files skipped
func InitSummary(cmd *cobra.Command, created, skipped int) {
- cmd.Println(fmt.Sprintf("\n%d created, %d skipped", created, skipped))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringInitSummary),
+ created, skipped))
}
// NoFilesFound prints a message indicating no steering files exist.
@@ -68,7 +75,8 @@ func FileEntry(
cmd *cobra.Command, name, inclusion string,
priority int, tools string,
) {
- cmd.Println(fmt.Sprintf("%-20s inclusion=%-7s priority=%-3d tools=%s",
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringFileEntry),
name, inclusion, priority, tools))
}
@@ -78,7 +86,8 @@ func FileEntry(
// - cmd: Cobra command for output
// - count: Number of steering files
func FileCount(cmd *cobra.Command, count int) {
- cmd.Println(fmt.Sprintf("\n%d steering file(s)", count))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringFileCount), count))
}
// NoFilesMatch prints a message indicating no files match the prompt.
@@ -95,7 +104,9 @@ func NoFilesMatch(cmd *cobra.Command) {
// - cmd: Cobra command for output
// - prompt: The prompt being matched against
func PreviewHeader(cmd *cobra.Command, prompt string) {
- cmd.Println(fmt.Sprintf("Steering files matching prompt %q:", prompt))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringPreviewHead),
+ prompt))
cmd.Println()
}
@@ -111,7 +122,8 @@ func PreviewEntry(
cmd *cobra.Command, name, inclusion string,
priority int, tools string,
) {
- cmd.Println(fmt.Sprintf(" %-20s inclusion=%-7s priority=%-3d tools=%s",
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringPreviewEntry),
name, inclusion, priority, tools))
}
@@ -121,7 +133,9 @@ func PreviewEntry(
// - cmd: Cobra command for output
// - count: Number of matched files
func PreviewCount(cmd *cobra.Command, count int) {
- cmd.Println(fmt.Sprintf("\n%d file(s) would be included", count))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringPreviewCount),
+ count))
}
// SyncWritten prints that a file was written during sync.
@@ -130,7 +144,9 @@ func PreviewCount(cmd *cobra.Command, count int) {
// - cmd: Cobra command for output
// - name: Name of the written file
func SyncWritten(cmd *cobra.Command, name string) {
- cmd.Println(fmt.Sprintf("Written: %s", name))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringSyncWritten),
+ name))
}
// SyncSkipped prints that a file was skipped during sync.
@@ -139,7 +155,9 @@ func SyncWritten(cmd *cobra.Command, name string) {
// - cmd: Cobra command for output
// - name: Name of the skipped file
func SyncSkipped(cmd *cobra.Command, name string) {
- cmd.Println(fmt.Sprintf("Skipped: %s", name))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringSyncSkipped),
+ name))
}
// SyncError prints a sync error.
@@ -148,7 +166,9 @@ func SyncSkipped(cmd *cobra.Command, name string) {
// - cmd: Cobra command for output
// - errMsg: The error message
func SyncError(cmd *cobra.Command, errMsg string) {
- cmd.Println(fmt.Sprintf("Error: %s", errMsg))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringSyncError),
+ errMsg))
}
// SyncSummary prints the sync summary with counts.
@@ -159,6 +179,7 @@ func SyncError(cmd *cobra.Command, errMsg string) {
// - skipped: Number of files skipped
// - errors: Number of errors
func SyncSummary(cmd *cobra.Command, written, skipped, errors int) {
- cmd.Println(fmt.Sprintf("\n%d written, %d skipped, %d errors",
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteSteeringSyncSummary),
written, skipped, errors))
}
diff --git a/internal/write/trigger/trigger.go b/internal/write/trigger/trigger.go
index 93611ed52..688b310c5 100644
--- a/internal/write/trigger/trigger.go
+++ b/internal/write/trigger/trigger.go
@@ -10,6 +10,9 @@ import (
"fmt"
"github.com/spf13/cobra"
+
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
)
// User-facing messages for hook list and test output.
@@ -28,7 +31,9 @@ const (
// - cmd: Cobra command for output
// - path: Path to the created hook script
func Created(cmd *cobra.Command, path string) {
- cmd.Println(fmt.Sprintf("Created %s", path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerCreated), path,
+ ))
}
// Disabled prints confirmation that a hook was disabled.
@@ -38,7 +43,10 @@ func Created(cmd *cobra.Command, path string) {
// - name: Hook name
// - path: Path to the hook script
func Disabled(cmd *cobra.Command, name, path string) {
- cmd.Println(fmt.Sprintf("Disabled %s (%s)", name, path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerDisabled),
+ name, path,
+ ))
}
// Enabled prints confirmation that a hook was enabled.
@@ -48,7 +56,10 @@ func Disabled(cmd *cobra.Command, name, path string) {
// - name: Hook name
// - path: Path to the hook script
func Enabled(cmd *cobra.Command, name, path string) {
- cmd.Println(fmt.Sprintf("Enabled %s (%s)", name, path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerEnabled),
+ name, path,
+ ))
}
// TypeHeader prints a hook type section header.
@@ -57,7 +68,9 @@ func Enabled(cmd *cobra.Command, name, path string) {
// - cmd: Cobra command for output
// - hookType: The hook type name
func TypeHeader(cmd *cobra.Command, hookType string) {
- cmd.Println(fmt.Sprintf("[%s]", hookType))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerTypeHdr), hookType,
+ ))
}
// Entry prints a single hook entry with name, status, and path.
@@ -68,7 +81,10 @@ func TypeHeader(cmd *cobra.Command, hookType string) {
// - status: "enabled" or "disabled"
// - path: Path to the hook script
func Entry(cmd *cobra.Command, name, status, path string) {
- cmd.Println(fmt.Sprintf(" %-20s %-8s %s", name, status, path))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerEntry),
+ name, status, path,
+ ))
}
// BlankLine prints a blank line. Nil cmd is a no-op.
@@ -96,7 +112,9 @@ func NoHooksFound(cmd *cobra.Command) {
// - cmd: Cobra command for output
// - count: Number of hooks
func Count(cmd *cobra.Command, count int) {
- cmd.Println(fmt.Sprintf("%d hook(s)", count))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerCount), count,
+ ))
}
// TestingHeader prints the header for hook testing output.
@@ -105,7 +123,9 @@ func Count(cmd *cobra.Command, count int) {
// - cmd: Cobra command for output
// - hookType: The hook type being tested
func TestingHeader(cmd *cobra.Command, hookType string) {
- cmd.Println(fmt.Sprintf("Testing %s hooks...", hookType))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerTestHdr), hookType,
+ ))
cmd.Println()
}
@@ -115,7 +135,9 @@ func TestingHeader(cmd *cobra.Command, hookType string) {
// - cmd: Cobra command for output
// - inputJSON: Pretty-printed JSON input
func TestInput(cmd *cobra.Command, inputJSON string) {
- cmd.Println(fmt.Sprintf("Input:\n%s", inputJSON))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerTestInput), inputJSON,
+ ))
cmd.Println()
}
@@ -125,7 +147,9 @@ func TestInput(cmd *cobra.Command, inputJSON string) {
// - cmd: Cobra command for output
// - message: The cancellation reason
func Cancelled(cmd *cobra.Command, message string) {
- cmd.Println(fmt.Sprintf("Cancelled: %s", message))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerCancelled), message,
+ ))
}
// ContextOutput prints context output from hook execution.
@@ -134,7 +158,9 @@ func Cancelled(cmd *cobra.Command, message string) {
// - cmd: Cobra command for output
// - context: The context string from hooks
func ContextOutput(cmd *cobra.Command, context string) {
- cmd.Println(fmt.Sprintf("Context:\n%s", context))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerContext), context,
+ ))
cmd.Println()
}
@@ -152,7 +178,9 @@ func ErrorsHeader(cmd *cobra.Command) {
// - cmd: Cobra command for output
// - errMsg: The error message
func ErrorLine(cmd *cobra.Command, errMsg string) {
- cmd.Println(fmt.Sprintf(" %s", errMsg))
+ cmd.Println(fmt.Sprintf(
+ desc.Text(text.DescKeyWriteTriggerErrLine), errMsg,
+ ))
}
// NoOutput prints a message indicating no output from hooks.
From 827f241f9b752eaec5e57227cfe73e337b2a8558 Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Fri, 3 Apr 2026 22:31:09 -0700
Subject: [PATCH 03/13] docs: document TestDocComments known gap for embed/text
DescKeys
Config const blocks with group docs are exempted from per-constant
doc checks. This lets ~1300 DescKey constants through without
individual docs. Added a comment documenting the gap; tracked as a
task for future tightening.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/audit/doc_comments_test.go | 3 +++
1 file changed, 3 insertions(+)
diff --git a/internal/audit/doc_comments_test.go b/internal/audit/doc_comments_test.go
index 0a1d19392..0cc4f3da4 100644
--- a/internal/audit/doc_comments_test.go
+++ b/internal/audit/doc_comments_test.go
@@ -62,6 +62,9 @@ func TestDocComments(t *testing.T) {
// Config const/var blocks: group doc covers
// all specs. Report once per undocumented
// block, not per constant.
+ // Known gap: embed/text/ DescKey constants
+ // are not self-documenting but are exempted
+ // here. Tracked for future tightening.
if isCfg && d.Lparen.IsValid() &&
(d.Tok == token.CONST ||
d.Tok == token.VAR) {
From 5321f01f364f949e445678f050bcb05963eb58bb Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Fri, 3 Apr 2026 23:09:43 -0700
Subject: [PATCH 04/13] fix: tighten TestDocComments for embed/, add 1716 doc
comments, guard exemptions
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Tighten TestDocComments so config/embed/ packages no longer get a
blanket exemption for const blocks — DescKey names like
DescKeyWriteSetupDeployMCP are not self-documenting. Add
per-constant doc comments to all ~1716 exported constants across
172 files in config/embed/{text,cmd,flag}.
Add "DO NOT widen" guard comments to all 10 exemption/allowlist
data structures across 7 audit test files. These prevent agents
from adding drive-by entries to make tests pass instead of fixing
the code.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/audit/cli_cmd_structure_test.go | 4 +
internal/audit/cross_package_types_test.go | 4 +
internal/audit/dead_exports_test.go | 8 +
internal/audit/doc_comments_test.go | 19 +-
internal/audit/magic_strings_test.go | 8 +
internal/audit/magic_values_test.go | 8 +
internal/audit/string_concat_paths_test.go | 4 +
internal/audit/type_file_convention_test.go | 4 +
internal/config/embed/cmd/base.go | 137 ++++++---
internal/config/embed/cmd/config.go | 12 +-
internal/config/embed/cmd/decision.go | 5 +-
internal/config/embed/cmd/group.go | 22 +-
internal/config/embed/cmd/journal.go | 21 +-
internal/config/embed/cmd/journal_source.go | 16 +-
internal/config/embed/cmd/learning.go | 5 +-
internal/config/embed/cmd/mcp.go | 4 +-
internal/config/embed/cmd/memory.go | 36 ++-
internal/config/embed/cmd/notify.go | 11 +-
internal/config/embed/cmd/pad.go | 53 ++--
internal/config/embed/cmd/pause.go | 4 +-
internal/config/embed/cmd/permission.go | 15 +-
internal/config/embed/cmd/philosophy.go | 4 +-
internal/config/embed/cmd/remind.go | 25 +-
internal/config/embed/cmd/site.go | 8 +-
internal/config/embed/cmd/skill.go | 20 +-
internal/config/embed/cmd/steering.go | 33 ++-
internal/config/embed/cmd/system.go | 278 +++++++++++++-----
internal/config/embed/cmd/task.go | 11 +-
internal/config/embed/cmd/trace.go | 26 +-
internal/config/embed/cmd/trigger.go | 33 ++-
internal/config/embed/flag/add.go | 20 +-
internal/config/embed/flag/agent.go | 13 +-
internal/config/embed/flag/dep.go | 7 +-
internal/config/embed/flag/drift.go | 4 +-
internal/config/embed/flag/dry_run.go | 5 +-
internal/config/embed/flag/flag.go | 36 ++-
internal/config/embed/flag/guide.go | 4 +-
internal/config/embed/flag/initialize.go | 12 +-
internal/config/embed/flag/journal.go | 48 ++-
internal/config/embed/flag/journal_import.go | 29 +-
internal/config/embed/flag/load.go | 4 +-
internal/config/embed/flag/loop.go | 14 +-
internal/config/embed/flag/memory.go | 12 +-
internal/config/embed/flag/notify.go | 11 +-
internal/config/embed/flag/pad.go | 32 +-
internal/config/embed/flag/pause.go | 5 +-
internal/config/embed/flag/remind.go | 8 +-
internal/config/embed/flag/site.go | 5 +-
internal/config/embed/flag/status.go | 4 +-
internal/config/embed/flag/system.go | 90 ++++--
internal/config/embed/flag/trace.go | 14 +-
internal/config/embed/flag/watch.go | 4 +-
internal/config/embed/text/agent.go | 45 ++-
internal/config/embed/text/backup.go | 34 ++-
internal/config/embed/text/block.go | 28 +-
internal/config/embed/text/bootstrap.go | 39 ++-
internal/config/embed/text/change.go | 47 ++-
internal/config/embed/text/check_ceremony.go | 28 +-
internal/config/embed/text/check_context.go | 92 ++++--
internal/config/embed/text/check_journal.go | 20 +-
internal/config/embed/text/check_knowledge.go | 24 +-
internal/config/embed/text/check_map.go | 14 +-
internal/config/embed/text/check_memory.go | 14 +-
.../config/embed/text/check_persistence.go | 50 +++-
internal/config/embed/text/check_reminder.go | 22 +-
internal/config/embed/text/check_resource.go | 22 +-
.../embed/text/check_skill_discovery.go | 10 +-
internal/config/embed/text/check_version.go | 34 ++-
internal/config/embed/text/colummn.go | 2 +
internal/config/embed/text/compact.go | 17 +-
internal/config/embed/text/config.go | 15 +-
internal/config/embed/text/context.go | 40 ++-
internal/config/embed/text/dep.go | 12 +-
internal/config/embed/text/doctor.go | 157 +++++++---
internal/config/embed/text/drift.go | 175 +++++++----
internal/config/embed/text/err_add.go | 19 +-
internal/config/embed/text/err_backup.go | 51 +++-
internal/config/embed/text/err_cli.go | 2 +
internal/config/embed/text/err_config.go | 44 ++-
internal/config/embed/text/err_crypto.go | 52 +++-
internal/config/embed/text/err_dep.go | 6 +-
internal/config/embed/text/err_fs.go | 86 ++++--
internal/config/embed/text/err_hook.go | 74 +++--
internal/config/embed/text/err_http.go | 9 +-
internal/config/embed/text/err_init.go | 29 +-
internal/config/embed/text/err_journal.go | 38 ++-
.../config/embed/text/err_journal_source.go | 22 +-
.../config/embed/text/err_lifecycle_hook.go | 10 +-
internal/config/embed/text/err_memory.go | 81 +++--
internal/config/embed/text/err_notify.go | 18 +-
internal/config/embed/text/err_pad.go | 48 ++-
internal/config/embed/text/err_parse.go | 18 +-
internal/config/embed/text/err_prompt.go | 26 +-
internal/config/embed/text/err_remind.go | 18 +-
internal/config/embed/text/err_session.go | 42 ++-
internal/config/embed/text/err_setup.go | 12 +-
internal/config/embed/text/err_skill.go | 52 +++-
internal/config/embed/text/err_state.go | 8 +-
internal/config/embed/text/err_steering.go | 74 +++--
internal/config/embed/text/err_task.go | 31 +-
internal/config/embed/text/err_time.go | 6 +-
internal/config/embed/text/err_trace.go | 27 +-
internal/config/embed/text/err_validate.go | 40 ++-
internal/config/embed/text/err_write.go | 4 +-
internal/config/embed/text/event.go | 4 +-
internal/config/embed/text/format.go | 70 +++--
internal/config/embed/text/freshness.go | 18 +-
internal/config/embed/text/git.go | 6 +-
internal/config/embed/text/governance.go | 17 +-
internal/config/embed/text/group.go | 25 +-
internal/config/embed/text/guide.go | 7 +-
internal/config/embed/text/heading.go | 63 ++--
internal/config/embed/text/heartbeat.go | 12 +-
internal/config/embed/text/hook.go | 73 +++--
internal/config/embed/text/import.go | 88 ++++--
internal/config/embed/text/initialize.go | 135 +++++++--
internal/config/embed/text/journal.go | 172 ++++++++---
internal/config/embed/text/journal_source.go | 120 ++++++--
internal/config/embed/text/label.go | 22 +-
internal/config/embed/text/label_col.go | 16 +-
internal/config/embed/text/label_hint.go | 5 +-
internal/config/embed/text/label_inline.go | 4 +-
internal/config/embed/text/label_loop.go | 1 +
internal/config/embed/text/label_meta.go | 75 +++--
internal/config/embed/text/label_reason.go | 4 +-
internal/config/embed/text/label_role.go | 4 +-
internal/config/embed/text/label_section.go | 10 +-
internal/config/embed/text/lock.go | 10 +-
internal/config/embed/text/loop.go | 9 +-
internal/config/embed/text/mark.go | 8 +-
internal/config/embed/text/mcp_compact.go | 8 +-
internal/config/embed/text/mcp_context.go | 1 +
internal/config/embed/text/mcp_drift.go | 18 +-
internal/config/embed/text/mcp_err.go | 42 ++-
internal/config/embed/text/mcp_event.go | 6 +-
internal/config/embed/text/mcp_format.go | 25 +-
internal/config/embed/text/mcp_io.go | 4 +-
internal/config/embed/text/mcp_journal.go | 12 +-
internal/config/embed/text/mcp_pending.go | 8 +-
internal/config/embed/text/mcp_prompt.go | 130 ++++++--
internal/config/embed/text/mcp_remind.go | 4 +-
internal/config/embed/text/mcp_res.go | 23 +-
internal/config/embed/text/mcp_session.go | 9 +-
internal/config/embed/text/mcp_status.go | 35 ++-
internal/config/embed/text/mcp_task.go | 15 +-
internal/config/embed/text/mcp_tool.go | 134 ++++++---
internal/config/embed/text/mcp_validate.go | 9 +-
internal/config/embed/text/memory.go | 75 +++--
internal/config/embed/text/message.go | 40 ++-
internal/config/embed/text/nudge.go | 10 +-
internal/config/embed/text/obsidian.go | 16 +-
internal/config/embed/text/pad.go | 126 ++++++--
internal/config/embed/text/pause.go | 13 +-
internal/config/embed/text/philosophy.go | 26 +-
internal/config/embed/text/post_commit.go | 45 ++-
internal/config/embed/text/prune.go | 11 +-
internal/config/embed/text/publish.go | 39 ++-
internal/config/embed/text/reminder.go | 23 +-
internal/config/embed/text/resource.go | 64 ++--
internal/config/embed/text/restore.go | 35 ++-
internal/config/embed/text/setup.go | 32 +-
internal/config/embed/text/site.go | 38 ++-
internal/config/embed/text/skill.go | 19 +-
internal/config/embed/text/stat.go | 7 +-
internal/config/embed/text/status.go | 32 +-
internal/config/embed/text/steering.go | 44 ++-
internal/config/embed/text/summary.go | 25 +-
internal/config/embed/text/sync.go | 64 ++--
internal/config/embed/text/task.go | 42 ++-
internal/config/embed/text/test.go | 11 +-
internal/config/embed/text/text.go | 1 +
internal/config/embed/text/time.go | 47 ++-
internal/config/embed/text/trace.go | 64 +++-
internal/config/embed/text/trigger.go | 45 ++-
internal/config/embed/text/vscode.go | 6 +
internal/config/embed/text/watch.go | 19 +-
internal/config/embed/text/write.go | 31 +-
internal/config/embed/text/zensical.go | 2 +
178 files changed, 4201 insertions(+), 1419 deletions(-)
diff --git a/internal/audit/cli_cmd_structure_test.go b/internal/audit/cli_cmd_structure_test.go
index 996d2161e..1efbede93 100644
--- a/internal/audit/cli_cmd_structure_test.go
+++ b/internal/audit/cli_cmd_structure_test.go
@@ -21,6 +21,10 @@ var allowedCmdFiles = map[string]bool{
"doc.go": true,
}
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// cmdSubdirAllowlist lists cmd/ subdirectories with
// stray files that cannot be moved to core/. This
// should be empty.
diff --git a/internal/audit/cross_package_types_test.go b/internal/audit/cross_package_types_test.go
index 8dc426eb0..39a835380 100644
--- a/internal/audit/cross_package_types_test.go
+++ b/internal/audit/cross_package_types_test.go
@@ -14,6 +14,10 @@ import (
"golang.org/x/tools/go/packages"
)
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// typeExemptPackages lists packages where exported
// types are expected to be used cross-package by
// design (entity, config, proto, etc.).
diff --git a/internal/audit/dead_exports_test.go b/internal/audit/dead_exports_test.go
index ec6878dd7..4515ffab3 100644
--- a/internal/audit/dead_exports_test.go
+++ b/internal/audit/dead_exports_test.go
@@ -26,6 +26,10 @@ import (
// internal and may be used via reflection or are
// genuinely file-scoped helpers.
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// testOnlyExports lists exported symbols that exist
// solely for test usage. The dead-export scanner skips
// test files, so these would otherwise be false
@@ -51,6 +55,10 @@ var testOnlyExports = map[string]bool{
"github.com/ActiveMemory/ctx/internal/task.MatchFull": true,
}
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// linuxOnlyExports lists exported symbols used only from
// _linux.go source files. These appear dead on non-Linux
// builds because go/packages loads only the current
diff --git a/internal/audit/doc_comments_test.go b/internal/audit/doc_comments_test.go
index 0cc4f3da4..0698bf325 100644
--- a/internal/audit/doc_comments_test.go
+++ b/internal/audit/doc_comments_test.go
@@ -59,13 +59,20 @@ func TestDocComments(t *testing.T) {
singleton := !d.Lparen.IsValid()
isCfg := configPackage(pkg.PkgPath)
- // Config const/var blocks: group doc covers
- // all specs. Report once per undocumented
- // block, not per constant.
- // Known gap: embed/text/ DescKey constants
- // are not self-documenting but are exempted
- // here. Tracked for future tightening.
+ // Config const/var blocks (excluding
+ // embed/): group doc covers all specs
+ // because names are self-documenting
+ // (Dir*, File*, Perm*, etc.).
+ //
+ // DO NOT widen this exemption. New code
+ // must have per-constant doc comments.
+ // Widening requires a dedicated PR with
+ // justification — not a drive-by allowlist
+ // change to make tests pass.
if isCfg && d.Lparen.IsValid() &&
+ !strings.Contains(
+ pkg.PkgPath, "config/embed/",
+ ) &&
(d.Tok == token.CONST ||
d.Tok == token.VAR) {
if d.Doc == nil {
diff --git a/internal/audit/magic_strings_test.go b/internal/audit/magic_strings_test.go
index 836b803dc..362f5f007 100644
--- a/internal/audit/magic_strings_test.go
+++ b/internal/audit/magic_strings_test.go
@@ -17,6 +17,10 @@ import (
"golang.org/x/tools/go/packages"
)
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// exemptStrings lists string values always acceptable.
var exemptStrings = map[string]bool{
"": true, // empty string
@@ -27,6 +31,10 @@ var exemptStrings = map[string]bool{
": ": true, // key-value separator
}
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// exemptStringPackages lists package paths fully exempt
// from magic string checks.
var exemptStringPackages = []string{
diff --git a/internal/audit/magic_values_test.go b/internal/audit/magic_values_test.go
index 388f2074a..790f651f8 100644
--- a/internal/audit/magic_values_test.go
+++ b/internal/audit/magic_values_test.go
@@ -13,6 +13,10 @@ import (
"testing"
)
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// exemptIntLiterals lists integer values that are always acceptable.
// 0, 1, -1: universal identity/sentinel values.
// 2, 3: structural constants (split counts, field indices, ternary).
@@ -50,6 +54,10 @@ var strconvFuncs = map[string]bool{
"AppendFloat": true,
}
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// exemptPackagePaths lists package path substrings that are fully
// exempt from magic value checks — config definitions, template
// definitions, and error constructors.
diff --git a/internal/audit/string_concat_paths_test.go b/internal/audit/string_concat_paths_test.go
index c7e5f583c..f3acb684b 100644
--- a/internal/audit/string_concat_paths_test.go
+++ b/internal/audit/string_concat_paths_test.go
@@ -18,6 +18,10 @@ import (
// (case-insensitive), suggest the variable holds a filesystem path.
var pathVarHints = []string{"path", "dir", "folder", "file"}
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// stringConcatPathAllowlist lists known false positives where string
// concatenation is used for non-path purposes (e.g. substring search
// patterns, extension appending).
diff --git a/internal/audit/type_file_convention_test.go b/internal/audit/type_file_convention_test.go
index 90ac260f7..4cfb5ed84 100644
--- a/internal/audit/type_file_convention_test.go
+++ b/internal/audit/type_file_convention_test.go
@@ -16,6 +16,10 @@ import (
"testing"
)
+// DO NOT add entries here to make tests pass. New code must
+// conform to the check. Widening requires a dedicated PR with
+// justification for each entry.
+//
// exemptTypePackages lists package path segments where
// types intentionally do NOT live in types.go. Each
// has a documented reason.
diff --git a/internal/config/embed/cmd/base.go b/internal/config/embed/cmd/base.go
index ce0579d93..be8ecdb7d 100644
--- a/internal/config/embed/cmd/base.go
+++ b/internal/config/embed/cmd/base.go
@@ -8,55 +8,102 @@ package cmd
// Use strings for cobra command registration.
const (
- UseAdd = "add [content]"
- UseAgent = "agent"
- UseChange = "change"
- UseCompact = "compact"
- UseDep = "dep"
- UseDoctor = "doctor"
- UseDrift = "drift"
- UseComplete = "complete "
- UseGuide = "guide"
- UseSetup = "setup "
- UseInit = "init"
- UseLoad = "load"
- UseLoop = "loop"
- UseMcp = "mcp"
- UseMemory = "memory"
- UseNotify = "notify [message]"
- UsePad = "pad"
- UsePause = "pause"
+ // UseAdd is the cobra Use string for the add command.
+ UseAdd = "add [content]"
+ // UseAgent is the cobra Use string for the agent command.
+ UseAgent = "agent"
+ // UseChange is the cobra Use string for the change command.
+ UseChange = "change"
+ // UseCompact is the cobra Use string for the compact command.
+ UseCompact = "compact"
+ // UseDep is the cobra Use string for the dep command.
+ UseDep = "dep"
+ // UseDoctor is the cobra Use string for the doctor command.
+ UseDoctor = "doctor"
+ // UseDrift is the cobra Use string for the drift command.
+ UseDrift = "drift"
+ // UseComplete is the cobra Use string for the complete command.
+ UseComplete = "complete "
+ // UseGuide is the cobra Use string for the guide command.
+ UseGuide = "guide"
+ // UseSetup is the cobra Use string for the setup command.
+ UseSetup = "setup "
+ // UseInit is the cobra Use string for the init command.
+ UseInit = "init"
+ // UseLoad is the cobra Use string for the load command.
+ UseLoad = "load"
+ // UseLoop is the cobra Use string for the loop command.
+ UseLoop = "loop"
+ // UseMcp is the cobra Use string for the mcp command.
+ UseMcp = "mcp"
+ // UseMemory is the cobra Use string for the memory command.
+ UseMemory = "memory"
+ // UseNotify is the cobra Use string for the notify command.
+ UseNotify = "notify [message]"
+ // UsePad is the cobra Use string for the pad command.
+ UsePad = "pad"
+ // UsePause is the cobra Use string for the pause command.
+ UsePause = "pause"
+ // UsePermission is the cobra Use string for the permission command.
UsePermission = "permission"
- UseReindex = "reindex"
- UseRemind = "remind [TEXT]"
- UseResume = "resume"
- UseServe = "serve [directory]"
- UseStatus = "status"
- UseSync = "sync"
- UseSystem = "system"
- UseTask = "task"
- UseWatch = "watch"
- UseWhy = "why [DOCUMENT]"
+ // UseReindex is the cobra Use string for the reindex command.
+ UseReindex = "reindex"
+ // UseRemind is the cobra Use string for the remind command.
+ UseRemind = "remind [TEXT]"
+ // UseResume is the cobra Use string for the resume command.
+ UseResume = "resume"
+ // UseServe is the cobra Use string for the serve command.
+ UseServe = "serve [directory]"
+ // UseStatus is the cobra Use string for the status command.
+ UseStatus = "status"
+ // UseSync is the cobra Use string for the sync command.
+ UseSync = "sync"
+ // UseSystem is the cobra Use string for the system command.
+ UseSystem = "system"
+ // UseTask is the cobra Use string for the task command.
+ UseTask = "task"
+ // UseWatch is the cobra Use string for the watch command.
+ UseWatch = "watch"
+ // UseWhy is the cobra Use string for the why command.
+ UseWhy = "why [DOCUMENT]"
)
// DescKeys for base commands.
const (
- DescKeyAdd = "add"
- DescKeyAgent = "agent"
- DescKeyChange = "change"
- DescKeyCompact = "compact"
- DescKeyComplete = "complete"
- DescKeyCtx = "ctx"
- DescKeyDep = "dep"
- DescKeyDoctor = "doctor"
- DescKeyDrift = "drift"
- DescKeySetup = "setup"
+ // DescKeyAdd is the description key for the add command.
+ DescKeyAdd = "add"
+ // DescKeyAgent is the description key for the agent command.
+ DescKeyAgent = "agent"
+ // DescKeyChange is the description key for the change command.
+ DescKeyChange = "change"
+ // DescKeyCompact is the description key for the compact command.
+ DescKeyCompact = "compact"
+ // DescKeyComplete is the description key for the complete command.
+ DescKeyComplete = "complete"
+ // DescKeyCtx is the description key for the ctx command.
+ DescKeyCtx = "ctx"
+ // DescKeyDep is the description key for the dep command.
+ DescKeyDep = "dep"
+ // DescKeyDoctor is the description key for the doctor command.
+ DescKeyDoctor = "doctor"
+ // DescKeyDrift is the description key for the drift command.
+ DescKeyDrift = "drift"
+ // DescKeySetup is the description key for the setup command.
+ DescKeySetup = "setup"
+ // DescKeyInitialize is the description key for the initialize command.
DescKeyInitialize = "initialize"
- DescKeyLoad = "load"
- DescKeyLoop = "loop"
- DescKeyReindex = "reindex"
- DescKeyServe = "serve"
- DescKeyStatus = "status"
- DescKeySync = "sync"
- DescKeyWatch = "watch"
+ // DescKeyLoad is the description key for the load command.
+ DescKeyLoad = "load"
+ // DescKeyLoop is the description key for the loop command.
+ DescKeyLoop = "loop"
+ // DescKeyReindex is the description key for the reindex command.
+ DescKeyReindex = "reindex"
+ // DescKeyServe is the description key for the serve command.
+ DescKeyServe = "serve"
+ // DescKeyStatus is the description key for the status command.
+ DescKeyStatus = "status"
+ // DescKeySync is the description key for the sync command.
+ DescKeySync = "sync"
+ // DescKeyWatch is the description key for the watch command.
+ DescKeyWatch = "watch"
)
diff --git a/internal/config/embed/cmd/config.go b/internal/config/embed/cmd/config.go
index 054b6e8e6..6e04bfa94 100644
--- a/internal/config/embed/cmd/config.go
+++ b/internal/config/embed/cmd/config.go
@@ -8,16 +8,24 @@ package cmd
// Use strings for config subcommands.
const (
- UseConfig = "config"
+ // UseConfig is the cobra Use string for the config command.
+ UseConfig = "config"
+ // UseConfigSchema is the cobra Use string for the config schema command.
UseConfigSchema = "schema"
+ // UseConfigStatus is the cobra Use string for the config status command.
UseConfigStatus = "status"
+ // UseConfigSwitch is the cobra Use string for the config switch command.
UseConfigSwitch = "switch [dev|base]"
)
// DescKeys for config subcommands.
const (
- DescKeyConfig = "config"
+ // DescKeyConfig is the description key for the config command.
+ DescKeyConfig = "config"
+ // DescKeyConfigSchema is the description key for the config schema command.
DescKeyConfigSchema = "config.schema"
+ // DescKeyConfigStatus is the description key for the config status command.
DescKeyConfigStatus = "config.status"
+ // DescKeyConfigSwitch is the description key for the config switch command.
DescKeyConfigSwitch = "config.switch"
)
diff --git a/internal/config/embed/cmd/decision.go b/internal/config/embed/cmd/decision.go
index 989eb7861..6f37136b6 100644
--- a/internal/config/embed/cmd/decision.go
+++ b/internal/config/embed/cmd/decision.go
@@ -11,6 +11,9 @@ const UseDecision = "decision"
// DescKeys for decision subcommands.
const (
- DescKeyDecision = "decision"
+ // DescKeyDecision is the description key for the decision command.
+ DescKeyDecision = "decision"
+ // DescKeyDecisionReindex is the description key for the decision reindex
+ // command.
DescKeyDecisionReindex = "decision.reindex"
)
diff --git a/internal/config/embed/cmd/group.go b/internal/config/embed/cmd/group.go
index 4358f1756..222b8f383 100644
--- a/internal/config/embed/cmd/group.go
+++ b/internal/config/embed/cmd/group.go
@@ -8,12 +8,20 @@ package cmd
// Command group IDs for organizing help output.
const (
+ // GroupGettingStarted is the command group ID for getting started.
GroupGettingStarted = "getting-started"
- GroupContext = "context"
- GroupArtifacts = "artifacts"
- GroupSessions = "sessions"
- GroupRuntime = "runtime"
- GroupIntegration = "integration"
- GroupDiagnostics = "diagnostics"
- GroupUtilities = "utilities"
+ // GroupContext is the command group ID for context.
+ GroupContext = "context"
+ // GroupArtifacts is the command group ID for artifacts.
+ GroupArtifacts = "artifacts"
+ // GroupSessions is the command group ID for sessions.
+ GroupSessions = "sessions"
+ // GroupRuntime is the command group ID for runtime.
+ GroupRuntime = "runtime"
+ // GroupIntegration is the command group ID for integration.
+ GroupIntegration = "integration"
+ // GroupDiagnostics is the command group ID for diagnostics.
+ GroupDiagnostics = "diagnostics"
+ // GroupUtilities is the command group ID for utilities.
+ GroupUtilities = "utilities"
)
diff --git a/internal/config/embed/cmd/journal.go b/internal/config/embed/cmd/journal.go
index 8ca447a71..d7a586c92 100644
--- a/internal/config/embed/cmd/journal.go
+++ b/internal/config/embed/cmd/journal.go
@@ -8,16 +8,25 @@ package cmd
// Use strings for journal subcommands.
const (
- UseJournal = "journal"
+ // UseJournal is the cobra Use string for the journal command.
+ UseJournal = "journal"
+ // UseJournalObsidian is the cobra Use string for the journal obsidian command.
UseJournalObsidian = "obsidian"
- UseJournalSite = "site"
- UseJournalSource = "source"
+ // UseJournalSite is the cobra Use string for the journal site command.
+ UseJournalSite = "site"
+ // UseJournalSource is the cobra Use string for the journal source command.
+ UseJournalSource = "source"
)
// DescKeys for journal subcommands.
const (
- DescKeyJournal = "journal"
+ // DescKeyJournal is the description key for the journal command.
+ DescKeyJournal = "journal"
+ // DescKeyJournalObsidian is the description key for the journal obsidian
+ // command.
DescKeyJournalObsidian = "journal.obsidian"
- DescKeyJournalSite = "journal.site"
- DescKeyJournalSource = "journal.source"
+ // DescKeyJournalSite is the description key for the journal site command.
+ DescKeyJournalSite = "journal.site"
+ // DescKeyJournalSource is the description key for the journal source command.
+ DescKeyJournalSource = "journal.source"
)
diff --git a/internal/config/embed/cmd/journal_source.go b/internal/config/embed/cmd/journal_source.go
index f8f978da8..f46e93134 100644
--- a/internal/config/embed/cmd/journal_source.go
+++ b/internal/config/embed/cmd/journal_source.go
@@ -8,16 +8,24 @@ package cmd
// Use strings for journal source subcommands.
const (
+ // UseJournalImport is the cobra Use string for the journal import command.
UseJournalImport = "import [session-id]"
- UseJournalLock = "lock "
- UseJournalSync = "sync"
+ // UseJournalLock is the cobra Use string for the journal lock command.
+ UseJournalLock = "lock "
+ // UseJournalSync is the cobra Use string for the journal sync command.
+ UseJournalSync = "sync"
+ // UseJournalUnlock is the cobra Use string for the journal unlock command.
UseJournalUnlock = "unlock "
)
// DescKeys for journal source subcommands.
const (
+ // DescKeyJournalImport is the description key for the journal import command.
DescKeyJournalImport = "journal.import"
- DescKeyJournalLock = "journal.lock"
- DescKeyJournalSync = "journal.sync"
+ // DescKeyJournalLock is the description key for the journal lock command.
+ DescKeyJournalLock = "journal.lock"
+ // DescKeyJournalSync is the description key for the journal sync command.
+ DescKeyJournalSync = "journal.sync"
+ // DescKeyJournalUnlock is the description key for the journal unlock command.
DescKeyJournalUnlock = "journal.unlock"
)
diff --git a/internal/config/embed/cmd/learning.go b/internal/config/embed/cmd/learning.go
index 7076e3bf7..cc6516084 100644
--- a/internal/config/embed/cmd/learning.go
+++ b/internal/config/embed/cmd/learning.go
@@ -11,6 +11,9 @@ const UseLearning = "learning"
// DescKeys for learning subcommands.
const (
- DescKeyLearning = "learning"
+ // DescKeyLearning is the description key for the learning command.
+ DescKeyLearning = "learning"
+ // DescKeyLearningReindex is the description key for the learning reindex
+ // command.
DescKeyLearningReindex = "learning.reindex"
)
diff --git a/internal/config/embed/cmd/mcp.go b/internal/config/embed/cmd/mcp.go
index 597eab830..8ab95834e 100644
--- a/internal/config/embed/cmd/mcp.go
+++ b/internal/config/embed/cmd/mcp.go
@@ -11,6 +11,8 @@ const UseMcpServe = "serve"
// DescKeys for MCP subcommands.
const (
- DescKeyMcp = "mcp"
+ // DescKeyMcp is the description key for the mcp command.
+ DescKeyMcp = "mcp"
+ // DescKeyMcpServe is the description key for the mcp serve command.
DescKeyMcpServe = "mcp.serve"
)
diff --git a/internal/config/embed/cmd/memory.go b/internal/config/embed/cmd/memory.go
index 699c37ee5..2053bdc0e 100644
--- a/internal/config/embed/cmd/memory.go
+++ b/internal/config/embed/cmd/memory.go
@@ -8,21 +8,35 @@ package cmd
// Use strings for memory subcommands.
const (
- UseMemoryDiff = "diff"
- UseMemoryImport = "import"
- UseMemoryPublish = "publish"
- UseMemoryStatus = "status"
- UseMemorySync = "sync"
+ // UseMemoryDiff is the cobra Use string for the memory diff command.
+ UseMemoryDiff = "diff"
+ // UseMemoryImport is the cobra Use string for the memory import command.
+ UseMemoryImport = "import"
+ // UseMemoryPublish is the cobra Use string for the memory publish command.
+ UseMemoryPublish = "publish"
+ // UseMemoryStatus is the cobra Use string for the memory status command.
+ UseMemoryStatus = "status"
+ // UseMemorySync is the cobra Use string for the memory sync command.
+ UseMemorySync = "sync"
+ // UseMemoryUnpublish is the cobra Use string for the memory unpublish command.
UseMemoryUnpublish = "unpublish"
)
// DescKeys for memory subcommands.
const (
- DescKeyMemory = "memory"
- DescKeyMemoryDiff = "memory.diff"
- DescKeyMemoryImport = "memory.import"
- DescKeyMemoryPublish = "memory.publish"
- DescKeyMemoryStatus = "memory.status"
- DescKeyMemorySync = "memory.sync"
+ // DescKeyMemory is the description key for the memory command.
+ DescKeyMemory = "memory"
+ // DescKeyMemoryDiff is the description key for the memory diff command.
+ DescKeyMemoryDiff = "memory.diff"
+ // DescKeyMemoryImport is the description key for the memory import command.
+ DescKeyMemoryImport = "memory.import"
+ // DescKeyMemoryPublish is the description key for the memory publish command.
+ DescKeyMemoryPublish = "memory.publish"
+ // DescKeyMemoryStatus is the description key for the memory status command.
+ DescKeyMemoryStatus = "memory.status"
+ // DescKeyMemorySync is the description key for the memory sync command.
+ DescKeyMemorySync = "memory.sync"
+ // DescKeyMemoryUnpublish is the description key for the memory unpublish
+ // command.
DescKeyMemoryUnpublish = "memory.unpublish"
)
diff --git a/internal/config/embed/cmd/notify.go b/internal/config/embed/cmd/notify.go
index 2b460103e..401959f2c 100644
--- a/internal/config/embed/cmd/notify.go
+++ b/internal/config/embed/cmd/notify.go
@@ -8,13 +8,18 @@ package cmd
// Use strings for notify subcommands.
const (
+ // UseNotifySetup is the cobra Use string for the notify setup command.
UseNotifySetup = "setup"
- UseNotifyTest = "test"
+ // UseNotifyTest is the cobra Use string for the notify test command.
+ UseNotifyTest = "test"
)
// DescKeys for notify subcommands.
const (
- DescKeyNotify = "notify"
+ // DescKeyNotify is the description key for the notify command.
+ DescKeyNotify = "notify"
+ // DescKeyNotifySetup is the description key for the notify setup command.
DescKeyNotifySetup = "notify.setup"
- DescKeyNotifyTest = "notify.test"
+ // DescKeyNotifyTest is the description key for the notify test command.
+ DescKeyNotifyTest = "notify.test"
)
diff --git a/internal/config/embed/cmd/pad.go b/internal/config/embed/cmd/pad.go
index 6a9620304..ba744f164 100644
--- a/internal/config/embed/cmd/pad.go
+++ b/internal/config/embed/cmd/pad.go
@@ -8,27 +8,46 @@ package cmd
// Use strings for pad subcommands.
const (
- UsePadAdd = "add TEXT"
- UsePadEdit = "edit N [TEXT]"
- UsePadExport = "export [DIR]"
- UsePadImport = "import FILE"
- UsePadMerge = "merge FILE..."
- UsePadMv = "mv N M"
+ // UsePadAdd is the cobra Use string for the pad add command.
+ UsePadAdd = "add TEXT"
+ // UsePadEdit is the cobra Use string for the pad edit command.
+ UsePadEdit = "edit N [TEXT]"
+ // UsePadExport is the cobra Use string for the pad export command.
+ UsePadExport = "export [DIR]"
+ // UsePadImport is the cobra Use string for the pad import command.
+ UsePadImport = "import FILE"
+ // UsePadMerge is the cobra Use string for the pad merge command.
+ UsePadMerge = "merge FILE..."
+ // UsePadMv is the cobra Use string for the pad mv command.
+ UsePadMv = "mv N M"
+ // UsePadResolve is the cobra Use string for the pad resolve command.
UsePadResolve = "resolve"
- UsePadRm = "rm N"
- UsePadShow = "show N"
+ // UsePadRm is the cobra Use string for the pad rm command.
+ UsePadRm = "rm N"
+ // UsePadShow is the cobra Use string for the pad show command.
+ UsePadShow = "show N"
)
// DescKeys for pad subcommands.
const (
- DescKeyPad = "pad"
- DescKeyPadAdd = "pad.add"
- DescKeyPadEdit = "pad.edit"
- DescKeyPadExport = "pad.export"
- DescKeyPadImp = "pad.root"
- DescKeyPadMerge = "pad.merge"
- DescKeyPadMv = "pad.mv"
+ // DescKeyPad is the description key for the pad command.
+ DescKeyPad = "pad"
+ // DescKeyPadAdd is the description key for the pad add command.
+ DescKeyPadAdd = "pad.add"
+ // DescKeyPadEdit is the description key for the pad edit command.
+ DescKeyPadEdit = "pad.edit"
+ // DescKeyPadExport is the description key for the pad export command.
+ DescKeyPadExport = "pad.export"
+ // DescKeyPadImp is the description key for the pad import command (note
+ // that its value is "pad.root", not "pad.import").
+ DescKeyPadImp = "pad.root"
+ // DescKeyPadMerge is the description key for the pad merge command.
+ DescKeyPadMerge = "pad.merge"
+ // DescKeyPadMv is the description key for the pad mv command.
+ DescKeyPadMv = "pad.mv"
+ // DescKeyPadResolve is the description key for the pad resolve command.
DescKeyPadResolve = "pad.resolve"
- DescKeyPadRm = "pad.rm"
- DescKeyPadShow = "pad.show"
+ // DescKeyPadRm is the description key for the pad rm command.
+ DescKeyPadRm = "pad.rm"
+ // DescKeyPadShow is the description key for the pad show command.
+ DescKeyPadShow = "pad.show"
)
diff --git a/internal/config/embed/cmd/pause.go b/internal/config/embed/cmd/pause.go
index 1a056c6f9..d5e4b2d14 100644
--- a/internal/config/embed/cmd/pause.go
+++ b/internal/config/embed/cmd/pause.go
@@ -8,6 +8,8 @@ package cmd
// DescKeys for pause subcommands.
const (
- DescKeyPause = "pause"
+ // DescKeyPause is the description key for the pause command.
+ DescKeyPause = "pause"
+ // DescKeyResume is the description key for the resume command.
DescKeyResume = "resume"
)
diff --git a/internal/config/embed/cmd/permission.go b/internal/config/embed/cmd/permission.go
index 225cfb7fd..6198a7756 100644
--- a/internal/config/embed/cmd/permission.go
+++ b/internal/config/embed/cmd/permission.go
@@ -8,13 +8,22 @@ package cmd
// Use strings for permission subcommands.
const (
- UsePermissionRestore = "restore"
+ // UsePermissionRestore is the cobra Use string for the permission restore
+ // command.
+ UsePermissionRestore = "restore"
+ // UsePermissionSnapshot is the cobra Use string for the permission snapshot
+ // command.
UsePermissionSnapshot = "snapshot"
)
// DescKeys for permission subcommands.
const (
- DescKeyPermission = "permission"
- DescKeyPermissionRestore = "permission.restore"
+ // DescKeyPermission is the description key for the permission command.
+ DescKeyPermission = "permission"
+ // DescKeyPermissionRestore is the description key for the permission restore
+ // command.
+ DescKeyPermissionRestore = "permission.restore"
+ // DescKeyPermissionSnapshot is the description key for the permission
+ // snapshot command.
DescKeyPermissionSnapshot = "permission.snapshot"
)
diff --git a/internal/config/embed/cmd/philosophy.go b/internal/config/embed/cmd/philosophy.go
index 7650b2ec7..41b8f43b8 100644
--- a/internal/config/embed/cmd/philosophy.go
+++ b/internal/config/embed/cmd/philosophy.go
@@ -8,6 +8,8 @@ package cmd
// DescKeys for philosophy subcommands.
const (
+ // DescKeyGuide is the description key for the guide command.
DescKeyGuide = "guide"
- DescKeyWhy = "why"
+ // DescKeyWhy is the description key for the why command.
+ DescKeyWhy = "why"
)
diff --git a/internal/config/embed/cmd/remind.go b/internal/config/embed/cmd/remind.go
index 3d5c3c853..27019ac0c 100644
--- a/internal/config/embed/cmd/remind.go
+++ b/internal/config/embed/cmd/remind.go
@@ -8,17 +8,28 @@ package cmd
// Use strings for remind subcommands.
const (
- UseRemindAdd = "add TEXT"
- UseRemindDismiss = "dismiss [ID]"
+ // UseRemindAdd is the cobra Use string for the remind add command.
+ UseRemindAdd = "add TEXT"
+ // UseRemindDismiss is the cobra Use string for the remind dismiss command.
+ UseRemindDismiss = "dismiss [ID]"
+ // UseRemindDismissAlias is the cobra Use string for the remind dismiss alias
+ // command.
UseRemindDismissAlias = "rm"
- UseRemindList = "list"
- UseRemindListAlias = "ls"
+ // UseRemindList is the cobra Use string for the remind list command.
+ UseRemindList = "list"
+ // UseRemindListAlias is the cobra Use string for the remind list alias
+ // command.
+ UseRemindListAlias = "ls"
)
// DescKeys for remind subcommands.
const (
- DescKeyRemind = "remind"
- DescKeyRemindAdd = "remind.add"
+ // DescKeyRemind is the description key for the remind command.
+ DescKeyRemind = "remind"
+ // DescKeyRemindAdd is the description key for the remind add command.
+ DescKeyRemindAdd = "remind.add"
+ // DescKeyRemindDismiss is the description key for the remind dismiss command.
DescKeyRemindDismiss = "remind.dismiss"
- DescKeyRemindList = "remind.list"
+ // DescKeyRemindList is the description key for the remind list command.
+ DescKeyRemindList = "remind.list"
)
diff --git a/internal/config/embed/cmd/site.go b/internal/config/embed/cmd/site.go
index 364f64037..d40dab434 100644
--- a/internal/config/embed/cmd/site.go
+++ b/internal/config/embed/cmd/site.go
@@ -8,12 +8,16 @@ package cmd
// Use strings for site subcommands.
const (
- UseSite = "site"
+ // UseSite is the cobra Use string for the site command.
+ UseSite = "site"
+ // UseSiteFeed is the cobra Use string for the site feed command.
UseSiteFeed = "feed"
)
// DescKeys for site subcommands.
const (
- DescKeySite = "site"
+ // DescKeySite is the description key for the site command.
+ DescKeySite = "site"
+ // DescKeySiteFeed is the description key for the site feed command.
DescKeySiteFeed = "site.feed"
)
diff --git a/internal/config/embed/cmd/skill.go b/internal/config/embed/cmd/skill.go
index da5e2949c..2dfceab35 100644
--- a/internal/config/embed/cmd/skill.go
+++ b/internal/config/embed/cmd/skill.go
@@ -8,16 +8,24 @@ package cmd
// Use strings for skill subcommands.
const (
- UseSkill = "skill"
+ // UseSkill is the cobra Use string for the skill command.
+ UseSkill = "skill"
+ // UseSkillInstall is the cobra Use string for the skill install command.
UseSkillInstall = "install "
- UseSkillList = "list"
- UseSkillRemove = "remove "
+ // UseSkillList is the cobra Use string for the skill list command.
+ UseSkillList = "list"
+ // UseSkillRemove is the cobra Use string for the skill remove command.
+ UseSkillRemove = "remove "
)
// DescKeys for skill subcommands.
const (
- DescKeySkill = "skill"
+ // DescKeySkill is the description key for the skill command.
+ DescKeySkill = "skill"
+ // DescKeySkillInstall is the description key for the skill install command.
DescKeySkillInstall = "skill.install"
- DescKeySkillList = "skill.list"
- DescKeySkillRemove = "skill.remove"
+ // DescKeySkillList is the description key for the skill list command.
+ DescKeySkillList = "skill.list"
+ // DescKeySkillRemove is the description key for the skill remove command.
+ DescKeySkillRemove = "skill.remove"
)
diff --git a/internal/config/embed/cmd/steering.go b/internal/config/embed/cmd/steering.go
index 2810c294d..3fcde9b6e 100644
--- a/internal/config/embed/cmd/steering.go
+++ b/internal/config/embed/cmd/steering.go
@@ -8,20 +8,33 @@ package cmd
// Use strings for steering subcommands.
const (
- UseSteering = "steering"
- UseSteeringAdd = "add "
- UseSteeringList = "list"
+ // UseSteering is the cobra Use string for the steering command.
+ UseSteering = "steering"
+ // UseSteeringAdd is the cobra Use string for the steering add command.
+ UseSteeringAdd = "add "
+ // UseSteeringList is the cobra Use string for the steering list command.
+ UseSteeringList = "list"
+ // UseSteeringPreview is the cobra Use string for the steering preview command.
UseSteeringPreview = "preview "
- UseSteeringInit = "init"
- UseSteeringSync = "sync"
+ // UseSteeringInit is the cobra Use string for the steering init command.
+ UseSteeringInit = "init"
+ // UseSteeringSync is the cobra Use string for the steering sync command.
+ UseSteeringSync = "sync"
)
// DescKeys for steering subcommands.
const (
- DescKeySteering = "steering"
- DescKeySteeringAdd = "steering.add"
- DescKeySteeringList = "steering.list"
+ // DescKeySteering is the description key for the steering command.
+ DescKeySteering = "steering"
+ // DescKeySteeringAdd is the description key for the steering add command.
+ DescKeySteeringAdd = "steering.add"
+ // DescKeySteeringList is the description key for the steering list command.
+ DescKeySteeringList = "steering.list"
+ // DescKeySteeringPreview is the description key for the steering preview
+ // command.
DescKeySteeringPreview = "steering.preview"
- DescKeySteeringInit = "steering.init"
- DescKeySteeringSync = "steering.sync"
+ // DescKeySteeringInit is the description key for the steering init command.
+ DescKeySteeringInit = "steering.init"
+ // DescKeySteeringSync is the description key for the steering sync command.
+ DescKeySteeringSync = "steering.sync"
)
diff --git a/internal/config/embed/cmd/system.go b/internal/config/embed/cmd/system.go
index ab1740ef4..964c8b7fd 100644
--- a/internal/config/embed/cmd/system.go
+++ b/internal/config/embed/cmd/system.go
@@ -8,83 +8,215 @@ package cmd
// Use strings for system subcommands.
const (
- UseSystemBackup = "backup"
+ // UseSystemBackup is the cobra Use string for the system backup command.
+ UseSystemBackup = "backup"
+ // UseSystemBlockDangerousCommands is the cobra Use string for the system
+ // block dangerous commands command.
UseSystemBlockDangerousCommands = "block-dangerous-commands"
- UseSystemBlockNonPathCtx = "block-non-path-ctx"
- UseSystemBootstrap = "bootstrap"
- UseSystemCheckBackupAge = "check-backup-age"
- UseSystemCheckCeremonies = "check-ceremonies"
- UseSystemCheckContextSize = "check-context-size"
- UseSystemCheckFreshness = "check-freshness"
- UseSystemCheckJournal = "check-journal"
- UseSystemCheckKnowledge = "check-knowledge"
- UseSystemCheckMapStaleness = "check-map-staleness"
- UseSystemCheckMemoryDrift = "check-memory-drift"
- UseSystemCheckPersistence = "check-persistence"
- UseSystemCheckSkillDiscovery = "check-skill-discovery"
- UseSystemCheckReminders = "check-reminders"
- UseSystemCheckResources = "check-resources"
- UseSystemCheckTaskCompletion = "check-task-completion"
- UseSystemCheckVersion = "check-version"
- UseSystemContextLoadGate = "context-load-gate"
- UseSystemEvents = "events"
- UseSystemHeartbeat = "heartbeat"
- UseSystemMarkJournal = "mark-journal "
- UseSystemMarkWrappedUp = "mark-wrapped-up"
- UseSystemMessage = "message"
- UseSystemMessageEdit = "edit "
- UseSystemMessageList = "list"
- UseSystemMessageReset = "reset "
- UseSystemMessageShow = "show "
- UseSystemPause = "pause"
- UseSystemPostCommit = "post-commit"
- UseSystemPrune = "prune"
- UseSystemQaReminder = "qa-reminder"
- UseSystemResources = "resources"
- UseSystemResume = "resume"
- UseSystemSessionEvent = "session-event"
- UseSystemSpecsNudge = "specs-nudge"
- UseSystemStats = "stats"
+ // UseSystemBlockNonPathCtx is the cobra Use string for the system
+ // block-non-path-ctx command.
+ UseSystemBlockNonPathCtx = "block-non-path-ctx"
+ // UseSystemBootstrap is the cobra Use string for the system bootstrap command.
+ UseSystemBootstrap = "bootstrap"
+ // UseSystemCheckBackupAge is the cobra Use string for the system check backup
+ // age command.
+ UseSystemCheckBackupAge = "check-backup-age"
+ // UseSystemCheckCeremonies is the cobra Use string for the system check
+ // ceremonies command.
+ UseSystemCheckCeremonies = "check-ceremonies"
+ // UseSystemCheckContextSize is the cobra Use string for the system check
+ // context size command.
+ UseSystemCheckContextSize = "check-context-size"
+ // UseSystemCheckFreshness is the cobra Use string for the system check
+ // freshness command.
+ UseSystemCheckFreshness = "check-freshness"
+ // UseSystemCheckJournal is the cobra Use string for the system check journal
+ // command.
+ UseSystemCheckJournal = "check-journal"
+ // UseSystemCheckKnowledge is the cobra Use string for the system check
+ // knowledge command.
+ UseSystemCheckKnowledge = "check-knowledge"
+ // UseSystemCheckMapStaleness is the cobra Use string for the system check map
+ // staleness command.
+ UseSystemCheckMapStaleness = "check-map-staleness"
+ // UseSystemCheckMemoryDrift is the cobra Use string for the system check
+ // memory drift command.
+ UseSystemCheckMemoryDrift = "check-memory-drift"
+ // UseSystemCheckPersistence is the cobra Use string for the system check
+ // persistence command.
+ UseSystemCheckPersistence = "check-persistence"
+ // UseSystemCheckSkillDiscovery is the cobra Use string for the system check
+ // skill discovery command.
+ UseSystemCheckSkillDiscovery = "check-skill-discovery"
+ // UseSystemCheckReminders is the cobra Use string for the system check
+ // reminders command.
+ UseSystemCheckReminders = "check-reminders"
+ // UseSystemCheckResources is the cobra Use string for the system check
+ // resources command.
+ UseSystemCheckResources = "check-resources"
+ // UseSystemCheckTaskCompletion is the cobra Use string for the system check
+ // task completion command.
+ UseSystemCheckTaskCompletion = "check-task-completion"
+ // UseSystemCheckVersion is the cobra Use string for the system check version
+ // command.
+ UseSystemCheckVersion = "check-version"
+ // UseSystemContextLoadGate is the cobra Use string for the system context
+ // load gate command.
+ UseSystemContextLoadGate = "context-load-gate"
+ // UseSystemEvents is the cobra Use string for the system events command.
+ UseSystemEvents = "events"
+ // UseSystemHeartbeat is the cobra Use string for the system heartbeat command.
+ UseSystemHeartbeat = "heartbeat"
+ // UseSystemMarkJournal is the cobra Use string for the system mark journal
+ // command.
+ UseSystemMarkJournal = "mark-journal "
+ // UseSystemMarkWrappedUp is the cobra Use string for the system mark wrapped
+ // up command.
+ UseSystemMarkWrappedUp = "mark-wrapped-up"
+ // UseSystemMessage is the cobra Use string for the system message command.
+ UseSystemMessage = "message"
+ // UseSystemMessageEdit is the cobra Use string for the system message edit
+ // command.
+ UseSystemMessageEdit = "edit "
+ // UseSystemMessageList is the cobra Use string for the system message list
+ // command.
+ UseSystemMessageList = "list"
+ // UseSystemMessageReset is the cobra Use string for the system message reset
+ // command.
+ UseSystemMessageReset = "reset "
+ // UseSystemMessageShow is the cobra Use string for the system message show
+ // command.
+ UseSystemMessageShow = "show "
+ // UseSystemPause is the cobra Use string for the system pause command.
+ UseSystemPause = "pause"
+ // UseSystemPostCommit is the cobra Use string for the system post commit
+ // command.
+ UseSystemPostCommit = "post-commit"
+ // UseSystemPrune is the cobra Use string for the system prune command.
+ UseSystemPrune = "prune"
+ // UseSystemQaReminder is the cobra Use string for the system QA reminder
+ // command.
+ UseSystemQaReminder = "qa-reminder"
+ // UseSystemResources is the cobra Use string for the system resources command.
+ UseSystemResources = "resources"
+ // UseSystemResume is the cobra Use string for the system resume command.
+ UseSystemResume = "resume"
+ // UseSystemSessionEvent is the cobra Use string for the system session event
+ // command.
+ UseSystemSessionEvent = "session-event"
+ // UseSystemSpecsNudge is the cobra Use string for the system specs nudge
+ // command.
+ UseSystemSpecsNudge = "specs-nudge"
+ // UseSystemStats is the cobra Use string for the system stats command.
+ UseSystemStats = "stats"
)
// DescKeys for system subcommands.
const (
- DescKeySystem = "system"
- DescKeySystemBackup = "system.backup"
+ // DescKeySystem is the description key for the system command.
+ DescKeySystem = "system"
+ // DescKeySystemBackup is the description key for the system backup command.
+ DescKeySystemBackup = "system.backup"
+ // DescKeySystemBlockDangerousCommands is the description key for the system
+ // block dangerous commands command.
DescKeySystemBlockDangerousCommands = "system.blockdangerouscommands"
- DescKeySystemBlockNonPathCtx = "system.blocknonpathctx"
- DescKeySystemBootstrap = "system.bootstrap"
- DescKeySystemCheckBackupAge = "system.checkbackupage"
- DescKeySystemCheckCeremonies = "system.checkceremonies"
- DescKeySystemCheckContextSize = "system.checkcontextsize"
- DescKeySystemCheckFreshness = "system.checkfreshness"
- DescKeySystemCheckJournal = "system.checkjournal"
- DescKeySystemCheckKnowledge = "system.checkknowledge"
- DescKeySystemCheckMapStaleness = "system.checkmapstaleness"
- DescKeySystemCheckMemoryDrift = "system.checkmemorydrift"
- DescKeySystemCheckPersistence = "system.checkpersistence"
- DescKeySystemCheckSkillDiscovery = "system.checkskilldiscovery"
- DescKeySystemCheckReminders = "system.checkreminders"
- DescKeySystemCheckResources = "system.checkresources"
- DescKeySystemCheckTaskCompletion = "system.checktaskcompletion"
- DescKeySystemCheckVersion = "system.checkversion"
- DescKeySystemContextLoadGate = "system.contextloadgate"
- DescKeySystemEvents = "system.events"
- DescKeySystemHeartbeat = "system.heartbeat"
- DescKeySystemMarkJournal = "system.markjournal"
- DescKeySystemMarkWrappedUp = "system.markwrappedup"
- DescKeySystemMessage = "system.message"
- DescKeySystemMessageEdit = "system.message.edit"
- DescKeySystemMessageList = "system.message.list"
- DescKeySystemMessageReset = "system.message.reset"
- DescKeySystemMessageShow = "system.message.show"
- DescKeySystemPause = "system.pause"
- DescKeySystemPostCommit = "system.postcommit"
- DescKeySystemPrune = "system.prune"
- DescKeySystemQaReminder = "system.qareminder"
- DescKeySystemResources = "system.resources"
- DescKeySystemResume = "system.resume"
- DescKeySystemSessionEvent = "system.sessionevent"
- DescKeySystemSpecsNudge = "system.specsnudge"
- DescKeySystemStats = "system.stats"
+ // DescKeySystemBlockNonPathCtx is the description key for the system
+ // block-non-path-ctx command.
+ DescKeySystemBlockNonPathCtx = "system.blocknonpathctx"
+ // DescKeySystemBootstrap is the description key for the system bootstrap
+ // command.
+ DescKeySystemBootstrap = "system.bootstrap"
+ // DescKeySystemCheckBackupAge is the description key for the system check
+ // backup age command.
+ DescKeySystemCheckBackupAge = "system.checkbackupage"
+ // DescKeySystemCheckCeremonies is the description key for the system check
+ // ceremonies command.
+ DescKeySystemCheckCeremonies = "system.checkceremonies"
+ // DescKeySystemCheckContextSize is the description key for the system check
+ // context size command.
+ DescKeySystemCheckContextSize = "system.checkcontextsize"
+ // DescKeySystemCheckFreshness is the description key for the system check
+ // freshness command.
+ DescKeySystemCheckFreshness = "system.checkfreshness"
+ // DescKeySystemCheckJournal is the description key for the system check
+ // journal command.
+ DescKeySystemCheckJournal = "system.checkjournal"
+ // DescKeySystemCheckKnowledge is the description key for the system check
+ // knowledge command.
+ DescKeySystemCheckKnowledge = "system.checkknowledge"
+ // DescKeySystemCheckMapStaleness is the description key for the system check
+ // map staleness command.
+ DescKeySystemCheckMapStaleness = "system.checkmapstaleness"
+ // DescKeySystemCheckMemoryDrift is the description key for the system check
+ // memory drift command.
+ DescKeySystemCheckMemoryDrift = "system.checkmemorydrift"
+ // DescKeySystemCheckPersistence is the description key for the system check
+ // persistence command.
+ DescKeySystemCheckPersistence = "system.checkpersistence"
+ // DescKeySystemCheckSkillDiscovery is the description key for the system
+ // check skill discovery command.
+ DescKeySystemCheckSkillDiscovery = "system.checkskilldiscovery"
+ // DescKeySystemCheckReminders is the description key for the system check
+ // reminders command.
+ DescKeySystemCheckReminders = "system.checkreminders"
+ // DescKeySystemCheckResources is the description key for the system check
+ // resources command.
+ DescKeySystemCheckResources = "system.checkresources"
+ // DescKeySystemCheckTaskCompletion is the description key for the system
+ // check task completion command.
+ DescKeySystemCheckTaskCompletion = "system.checktaskcompletion"
+ // DescKeySystemCheckVersion is the description key for the system check
+ // version command.
+ DescKeySystemCheckVersion = "system.checkversion"
+ // DescKeySystemContextLoadGate is the description key for the system context
+ // load gate command.
+ DescKeySystemContextLoadGate = "system.contextloadgate"
+ // DescKeySystemEvents is the description key for the system events command.
+ DescKeySystemEvents = "system.events"
+ // DescKeySystemHeartbeat is the description key for the system heartbeat
+ // command.
+ DescKeySystemHeartbeat = "system.heartbeat"
+ // DescKeySystemMarkJournal is the description key for the system mark journal
+ // command.
+ DescKeySystemMarkJournal = "system.markjournal"
+ // DescKeySystemMarkWrappedUp is the description key for the system mark
+ // wrapped up command.
+ DescKeySystemMarkWrappedUp = "system.markwrappedup"
+ // DescKeySystemMessage is the description key for the system message command.
+ DescKeySystemMessage = "system.message"
+ // DescKeySystemMessageEdit is the description key for the system message edit
+ // command.
+ DescKeySystemMessageEdit = "system.message.edit"
+ // DescKeySystemMessageList is the description key for the system message list
+ // command.
+ DescKeySystemMessageList = "system.message.list"
+ // DescKeySystemMessageReset is the description key for the system message
+ // reset command.
+ DescKeySystemMessageReset = "system.message.reset"
+ // DescKeySystemMessageShow is the description key for the system message show
+ // command.
+ DescKeySystemMessageShow = "system.message.show"
+ // DescKeySystemPause is the description key for the system pause command.
+ DescKeySystemPause = "system.pause"
+ // DescKeySystemPostCommit is the description key for the system post commit
+ // command.
+ DescKeySystemPostCommit = "system.postcommit"
+ // DescKeySystemPrune is the description key for the system prune command.
+ DescKeySystemPrune = "system.prune"
+ // DescKeySystemQaReminder is the description key for the system QA reminder
+ // command.
+ DescKeySystemQaReminder = "system.qareminder"
+ // DescKeySystemResources is the description key for the system resources
+ // command.
+ DescKeySystemResources = "system.resources"
+ // DescKeySystemResume is the description key for the system resume command.
+ DescKeySystemResume = "system.resume"
+ // DescKeySystemSessionEvent is the description key for the system session
+ // event command.
+ DescKeySystemSessionEvent = "system.sessionevent"
+ // DescKeySystemSpecsNudge is the description key for the system specs nudge
+ // command.
+ DescKeySystemSpecsNudge = "system.specsnudge"
+ // DescKeySystemStats is the description key for the system stats command.
+ DescKeySystemStats = "system.stats"
)
diff --git a/internal/config/embed/cmd/task.go b/internal/config/embed/cmd/task.go
index e022d8b3d..8d0236661 100644
--- a/internal/config/embed/cmd/task.go
+++ b/internal/config/embed/cmd/task.go
@@ -8,13 +8,18 @@ package cmd
// Use strings for task subcommands.
const (
- UseTaskArchive = "archive"
+ // UseTaskArchive is the cobra Use string for the task archive command.
+ UseTaskArchive = "archive"
+ // UseTaskSnapshot is the cobra Use string for the task snapshot command.
UseTaskSnapshot = "snapshot [name]"
)
// DescKeys for task subcommands.
const (
- DescKeyTask = "task"
- DescKeyTaskArchive = "task.archive"
+ // DescKeyTask is the description key for the task command.
+ DescKeyTask = "task"
+ // DescKeyTaskArchive is the description key for the task archive command.
+ DescKeyTaskArchive = "task.archive"
+ // DescKeyTaskSnapshot is the description key for the task snapshot command.
DescKeyTaskSnapshot = "task.snapshot"
)
diff --git a/internal/config/embed/cmd/trace.go b/internal/config/embed/cmd/trace.go
index 4f183ba75..dc4a89adc 100644
--- a/internal/config/embed/cmd/trace.go
+++ b/internal/config/embed/cmd/trace.go
@@ -8,18 +8,28 @@ package cmd
// Use strings for trace subcommands.
const (
- UseTrace = "trace [commit]"
- UseTraceFile = "file "
- UseTraceTag = "tag "
+ // UseTrace is the cobra Use string for the trace command.
+ UseTrace = "trace [commit]"
+ // UseTraceFile is the cobra Use string for the trace file command.
+ UseTraceFile = "file "
+ // UseTraceTag is the cobra Use string for the trace tag command.
+ UseTraceTag = "tag "
+ // UseTraceCollect is the cobra Use string for the trace collect command.
UseTraceCollect = "collect"
- UseTraceHook = "hook "
+ // UseTraceHook is the cobra Use string for the trace hook command.
+ UseTraceHook = "hook "
)
// DescKeys for trace subcommands.
const (
- DescKeyTrace = "trace"
- DescKeyTraceFile = "trace.file"
- DescKeyTraceTag = "trace.tag"
+ // DescKeyTrace is the description key for the trace command.
+ DescKeyTrace = "trace"
+ // DescKeyTraceFile is the description key for the trace file command.
+ DescKeyTraceFile = "trace.file"
+ // DescKeyTraceTag is the description key for the trace tag command.
+ DescKeyTraceTag = "trace.tag"
+ // DescKeyTraceCollect is the description key for the trace collect command.
DescKeyTraceCollect = "trace.collect"
- DescKeyTraceHook = "trace.hook"
+ // DescKeyTraceHook is the description key for the trace hook command.
+ DescKeyTraceHook = "trace.hook"
)
diff --git a/internal/config/embed/cmd/trigger.go b/internal/config/embed/cmd/trigger.go
index 2b0b0b008..deaad99f0 100644
--- a/internal/config/embed/cmd/trigger.go
+++ b/internal/config/embed/cmd/trigger.go
@@ -8,20 +8,33 @@ package cmd
// Use strings for trigger subcommands.
const (
- UseTrigger = "trigger"
- UseTriggerAdd = "add "
- UseTriggerList = "list"
- UseTriggerTest = "test "
- UseTriggerEnable = "enable "
+ // UseTrigger is the cobra Use string for the trigger command.
+ UseTrigger = "trigger"
+ // UseTriggerAdd is the cobra Use string for the trigger add command.
+ UseTriggerAdd = "add "
+ // UseTriggerList is the cobra Use string for the trigger list command.
+ UseTriggerList = "list"
+ // UseTriggerTest is the cobra Use string for the trigger test command.
+ UseTriggerTest = "test "
+ // UseTriggerEnable is the cobra Use string for the trigger enable command.
+ UseTriggerEnable = "enable "
+ // UseTriggerDisable is the cobra Use string for the trigger disable command.
UseTriggerDisable = "disable "
)
// DescKeys for trigger subcommands.
const (
- DescKeyTrigger = "trigger"
- DescKeyTriggerAdd = "trigger.add"
- DescKeyTriggerList = "trigger.list"
- DescKeyTriggerTest = "trigger.test"
- DescKeyTriggerEnable = "trigger.enable"
+ // DescKeyTrigger is the description key for the trigger command.
+ DescKeyTrigger = "trigger"
+ // DescKeyTriggerAdd is the description key for the trigger add command.
+ DescKeyTriggerAdd = "trigger.add"
+ // DescKeyTriggerList is the description key for the trigger list command.
+ DescKeyTriggerList = "trigger.list"
+ // DescKeyTriggerTest is the description key for the trigger test command.
+ DescKeyTriggerTest = "trigger.test"
+ // DescKeyTriggerEnable is the description key for the trigger enable command.
+ DescKeyTriggerEnable = "trigger.enable"
+ // DescKeyTriggerDisable is the description key for the trigger disable
+ // command.
DescKeyTriggerDisable = "trigger.disable"
)
diff --git a/internal/config/embed/flag/add.go b/internal/config/embed/flag/add.go
index d3d3338a2..2ae4d01a2 100644
--- a/internal/config/embed/flag/add.go
+++ b/internal/config/embed/flag/add.go
@@ -8,12 +8,20 @@ package flag
// DescKeys for add command flags.
const (
+ // DescKeyAddApplication is the description key for the add application flag.
DescKeyAddApplication = "add.application"
+ // DescKeyAddConsequence is the description key for the add consequence flag.
DescKeyAddConsequence = "add.consequence"
- DescKeyAddContext = "add.context"
- DescKeyAddFile = "add.file"
- DescKeyAddLesson = "add.lesson"
- DescKeyAddPriority = "add.priority"
- DescKeyAddRationale = "add.rationale"
- DescKeyAddSection = "add.section"
+ // DescKeyAddContext is the description key for the add context flag.
+ DescKeyAddContext = "add.context"
+ // DescKeyAddFile is the description key for the add file flag.
+ DescKeyAddFile = "add.file"
+ // DescKeyAddLesson is the description key for the add lesson flag.
+ DescKeyAddLesson = "add.lesson"
+ // DescKeyAddPriority is the description key for the add priority flag.
+ DescKeyAddPriority = "add.priority"
+ // DescKeyAddRationale is the description key for the add rationale flag.
+ DescKeyAddRationale = "add.rationale"
+ // DescKeyAddSection is the description key for the add section flag.
+ DescKeyAddSection = "add.section"
)
diff --git a/internal/config/embed/flag/agent.go b/internal/config/embed/flag/agent.go
index c493d8721..ca517178a 100644
--- a/internal/config/embed/flag/agent.go
+++ b/internal/config/embed/flag/agent.go
@@ -8,9 +8,14 @@ package flag
// DescKeys for agent command flags.
const (
- DescKeyAgentBudget = "agent.budget"
+ // DescKeyAgentBudget is the description key for the agent budget flag.
+ DescKeyAgentBudget = "agent.budget"
+ // DescKeyAgentCooldown is the description key for the agent cooldown flag.
DescKeyAgentCooldown = "agent.cooldown"
- DescKeyAgentFormat = "agent.format"
- DescKeyAgentSession = "agent.session"
- DescKeyAgentSkill = "agent.skill"
+ // DescKeyAgentFormat is the description key for the agent format flag.
+ DescKeyAgentFormat = "agent.format"
+ // DescKeyAgentSession is the description key for the agent session flag.
+ DescKeyAgentSession = "agent.session"
+ // DescKeyAgentSkill is the description key for the agent skill flag.
+ DescKeyAgentSkill = "agent.skill"
)
diff --git a/internal/config/embed/flag/dep.go b/internal/config/embed/flag/dep.go
index 01ee548a6..456307776 100644
--- a/internal/config/embed/flag/dep.go
+++ b/internal/config/embed/flag/dep.go
@@ -8,7 +8,10 @@ package flag
// DescKeys for dep command flags.
const (
+ // DescKeyDepsExternal is the description key for the deps external flag.
DescKeyDepsExternal = "deps.external"
- DescKeyDepsFormat = "deps.format"
- DescKeyDepsType = "deps.type"
+ // DescKeyDepsFormat is the description key for the deps format flag.
+ DescKeyDepsFormat = "deps.format"
+ // DescKeyDepsType is the description key for the deps type flag.
+ DescKeyDepsType = "deps.type"
)
diff --git a/internal/config/embed/flag/drift.go b/internal/config/embed/flag/drift.go
index e1f363d2c..d0b066a62 100644
--- a/internal/config/embed/flag/drift.go
+++ b/internal/config/embed/flag/drift.go
@@ -8,6 +8,8 @@ package flag
// DescKeys for drift command flags.
const (
- DescKeyDriftFix = "drift.fix"
+ // DescKeyDriftFix is the description key for the drift fix flag.
+ DescKeyDriftFix = "drift.fix"
+ // DescKeyDriftJson is the description key for the drift json flag.
DescKeyDriftJson = "drift.json"
)
diff --git a/internal/config/embed/flag/dry_run.go b/internal/config/embed/flag/dry_run.go
index da33f76c9..d800a2c3e 100644
--- a/internal/config/embed/flag/dry_run.go
+++ b/internal/config/embed/flag/dry_run.go
@@ -8,6 +8,9 @@ package flag
// DescKeys for dry-run mode flags.
const (
- DescKeySyncDryRun = "sync.dry-run"
+ // DescKeySyncDryRun is the description key for the sync dry run flag.
+ DescKeySyncDryRun = "sync.dry-run"
+ // DescKeyTaskArchiveDryRun is the description key for the task archive dry
+ // run flag.
DescKeyTaskArchiveDryRun = "task.archive.dry-run"
)
diff --git a/internal/config/embed/flag/flag.go b/internal/config/embed/flag/flag.go
index 1af103756..5e9a3aeeb 100644
--- a/internal/config/embed/flag/flag.go
+++ b/internal/config/embed/flag/flag.go
@@ -8,15 +8,31 @@ package flag
// DescKeys for shared flag descriptions.
const (
- DescKeyAllowOutsideCwd = "allow-outside-cwd"
- DescKeyChangesSince = "changes.since"
- DescKeyCompactArchive = "compact.archive"
- DescKeyContextDir = "context-dir"
- DescKeyDoctorJson = "doctor.json"
- DescKeyTriggerTestPath = "trigger.test.path"
- DescKeyTriggerTestTool = "trigger.test.tool"
+ // DescKeyAllowOutsideCwd is the description key for the allow outside cwd
+ // flag.
+ DescKeyAllowOutsideCwd = "allow-outside-cwd"
+ // DescKeyChangesSince is the description key for the changes since flag.
+ DescKeyChangesSince = "changes.since"
+ // DescKeyCompactArchive is the description key for the compact archive flag.
+ DescKeyCompactArchive = "compact.archive"
+ // DescKeyContextDir is the description key for the context dir flag.
+ DescKeyContextDir = "context-dir"
+ // DescKeyDoctorJson is the description key for the doctor json flag.
+ DescKeyDoctorJson = "doctor.json"
+ // DescKeyTriggerTestPath is the description key for the trigger test path
+ // flag.
+ DescKeyTriggerTestPath = "trigger.test.path"
+ // DescKeyTriggerTestTool is the description key for the trigger test tool
+ // flag.
+ DescKeyTriggerTestTool = "trigger.test.tool"
+ // DescKeyInitializeCaller is the description key for the initialize caller
+ // flag.
DescKeyInitializeCaller = "initialize.caller"
- DescKeySetupWrite = "setup.write"
- DescKeySteeringSyncAll = "steering.sync.all"
- DescKeyTool = "tool"
+ // DescKeySetupWrite is the description key for the setup write flag.
+ DescKeySetupWrite = "setup.write"
+ // DescKeySteeringSyncAll is the description key for the steering sync all
+ // flag.
+ DescKeySteeringSyncAll = "steering.sync.all"
+ // DescKeyTool is the description key for the tool flag.
+ DescKeyTool = "tool"
)
diff --git a/internal/config/embed/flag/guide.go b/internal/config/embed/flag/guide.go
index e4a3b0bdf..9d5e97855 100644
--- a/internal/config/embed/flag/guide.go
+++ b/internal/config/embed/flag/guide.go
@@ -8,6 +8,8 @@ package flag
// DescKeys for guide command flags.
const (
+ // DescKeyGuideCommands is the description key for the guide commands flag.
DescKeyGuideCommands = "guide.commands"
- DescKeyGuideSkills = "guide.skills"
+ // DescKeyGuideSkills is the description key for the guide skills flag.
+ DescKeyGuideSkills = "guide.skills"
)
diff --git a/internal/config/embed/flag/initialize.go b/internal/config/embed/flag/initialize.go
index be9eccc5c..103c31f35 100644
--- a/internal/config/embed/flag/initialize.go
+++ b/internal/config/embed/flag/initialize.go
@@ -8,8 +8,14 @@ package flag
// DescKeys for init command flags.
const (
- DescKeyInitializeForce = "initialize.force"
- DescKeyInitializeMerge = "initialize.merge"
- DescKeyInitializeMinimal = "initialize.minimal"
+ // DescKeyInitializeForce is the description key for the initialize force flag.
+ DescKeyInitializeForce = "initialize.force"
+ // DescKeyInitializeMerge is the description key for the initialize merge flag.
+ DescKeyInitializeMerge = "initialize.merge"
+ // DescKeyInitializeMinimal is the description key for the initialize minimal
+ // flag.
+ DescKeyInitializeMinimal = "initialize.minimal"
+ // DescKeyInitializeNoPluginEnable is the description key for the initialize
+ // no plugin enable flag.
DescKeyInitializeNoPluginEnable = "initialize.no-plugin-enable"
)
diff --git a/internal/config/embed/flag/journal.go b/internal/config/embed/flag/journal.go
index 279890f2c..3a9d68d0f 100644
--- a/internal/config/embed/flag/journal.go
+++ b/internal/config/embed/flag/journal.go
@@ -8,21 +8,47 @@ package flag
// DescKeys for journal output flags.
const (
+ // DescKeyJournalObsidianOutput is the description key for the journal
+ // obsidian output flag.
DescKeyJournalObsidianOutput = "journal.obsidian.output"
- DescKeyJournalSiteBuild = "journal.site.build"
- DescKeyJournalSiteOutput = "journal.site.output"
- DescKeyJournalSiteServe = "journal.site.serve"
+ // DescKeyJournalSiteBuild is the description key for the journal site build
+ // flag.
+ DescKeyJournalSiteBuild = "journal.site.build"
+ // DescKeyJournalSiteOutput is the description key for the journal site output
+ // flag.
+ DescKeyJournalSiteOutput = "journal.site.output"
+ // DescKeyJournalSiteServe is the description key for the journal site serve
+ // flag.
+ DescKeyJournalSiteServe = "journal.site.serve"
)
// DescKeys for journal source flags.
const (
+ // DescKeyJournalSourceAllProjects is the description key for the journal
+ // source all projects flag.
DescKeyJournalSourceAllProjects = "journal.source.all-projects"
- DescKeyJournalSourceFull = "journal.source.full"
- DescKeyJournalSourceLatest = "journal.source.latest"
- DescKeyJournalSourceLimit = "journal.source.limit"
- DescKeyJournalSourceProject = "journal.source.project"
- DescKeyJournalSourceShow = "journal.source.show"
- DescKeyJournalSourceSince = "journal.source.since"
- DescKeyJournalSourceTool = "journal.source.tool"
- DescKeyJournalSourceUntil = "journal.source.until"
+ // DescKeyJournalSourceFull is the description key for the journal source full
+ // flag.
+ DescKeyJournalSourceFull = "journal.source.full"
+ // DescKeyJournalSourceLatest is the description key for the journal source
+ // latest flag.
+ DescKeyJournalSourceLatest = "journal.source.latest"
+ // DescKeyJournalSourceLimit is the description key for the journal source
+ // limit flag.
+ DescKeyJournalSourceLimit = "journal.source.limit"
+ // DescKeyJournalSourceProject is the description key for the journal source
+ // project flag.
+ DescKeyJournalSourceProject = "journal.source.project"
+ // DescKeyJournalSourceShow is the description key for the journal source show
+ // flag.
+ DescKeyJournalSourceShow = "journal.source.show"
+ // DescKeyJournalSourceSince is the description key for the journal source
+ // since flag.
+ DescKeyJournalSourceSince = "journal.source.since"
+ // DescKeyJournalSourceTool is the description key for the journal source tool
+ // flag.
+ DescKeyJournalSourceTool = "journal.source.tool"
+ // DescKeyJournalSourceUntil is the description key for the journal source
+ // until flag.
+ DescKeyJournalSourceUntil = "journal.source.until"
)
diff --git a/internal/config/embed/flag/journal_import.go b/internal/config/embed/flag/journal_import.go
index d70fe8928..2c4e7fd88 100644
--- a/internal/config/embed/flag/journal_import.go
+++ b/internal/config/embed/flag/journal_import.go
@@ -8,12 +8,27 @@ package flag
// DescKeys for journal import flags.
const (
- DescKeyJournalImportAll = "journal.import.all"
- DescKeyJournalImportAllProjects = "journal.import.all-projects"
- DescKeyJournalImportDryRun = "journal.import.dry-run"
+ // DescKeyJournalImportAll is the description key for the journal import all
+ // flag.
+ DescKeyJournalImportAll = "journal.import.all"
+ // DescKeyJournalImportAllProjects is the description key for the journal
+ // import all projects flag.
+ DescKeyJournalImportAllProjects = "journal.import.all-projects"
+ // DescKeyJournalImportDryRun is the description key for the journal import
+ // dry run flag.
+ DescKeyJournalImportDryRun = "journal.import.dry-run"
+ // DescKeyJournalImportKeepFrontmatter is the description key for the journal
+ // import keep frontmatter flag.
DescKeyJournalImportKeepFrontmatter = "journal.import.keep-frontmatter"
- DescKeyJournalImportRegenerate = "journal.import.regenerate"
- DescKeyJournalImportYes = "journal.import.yes"
- DescKeyJournalLockAll = "journal.lock.all"
- DescKeyJournalUnlockAll = "journal.unlock.all"
+ // DescKeyJournalImportRegenerate is the description key for the journal
+ // import regenerate flag.
+ DescKeyJournalImportRegenerate = "journal.import.regenerate"
+ // DescKeyJournalImportYes is the description key for the journal import yes
+ // flag.
+ DescKeyJournalImportYes = "journal.import.yes"
+ // DescKeyJournalLockAll is the description key for the journal lock all flag.
+ DescKeyJournalLockAll = "journal.lock.all"
+ // DescKeyJournalUnlockAll is the description key for the journal unlock all
+ // flag.
+ DescKeyJournalUnlockAll = "journal.unlock.all"
)
diff --git a/internal/config/embed/flag/load.go b/internal/config/embed/flag/load.go
index 091f1e327..ced7ef80e 100644
--- a/internal/config/embed/flag/load.go
+++ b/internal/config/embed/flag/load.go
@@ -8,6 +8,8 @@ package flag
// DescKeys for load command flags.
const (
+ // DescKeyLoadBudget is the description key for the load budget flag.
DescKeyLoadBudget = "load.budget"
- DescKeyLoadRaw = "load.raw"
+ // DescKeyLoadRaw is the description key for the load raw flag.
+ DescKeyLoadRaw = "load.raw"
)
diff --git a/internal/config/embed/flag/loop.go b/internal/config/embed/flag/loop.go
index 09019bf66..6c65c639d 100644
--- a/internal/config/embed/flag/loop.go
+++ b/internal/config/embed/flag/loop.go
@@ -8,9 +8,15 @@ package flag
// DescKeys for loop command flags.
const (
- DescKeyLoopCompletion = "loop.completion"
+ // DescKeyLoopCompletion is the description key for the loop completion flag.
+ DescKeyLoopCompletion = "loop.completion"
+ // DescKeyLoopMaxIterations is the description key for the loop max iterations
+ // flag.
DescKeyLoopMaxIterations = "loop.max-iterations"
- DescKeyLoopOutput = "loop.output"
- DescKeyLoopPrompt = "loop.prompt"
- DescKeyLoopTool = "loop.tool"
+ // DescKeyLoopOutput is the description key for the loop output flag.
+ DescKeyLoopOutput = "loop.output"
+ // DescKeyLoopPrompt is the description key for the loop prompt flag.
+ DescKeyLoopPrompt = "loop.prompt"
+ // DescKeyLoopTool is the description key for the loop tool flag.
+ DescKeyLoopTool = "loop.tool"
)
diff --git a/internal/config/embed/flag/memory.go b/internal/config/embed/flag/memory.go
index 3523653ee..8881491ca 100644
--- a/internal/config/embed/flag/memory.go
+++ b/internal/config/embed/flag/memory.go
@@ -8,8 +8,16 @@ package flag
// DescKeys for memory command flags.
const (
- DescKeyMemoryImportDryRun = "memory.import.dry-run"
+ // DescKeyMemoryImportDryRun is the description key for the memory import dry
+ // run flag.
+ DescKeyMemoryImportDryRun = "memory.import.dry-run"
+ // DescKeyMemoryPublishBudget is the description key for the memory publish
+ // budget flag.
DescKeyMemoryPublishBudget = "memory.publish.budget"
+ // DescKeyMemoryPublishDryRun is the description key for the memory publish
+ // dry run flag.
DescKeyMemoryPublishDryRun = "memory.publish.dry-run"
- DescKeyMemorySyncDryRun = "memory.sync.dry-run"
+ // DescKeyMemorySyncDryRun is the description key for the memory sync dry run
+ // flag.
+ DescKeyMemorySyncDryRun = "memory.sync.dry-run"
)
diff --git a/internal/config/embed/flag/notify.go b/internal/config/embed/flag/notify.go
index c6ec00ef7..1ce919c69 100644
--- a/internal/config/embed/flag/notify.go
+++ b/internal/config/embed/flag/notify.go
@@ -8,8 +8,13 @@ package flag
// DescKeys for notify command flags.
const (
- DescKeyNotifyEvent = "notify.event"
- DescKeyNotifyHook = "notify.hook"
+ // DescKeyNotifyEvent is the description key for the notify event flag.
+ DescKeyNotifyEvent = "notify.event"
+ // DescKeyNotifyHook is the description key for the notify hook flag.
+ DescKeyNotifyHook = "notify.hook"
+ // DescKeyNotifySessionId is the description key for the notify session id
+ // flag.
DescKeyNotifySessionId = "notify.session-id"
- DescKeyNotifyVariant = "notify.variant"
+ // DescKeyNotifyVariant is the description key for the notify variant flag.
+ DescKeyNotifyVariant = "notify.variant"
)
diff --git a/internal/config/embed/flag/pad.go b/internal/config/embed/flag/pad.go
index a8e20e050..5821b4df3 100644
--- a/internal/config/embed/flag/pad.go
+++ b/internal/config/embed/flag/pad.go
@@ -8,15 +8,27 @@ package flag
// DescKeys for pad command flags.
const (
- DescKeyPadAddFile = "pad.add.file"
- DescKeyPadEditAppend = "pad.edit.append"
- DescKeyPadEditFile = "pad.edit.file"
- DescKeyPadEditLabel = "pad.edit.label"
- DescKeyPadEditPrepend = "pad.edit.prepend"
+ // DescKeyPadAddFile is the description key for the pad add file flag.
+ DescKeyPadAddFile = "pad.add.file"
+ // DescKeyPadEditAppend is the description key for the pad edit append flag.
+ DescKeyPadEditAppend = "pad.edit.append"
+ // DescKeyPadEditFile is the description key for the pad edit file flag.
+ DescKeyPadEditFile = "pad.edit.file"
+ // DescKeyPadEditLabel is the description key for the pad edit label flag.
+ DescKeyPadEditLabel = "pad.edit.label"
+ // DescKeyPadEditPrepend is the description key for the pad edit prepend flag.
+ DescKeyPadEditPrepend = "pad.edit.prepend"
+ // DescKeyPadExportDryRun is the description key for the pad export dry run
+ // flag.
DescKeyPadExportDryRun = "pad.export.dry-run"
- DescKeyPadExportForce = "pad.export.force"
- DescKeyPadImpBlob = "pad.root.blob"
- DescKeyPadMergeDryRun = "pad.merge.dry-run"
- DescKeyPadMergeKey = "pad.merge.key"
- DescKeyPadShowOut = "pad.show.out"
+ // DescKeyPadExportForce is the description key for the pad export force flag.
+ DescKeyPadExportForce = "pad.export.force"
+ // DescKeyPadImpBlob is the description key for the pad root blob flag.
+ DescKeyPadImpBlob = "pad.root.blob"
+ // DescKeyPadMergeDryRun is the description key for the pad merge dry run flag.
+ DescKeyPadMergeDryRun = "pad.merge.dry-run"
+ // DescKeyPadMergeKey is the description key for the pad merge key flag.
+ DescKeyPadMergeKey = "pad.merge.key"
+ // DescKeyPadShowOut is the description key for the pad show out flag.
+ DescKeyPadShowOut = "pad.show.out"
)
diff --git a/internal/config/embed/flag/pause.go b/internal/config/embed/flag/pause.go
index d5c06cf4d..fa5fa3f62 100644
--- a/internal/config/embed/flag/pause.go
+++ b/internal/config/embed/flag/pause.go
@@ -8,6 +8,9 @@ package flag
// DescKeys for pause command flags.
const (
- DescKeyPauseSessionId = "pause.session-id"
+ // DescKeyPauseSessionId is the description key for the pause session id flag.
+ DescKeyPauseSessionId = "pause.session-id"
+ // DescKeyResumeSessionId is the description key for the resume session id
+ // flag.
DescKeyResumeSessionId = "resume.session-id"
)
diff --git a/internal/config/embed/flag/remind.go b/internal/config/embed/flag/remind.go
index 6b57e8a65..ad1467105 100644
--- a/internal/config/embed/flag/remind.go
+++ b/internal/config/embed/flag/remind.go
@@ -8,7 +8,11 @@ package flag
// DescKeys for remind command flags.
const (
- DescKeyRemindAddAfter = "remind.add.after"
- DescKeyRemindAfter = "remind.after"
+ // DescKeyRemindAddAfter is the description key for the remind add after flag.
+ DescKeyRemindAddAfter = "remind.add.after"
+ // DescKeyRemindAfter is the description key for the remind after flag.
+ DescKeyRemindAfter = "remind.after"
+ // DescKeyRemindDismissAll is the description key for the remind dismiss all
+ // flag.
DescKeyRemindDismissAll = "remind.dismiss.all"
)
diff --git a/internal/config/embed/flag/site.go b/internal/config/embed/flag/site.go
index a234085e9..fe397f8fc 100644
--- a/internal/config/embed/flag/site.go
+++ b/internal/config/embed/flag/site.go
@@ -8,6 +8,9 @@ package flag
// DescKeys for site command flags.
const (
+ // DescKeySiteFeedBaseUrl is the description key for the site feed base url
+ // flag.
DescKeySiteFeedBaseUrl = "site.feed.base-url"
- DescKeySiteFeedOut = "site.feed.out"
+ // DescKeySiteFeedOut is the description key for the site feed out flag.
+ DescKeySiteFeedOut = "site.feed.out"
)
diff --git a/internal/config/embed/flag/status.go b/internal/config/embed/flag/status.go
index 372e24ea2..c38464c60 100644
--- a/internal/config/embed/flag/status.go
+++ b/internal/config/embed/flag/status.go
@@ -8,6 +8,8 @@ package flag
// DescKeys for status command flags.
const (
- DescKeyStatusJson = "status.json"
+ // DescKeyStatusJson is the description key for the status json flag.
+ DescKeyStatusJson = "status.json"
+ // DescKeyStatusVerbose is the description key for the status verbose flag.
DescKeyStatusVerbose = "status.verbose"
)
diff --git a/internal/config/embed/flag/system.go b/internal/config/embed/flag/system.go
index a5f5190b9..7bb2f7f47 100644
--- a/internal/config/embed/flag/system.go
+++ b/internal/config/embed/flag/system.go
@@ -8,27 +8,73 @@ package flag
// DescKeys for system command flags.
const (
- DescKeySystemBackupJson = "system.backup.json"
- DescKeySystemBackupScope = "system.backup.scope"
- DescKeySystemBootstrapJson = "system.bootstrap.json"
- DescKeySystemBootstrapQuiet = "system.bootstrap.quiet"
- DescKeySystemEventsAll = "system.events.all"
- DescKeySystemEventsEvent = "system.events.event"
- DescKeySystemEventsHook = "system.events.hook"
- DescKeySystemEventsJson = "system.events.json"
- DescKeySystemEventsLast = "system.events.last"
- DescKeySystemEventsSession = "system.events.session"
- DescKeySystemMarkJournalCheck = "system.markjournal.check"
- DescKeySystemMessageJson = "system.message.json"
- DescKeySystemPauseSessionId = "system.pause.session-id"
- DescKeySystemPruneDays = "system.prune.days"
- DescKeySystemPruneDryRun = "system.prune.dry-run"
- DescKeySystemResourcesJson = "system.resources.json"
- DescKeySystemResumeSessionId = "system.resume.session-id"
+ // DescKeySystemBackupJson is the description key for the system backup json
+ // flag.
+ DescKeySystemBackupJson = "system.backup.json"
+ // DescKeySystemBackupScope is the description key for the system backup scope
+ // flag.
+ DescKeySystemBackupScope = "system.backup.scope"
+ // DescKeySystemBootstrapJson is the description key for the system bootstrap
+ // json flag.
+ DescKeySystemBootstrapJson = "system.bootstrap.json"
+ // DescKeySystemBootstrapQuiet is the description key for the system bootstrap
+ // quiet flag.
+ DescKeySystemBootstrapQuiet = "system.bootstrap.quiet"
+ // DescKeySystemEventsAll is the description key for the system events all
+ // flag.
+ DescKeySystemEventsAll = "system.events.all"
+ // DescKeySystemEventsEvent is the description key for the system events event
+ // flag.
+ DescKeySystemEventsEvent = "system.events.event"
+ // DescKeySystemEventsHook is the description key for the system events hook
+ // flag.
+ DescKeySystemEventsHook = "system.events.hook"
+ // DescKeySystemEventsJson is the description key for the system events json
+ // flag.
+ DescKeySystemEventsJson = "system.events.json"
+ // DescKeySystemEventsLast is the description key for the system events last
+ // flag.
+ DescKeySystemEventsLast = "system.events.last"
+ // DescKeySystemEventsSession is the description key for the system events
+ // session flag.
+ DescKeySystemEventsSession = "system.events.session"
+ // DescKeySystemMarkJournalCheck is the description key for the system mark
+ // journal check flag.
+ DescKeySystemMarkJournalCheck = "system.markjournal.check"
+ // DescKeySystemMessageJson is the description key for the system message json
+ // flag.
+ DescKeySystemMessageJson = "system.message.json"
+ // DescKeySystemPauseSessionId is the description key for the system pause
+ // session id flag.
+ DescKeySystemPauseSessionId = "system.pause.session-id"
+ // DescKeySystemPruneDays is the description key for the system prune days
+ // flag.
+ DescKeySystemPruneDays = "system.prune.days"
+ // DescKeySystemPruneDryRun is the description key for the system prune dry
+ // run flag.
+ DescKeySystemPruneDryRun = "system.prune.dry-run"
+ // DescKeySystemResourcesJson is the description key for the system resources
+ // json flag.
+ DescKeySystemResourcesJson = "system.resources.json"
+ // DescKeySystemResumeSessionId is the description key for the system resume
+ // session id flag.
+ DescKeySystemResumeSessionId = "system.resume.session-id"
+ // DescKeySystemSessionEventCaller is the description key for the system
+ // session event caller flag.
DescKeySystemSessionEventCaller = "system.sessionevent.caller"
- DescKeySystemSessionEventType = "system.sessionevent.type"
- DescKeySystemStatsFollow = "system.stats.follow"
- DescKeySystemStatsJson = "system.stats.json"
- DescKeySystemStatsLast = "system.stats.last"
- DescKeySystemStatsSession = "system.stats.session"
+ // DescKeySystemSessionEventType is the description key for the system session
+ // event type flag.
+ DescKeySystemSessionEventType = "system.sessionevent.type"
+ // DescKeySystemStatsFollow is the description key for the system stats follow
+ // flag.
+ DescKeySystemStatsFollow = "system.stats.follow"
+ // DescKeySystemStatsJson is the description key for the system stats json
+ // flag.
+ DescKeySystemStatsJson = "system.stats.json"
+ // DescKeySystemStatsLast is the description key for the system stats last
+ // flag.
+ DescKeySystemStatsLast = "system.stats.last"
+ // DescKeySystemStatsSession is the description key for the system stats
+ // session flag.
+ DescKeySystemStatsSession = "system.stats.session"
)
diff --git a/internal/config/embed/flag/trace.go b/internal/config/embed/flag/trace.go
index 38684db50..aa20ef06e 100644
--- a/internal/config/embed/flag/trace.go
+++ b/internal/config/embed/flag/trace.go
@@ -8,9 +8,15 @@ package flag
// DescKeys for trace command flags.
const (
- DescKeyTraceLast = "trace.last"
- DescKeyTraceJSON = "trace.json"
- DescKeyTraceFileLast = "trace.file.last"
- DescKeyTraceTagNote = "trace.tag.note"
+ // DescKeyTraceLast is the description key for the trace last flag.
+ DescKeyTraceLast = "trace.last"
+ // DescKeyTraceJSON is the description key for the trace json flag.
+ DescKeyTraceJSON = "trace.json"
+ // DescKeyTraceFileLast is the description key for the trace file last flag.
+ DescKeyTraceFileLast = "trace.file.last"
+ // DescKeyTraceTagNote is the description key for the trace tag note flag.
+ DescKeyTraceTagNote = "trace.tag.note"
+ // DescKeyTraceCollectRecord is the description key for the trace collect
+ // record flag.
DescKeyTraceCollectRecord = "trace.collect.record"
)
diff --git a/internal/config/embed/flag/watch.go b/internal/config/embed/flag/watch.go
index 2f4801152..45a00f208 100644
--- a/internal/config/embed/flag/watch.go
+++ b/internal/config/embed/flag/watch.go
@@ -8,6 +8,8 @@ package flag
// DescKeys for watch command flags.
const (
+ // DescKeyWatchDryRun is the description key for the watch dry run flag.
DescKeyWatchDryRun = "watch.dry-run"
- DescKeyWatchLog = "watch.log"
+ // DescKeyWatchLog is the description key for the watch log flag.
+ DescKeyWatchLog = "watch.log"
)
diff --git a/internal/config/embed/text/agent.go b/internal/config/embed/text/agent.go
index bcdedb2b3..d9a229155 100644
--- a/internal/config/embed/text/agent.go
+++ b/internal/config/embed/text/agent.go
@@ -8,20 +8,43 @@ package text
// DescKeys for agent context.
const (
- DescKeyAgentInstruction = "agent.instruction"
- DescKeyAgentPacketTitle = "agent.packet-title"
- DescKeyAgentPacketMeta = "agent.packet-meta"
- DescKeyAgentSectionReadOrder = "agent.section-read-order"
+ // DescKeyAgentInstruction is the text key for agent instruction messages.
+ DescKeyAgentInstruction = "agent.instruction"
+ // DescKeyAgentPacketTitle is the text key for agent packet title messages.
+ DescKeyAgentPacketTitle = "agent.packet-title"
+ // DescKeyAgentPacketMeta is the text key for agent packet meta messages.
+ DescKeyAgentPacketMeta = "agent.packet-meta"
+ // DescKeyAgentSectionReadOrder is the text key for agent section read order
+ // messages.
+ DescKeyAgentSectionReadOrder = "agent.section-read-order"
+ // DescKeyAgentSectionConstitution is the text key for agent section
+ // constitution messages.
DescKeyAgentSectionConstitution = "agent.section-constitution"
- DescKeyAgentSectionTasks = "agent.section-tasks"
- DescKeyAgentSectionConventions = "agent.section-conventions"
- DescKeyAgentSectionDecisions = "agent.section-decisions"
- DescKeyAgentSectionLearnings = "agent.section-learnings"
- DescKeyAgentSectionSummaries = "agent.section-summaries"
+ // DescKeyAgentSectionTasks is the text key for agent section tasks messages.
+ DescKeyAgentSectionTasks = "agent.section-tasks"
+ // DescKeyAgentSectionConventions is the text key for agent section
+ // conventions messages.
+ DescKeyAgentSectionConventions = "agent.section-conventions"
+ // DescKeyAgentSectionDecisions is the text key for agent section decisions
+ // messages.
+ DescKeyAgentSectionDecisions = "agent.section-decisions"
+ // DescKeyAgentSectionLearnings is the text key for agent section learnings
+ // messages.
+ DescKeyAgentSectionLearnings = "agent.section-learnings"
+ // DescKeyAgentSectionSummaries is the text key for agent section summaries
+ // messages.
+ DescKeyAgentSectionSummaries = "agent.section-summaries"
+ // DescKeyAgentSectionSteering is the text key for agent section steering
+ // messages.
DescKeyAgentSectionSteering = "agent.section-steering"
- DescKeyAgentSectionSkill = "agent.section-skill"
+ // DescKeyAgentSectionSkill is the text key for agent section skill messages.
+ DescKeyAgentSectionSkill = "agent.section-skill"
- DescKeyWriteAgentBulletItem = "write.agent-bullet-item"
+ // DescKeyWriteAgentBulletItem is the text key for write agent bullet item
+ // messages.
+ DescKeyWriteAgentBulletItem = "write.agent-bullet-item"
+ // DescKeyWriteAgentNumberedItem is the text key for write agent numbered item
+ // messages.
DescKeyWriteAgentNumberedItem = "write.agent-numbered-item"
)
diff --git a/internal/config/embed/text/backup.go b/internal/config/embed/text/backup.go
index 2777de976..7bcfd8ae7 100644
--- a/internal/config/embed/text/backup.go
+++ b/internal/config/embed/text/backup.go
@@ -8,24 +8,40 @@ package text
// DescKeys for backup operations.
const (
- DescKeyBackupBoxTitle = "backup.box-title"
- DescKeyBackupNoMarker = "backup.no-marker"
- DescKeyBackupRelayMessage = "backup.relay-message"
- DescKeyBackupRelayPrefix = "backup.relay-prefix"
- DescKeyBackupRunHint = "backup.run-hint"
- DescKeyBackupSMBNotMounted = "backup.smb-not-mounted"
+ // DescKeyBackupBoxTitle is the text key for backup box title messages.
+ DescKeyBackupBoxTitle = "backup.box-title"
+ // DescKeyBackupNoMarker is the text key for backup no marker messages.
+ DescKeyBackupNoMarker = "backup.no-marker"
+ // DescKeyBackupRelayMessage is the text key for backup relay messages.
+ DescKeyBackupRelayMessage = "backup.relay-message"
+ // DescKeyBackupRelayPrefix is the text key for backup relay prefix messages.
+ DescKeyBackupRelayPrefix = "backup.relay-prefix"
+ // DescKeyBackupRunHint is the text key for backup run hint messages.
+ DescKeyBackupRunHint = "backup.run-hint"
+ // DescKeyBackupSMBNotMounted is the text key for backup smb not mounted
+ // messages.
+ DescKeyBackupSMBNotMounted = "backup.smb-not-mounted"
+ // DescKeyBackupSMBUnavailable is the text key for backup smb unavailable
+ // messages.
DescKeyBackupSMBUnavailable = "backup.smb-unavailable"
- DescKeyBackupStale = "backup.stale"
+ // DescKeyBackupStale is the text key for backup stale messages.
+ DescKeyBackupStale = "backup.stale"
)
// DescKeys for backup result write output.
const (
- DescKeyWriteBackupResult = "write.backup-result"
+ // DescKeyWriteBackupResult is the text key for write backup result messages.
+ DescKeyWriteBackupResult = "write.backup-result"
+ // DescKeyWriteBackupSMBDest is the text key for write backup smb dest
+ // messages.
DescKeyWriteBackupSMBDest = "write.backup-smb-dest"
)
// DescKeys for snapshot write output.
const (
- DescKeyWriteSnapshotSaved = "write.snapshot-saved"
+ // DescKeyWriteSnapshotSaved is the text key for write snapshot saved messages.
+ DescKeyWriteSnapshotSaved = "write.snapshot-saved"
+ // DescKeyWriteSnapshotUpdated is the text key for write snapshot updated
+ // messages.
DescKeyWriteSnapshotUpdated = "write.snapshot-updated"
)
diff --git a/internal/config/embed/text/block.go b/internal/config/embed/text/block.go
index cbac3852f..7aadf5bd5 100644
--- a/internal/config/embed/text/block.go
+++ b/internal/config/embed/text/block.go
@@ -8,13 +8,25 @@ package text
// DescKeys for block formatting.
const (
+ // DescKeyBlockNonPathRelayMessage is the text key for block non path relay
+ // message messages.
DescKeyBlockNonPathRelayMessage = "block.non-path-relay-message"
- DescKeyBlockConstitutionSuffix = "block.constitution-suffix"
- DescKeyBlockMidSudo = "block.mid-sudo"
- DescKeyBlockMidGitPush = "block.mid-git-push"
- DescKeyBlockCpToBin = "block.cp-to-bin"
- DescKeyBlockInstallToLocalBin = "block.install-to-local-bin"
- DescKeyBlockDotSlash = "block.dot-slash"
- DescKeyBlockGoRun = "block.go-run"
- DescKeyBlockAbsolutePath = "block.absolute-path"
+ // DescKeyBlockConstitutionSuffix is the text key for block constitution
+ // suffix messages.
+ DescKeyBlockConstitutionSuffix = "block.constitution-suffix"
+ // DescKeyBlockMidSudo is the text key for block mid sudo messages.
+ DescKeyBlockMidSudo = "block.mid-sudo"
+ // DescKeyBlockMidGitPush is the text key for block mid git push messages.
+ DescKeyBlockMidGitPush = "block.mid-git-push"
+ // DescKeyBlockCpToBin is the text key for block cp to bin messages.
+ DescKeyBlockCpToBin = "block.cp-to-bin"
+ // DescKeyBlockInstallToLocalBin is the text key for block install to local
+ // bin messages.
+ DescKeyBlockInstallToLocalBin = "block.install-to-local-bin"
+ // DescKeyBlockDotSlash is the text key for block dot slash messages.
+ DescKeyBlockDotSlash = "block.dot-slash"
+ // DescKeyBlockGoRun is the text key for block go run messages.
+ DescKeyBlockGoRun = "block.go-run"
+ // DescKeyBlockAbsolutePath is the text key for block absolute path messages.
+ DescKeyBlockAbsolutePath = "block.absolute-path"
)
diff --git a/internal/config/embed/text/bootstrap.go b/internal/config/embed/text/bootstrap.go
index 088a594ab..0eaa4142f 100644
--- a/internal/config/embed/text/bootstrap.go
+++ b/internal/config/embed/text/bootstrap.go
@@ -8,20 +8,39 @@ package text
// DescKeys for bootstrap output.
const (
- DescKeyBootstrapNextSteps = "bootstrap.next-steps"
- DescKeyBootstrapNone = "bootstrap.none"
+ // DescKeyBootstrapNextSteps is the text key for bootstrap next steps messages.
+ DescKeyBootstrapNextSteps = "bootstrap.next-steps"
+ // DescKeyBootstrapNone is the text key for bootstrap none messages.
+ DescKeyBootstrapNone = "bootstrap.none"
+ // DescKeyBootstrapPluginWarning is the text key for bootstrap plugin warning
+ // messages.
DescKeyBootstrapPluginWarning = "bootstrap.plugin-warning"
- DescKeyBootstrapRules = "bootstrap.rules"
+ // DescKeyBootstrapRules is the text key for bootstrap rules messages.
+ DescKeyBootstrapRules = "bootstrap.rules"
)
// DescKeys for bootstrap write output.
const (
- DescKeyWriteBootstrapDir = "write.bootstrap-dir"
- DescKeyWriteBootstrapFiles = "write.bootstrap-files"
+ // DescKeyWriteBootstrapDir is the text key for write bootstrap dir messages.
+ DescKeyWriteBootstrapDir = "write.bootstrap-dir"
+ // DescKeyWriteBootstrapFiles is the text key for write bootstrap files
+ // messages.
+ DescKeyWriteBootstrapFiles = "write.bootstrap-files"
+ // DescKeyWriteBootstrapNextSteps is the text key for write bootstrap next
+ // steps messages.
DescKeyWriteBootstrapNextSteps = "write.bootstrap-next-steps"
- DescKeyWriteBootstrapNumbered = "write.bootstrap-numbered"
- DescKeyWriteBootstrapRules = "write.bootstrap-rules"
- DescKeyWriteBootstrapSep = "write.bootstrap-sep"
- DescKeyWriteBootstrapTitle = "write.bootstrap-title"
- DescKeyWriteBootstrapWarning = "write.bootstrap-warning"
+ // DescKeyWriteBootstrapNumbered is the text key for write bootstrap numbered
+ // messages.
+ DescKeyWriteBootstrapNumbered = "write.bootstrap-numbered"
+ // DescKeyWriteBootstrapRules is the text key for write bootstrap rules
+ // messages.
+ DescKeyWriteBootstrapRules = "write.bootstrap-rules"
+ // DescKeyWriteBootstrapSep is the text key for write bootstrap sep messages.
+ DescKeyWriteBootstrapSep = "write.bootstrap-sep"
+ // DescKeyWriteBootstrapTitle is the text key for write bootstrap title
+ // messages.
+ DescKeyWriteBootstrapTitle = "write.bootstrap-title"
+ // DescKeyWriteBootstrapWarning is the text key for write bootstrap warning
+ // messages.
+ DescKeyWriteBootstrapWarning = "write.bootstrap-warning"
)
diff --git a/internal/config/embed/text/change.go b/internal/config/embed/text/change.go
index 090566a0f..90573d288 100644
--- a/internal/config/embed/text/change.go
+++ b/internal/config/embed/text/change.go
@@ -8,24 +8,43 @@ package text
// DescKeys for change tracking labels.
const (
+ // DescKeyChangesFallbackLabel is the text key for changes fallback label
+ // messages.
DescKeyChangesFallbackLabel = "changes.fallback-label"
- DescKeyChangesSincePrefix = "changes.since-prefix"
+ // DescKeyChangesSincePrefix is the text key for changes since prefix messages.
+ DescKeyChangesSincePrefix = "changes.since-prefix"
)
// DescKeys for change tracking headings and output.
const (
- DescKeyChangesHeading = "changes.heading"
- DescKeyChangesRefPoint = "changes.ref-point"
- DescKeyChangesCtxHeading = "changes.ctx-heading"
- DescKeyChangesCtxLine = "changes.ctx-line"
- DescKeyChangesCodeHeading = "changes.code-heading"
- DescKeyChangesCodeCommits = "changes.code-commits"
- DescKeyChangesCodeLatest = "changes.code-latest"
- DescKeyChangesCodeDirs = "changes.code-dirs"
- DescKeyChangesCodeAuthors = "changes.code-authors"
- DescKeyChangesNone = "changes.none"
- DescKeyChangesHookCtxFiles = "changes.hook-ctx-files"
- DescKeyChangesHookCommits = "changes.hook-commits"
+ // DescKeyChangesHeading is the text key for changes heading messages.
+ DescKeyChangesHeading = "changes.heading"
+ // DescKeyChangesRefPoint is the text key for changes ref point messages.
+ DescKeyChangesRefPoint = "changes.ref-point"
+ // DescKeyChangesCtxHeading is the text key for changes ctx heading messages.
+ DescKeyChangesCtxHeading = "changes.ctx-heading"
+ // DescKeyChangesCtxLine is the text key for changes ctx line messages.
+ DescKeyChangesCtxLine = "changes.ctx-line"
+ // DescKeyChangesCodeHeading is the text key for changes code heading messages.
+ DescKeyChangesCodeHeading = "changes.code-heading"
+ // DescKeyChangesCodeCommits is the text key for changes code commits messages.
+ DescKeyChangesCodeCommits = "changes.code-commits"
+ // DescKeyChangesCodeLatest is the text key for changes code latest messages.
+ DescKeyChangesCodeLatest = "changes.code-latest"
+ // DescKeyChangesCodeDirs is the text key for changes code dirs messages.
+ DescKeyChangesCodeDirs = "changes.code-dirs"
+ // DescKeyChangesCodeAuthors is the text key for changes code authors messages.
+ DescKeyChangesCodeAuthors = "changes.code-authors"
+ // DescKeyChangesNone is the text key for changes none messages.
+ DescKeyChangesNone = "changes.none"
+ // DescKeyChangesHookCtxFiles is the text key for changes hook ctx files
+ // messages.
+ DescKeyChangesHookCtxFiles = "changes.hook-ctx-files"
+ // DescKeyChangesHookCommits is the text key for changes hook commits messages.
+ DescKeyChangesHookCommits = "changes.hook-commits"
+ // DescKeyChangesHookCommitsExtra is the text key for changes hook commits
+ // extra messages.
DescKeyChangesHookCommitsExtra = "changes.hook-commits-extra"
- DescKeyChangesHookPrefix = "changes.hook-prefix"
+ // DescKeyChangesHookPrefix is the text key for changes hook prefix messages.
+ DescKeyChangesHookPrefix = "changes.hook-prefix"
)
diff --git a/internal/config/embed/text/check_ceremony.go b/internal/config/embed/text/check_ceremony.go
index a4ab1008b..5f4074a9c 100644
--- a/internal/config/embed/text/check_ceremony.go
+++ b/internal/config/embed/text/check_ceremony.go
@@ -8,12 +8,26 @@ package text
// DescKeys for ceremony checks.
const (
- DescKeyCeremonyBoxBoth = "ceremony.box-both"
- DescKeyCeremonyBoxRemember = "ceremony.box-remember"
- DescKeyCeremonyBoxWrapup = "ceremony.box-wrapup"
- DescKeyCeremonyFallbackBoth = "ceremony.fallback-both"
+ // DescKeyCeremonyBoxBoth is the text key for ceremony box both messages.
+ DescKeyCeremonyBoxBoth = "ceremony.box-both"
+ // DescKeyCeremonyBoxRemember is the text key for ceremony box remember
+ // messages.
+ DescKeyCeremonyBoxRemember = "ceremony.box-remember"
+ // DescKeyCeremonyBoxWrapup is the text key for ceremony box wrapup messages.
+ DescKeyCeremonyBoxWrapup = "ceremony.box-wrapup"
+ // DescKeyCeremonyFallbackBoth is the text key for ceremony fallback both
+ // messages.
+ DescKeyCeremonyFallbackBoth = "ceremony.fallback-both"
+ // DescKeyCeremonyFallbackRemember is the text key for ceremony fallback
+ // remember messages.
DescKeyCeremonyFallbackRemember = "ceremony.fallback-remember"
- DescKeyCeremonyFallbackWrapup = "ceremony.fallback-wrapup"
- DescKeyCeremonyRelayMessage = "ceremony.relay-message"
- DescKeyCeremonyRelayPrefix = "ceremony.relay-prefix"
+ // DescKeyCeremonyFallbackWrapup is the text key for ceremony fallback wrapup
+ // messages.
+ DescKeyCeremonyFallbackWrapup = "ceremony.fallback-wrapup"
+ // DescKeyCeremonyRelayMessage is the text key for ceremony relay message
+ // messages.
+ DescKeyCeremonyRelayMessage = "ceremony.relay-message"
+ // DescKeyCeremonyRelayPrefix is the text key for ceremony relay prefix
+ // messages.
+ DescKeyCeremonyRelayPrefix = "ceremony.relay-prefix"
)
diff --git a/internal/config/embed/text/check_context.go b/internal/config/embed/text/check_context.go
index 1a7aa4e26..df6b7092b 100644
--- a/internal/config/embed/text/check_context.go
+++ b/internal/config/embed/text/check_context.go
@@ -8,28 +8,76 @@ package text
// DescKeys for context size checks.
const (
- DescKeyCheckContextSizeBillingBoxTitle = "check-context-size.billing-box-title"
- DescKeyCheckContextSizeBillingFallback = "check-context-size.billing-fallback"
- DescKeyCheckContextSizeBillingRelayFormat = "check-context-size.billing-relay-format"
- DescKeyCheckContextSizeBillingRelayPrefix = "check-context-size.billing-relay-prefix"
- DescKeyCheckContextSizeCheckpointBoxTitle = "check-context-size.checkpoint-box-title"
- DescKeyCheckContextSizeCheckpointFallback = "check-context-size.checkpoint-fallback"
+ // DescKeyCheckContextSizeBillingBoxTitle is the text key for check context
+ // size billing box title messages.
+ DescKeyCheckContextSizeBillingBoxTitle = "check-context-size.billing-box-title"
+ // DescKeyCheckContextSizeBillingFallback is the text key for check context
+ // size billing fallback messages.
+ DescKeyCheckContextSizeBillingFallback = "check-context-size.billing-fallback"
+ // DescKeyCheckContextSizeBillingRelayFormat is the text key for check context
+ // size billing relay format messages.
+ DescKeyCheckContextSizeBillingRelayFormat = "check-context-size.billing-relay-format"
+ // DescKeyCheckContextSizeBillingRelayPrefix is the text key for check context
+ // size billing relay prefix messages.
+ DescKeyCheckContextSizeBillingRelayPrefix = "check-context-size.billing-relay-prefix"
+ // DescKeyCheckContextSizeCheckpointBoxTitle is the text key for check context
+ // size checkpoint box title messages.
+ DescKeyCheckContextSizeCheckpointBoxTitle = "check-context-size.checkpoint-box-title"
+ // DescKeyCheckContextSizeCheckpointFallback is the text key for check context
+ // size checkpoint fallback messages.
+ DescKeyCheckContextSizeCheckpointFallback = "check-context-size.checkpoint-fallback"
+ // DescKeyCheckContextSizeCheckpointRelayFormat is the text key for check
+ // context size checkpoint relay format messages.
DescKeyCheckContextSizeCheckpointRelayFormat = "check-context-size.checkpoint-relay-format"
- DescKeyCheckContextSizeOversizeFallback = "check-context-size.oversize-fallback"
- DescKeyCheckContextSizeRelayPrefix = "check-context-size.relay-prefix"
- DescKeyCheckContextSizeRunningLowSuffix = "check-context-size.running-low-suffix"
- DescKeyCheckContextSizeSilentLogFormat = "check-context-size.silent-log-format"
+ // DescKeyCheckContextSizeOversizeFallback is the text key for check context
+ // size oversize fallback messages.
+ DescKeyCheckContextSizeOversizeFallback = "check-context-size.oversize-fallback"
+ // DescKeyCheckContextSizeRelayPrefix is the text key for check context size
+ // relay prefix messages.
+ DescKeyCheckContextSizeRelayPrefix = "check-context-size.relay-prefix"
+ // DescKeyCheckContextSizeRunningLowSuffix is the text key for check context
+ // size running low suffix messages.
+ DescKeyCheckContextSizeRunningLowSuffix = "check-context-size.running-low-suffix"
+ // DescKeyCheckContextSizeSilentLogFormat is the text key for check context
+ // size silent log format messages.
+ DescKeyCheckContextSizeSilentLogFormat = "check-context-size.silent-log-format"
+ // DescKeyCheckContextSizeSilencedCheckpointLog is the text key for check
+ // context size silenced checkpoint log messages.
DescKeyCheckContextSizeSilencedCheckpointLog = "check-context-size.silenced-checkpoint-log"
- DescKeyCheckContextSizeCheckpointLogFormat = "check-context-size.checkpoint-log-format"
- DescKeyCheckContextSizeSuppressedLogFormat = "check-context-size.suppressed-log-format"
- DescKeyCheckContextSizeSilencedWindowLog = "check-context-size.silenced-window-log"
- DescKeyCheckContextSizeWindowLogFormat = "check-context-size.window-log-format"
- DescKeyCheckContextSizeSilencedBillingLog = "check-context-size.silenced-billing-log"
- DescKeyCheckContextSizeBillingLogFormat = "check-context-size.billing-log-format"
- DescKeyCheckContextSizeTokenLow = "check-context-size.token-low"
- DescKeyCheckContextSizeTokenNormal = "check-context-size.token-normal"
- DescKeyCheckContextSizeTokenUsage = "check-context-size.token-usage"
- DescKeyCheckContextSizeWindowBoxTitle = "check-context-size.window-box-title"
- DescKeyCheckContextSizeWindowFallback = "check-context-size.window-fallback"
- DescKeyCheckContextSizeWindowRelayFormat = "check-context-size.window-relay-format"
+ // DescKeyCheckContextSizeCheckpointLogFormat is the text key for check
+ // context size checkpoint log format messages.
+ DescKeyCheckContextSizeCheckpointLogFormat = "check-context-size.checkpoint-log-format"
+ // DescKeyCheckContextSizeSuppressedLogFormat is the text key for check
+ // context size suppressed log format messages.
+ DescKeyCheckContextSizeSuppressedLogFormat = "check-context-size.suppressed-log-format"
+ // DescKeyCheckContextSizeSilencedWindowLog is the text key for check context
+ // size silenced window log messages.
+ DescKeyCheckContextSizeSilencedWindowLog = "check-context-size.silenced-window-log"
+ // DescKeyCheckContextSizeWindowLogFormat is the text key for check context
+ // size window log format messages.
+ DescKeyCheckContextSizeWindowLogFormat = "check-context-size.window-log-format"
+ // DescKeyCheckContextSizeSilencedBillingLog is the text key for check context
+ // size silenced billing log messages.
+ DescKeyCheckContextSizeSilencedBillingLog = "check-context-size.silenced-billing-log"
+ // DescKeyCheckContextSizeBillingLogFormat is the text key for check context
+ // size billing log format messages.
+ DescKeyCheckContextSizeBillingLogFormat = "check-context-size.billing-log-format"
+ // DescKeyCheckContextSizeTokenLow is the text key for check context size
+ // token low messages.
+ DescKeyCheckContextSizeTokenLow = "check-context-size.token-low"
+ // DescKeyCheckContextSizeTokenNormal is the text key for check context size
+ // token normal messages.
+ DescKeyCheckContextSizeTokenNormal = "check-context-size.token-normal"
+ // DescKeyCheckContextSizeTokenUsage is the text key for check context size
+ // token usage messages.
+ DescKeyCheckContextSizeTokenUsage = "check-context-size.token-usage"
+ // DescKeyCheckContextSizeWindowBoxTitle is the text key for check context
+ // size window box title messages.
+ DescKeyCheckContextSizeWindowBoxTitle = "check-context-size.window-box-title"
+ // DescKeyCheckContextSizeWindowFallback is the text key for check context
+ // size window fallback messages.
+ DescKeyCheckContextSizeWindowFallback = "check-context-size.window-fallback"
+ // DescKeyCheckContextSizeWindowRelayFormat is the text key for check context
+ // size window relay format messages.
+ DescKeyCheckContextSizeWindowRelayFormat = "check-context-size.window-relay-format"
)
diff --git a/internal/config/embed/text/check_journal.go b/internal/config/embed/text/check_journal.go
index 4f56c2832..a8c9e5a90 100644
--- a/internal/config/embed/text/check_journal.go
+++ b/internal/config/embed/text/check_journal.go
@@ -8,10 +8,22 @@ package text
// DescKeys for journal checks.
const (
- DescKeyCheckJournalBoxTitle = "check-journal.box-title"
- DescKeyCheckJournalFallbackBoth = "check-journal.fallback-both"
+ // DescKeyCheckJournalBoxTitle is the text key for check journal box title
+ // messages.
+ DescKeyCheckJournalBoxTitle = "check-journal.box-title"
+ // DescKeyCheckJournalFallbackBoth is the text key for check journal fallback
+ // both messages.
+ DescKeyCheckJournalFallbackBoth = "check-journal.fallback-both"
+ // DescKeyCheckJournalFallbackUnenriched is the text key for check journal
+ // fallback unenriched messages.
DescKeyCheckJournalFallbackUnenriched = "check-journal.fallback-unenriched"
+ // DescKeyCheckJournalFallbackUnimported is the text key for check journal
+ // fallback unimported messages.
DescKeyCheckJournalFallbackUnimported = "check-journal.fallback-unimported"
- DescKeyCheckJournalRelayFormat = "check-journal.relay-format"
- DescKeyCheckJournalRelayPrefix = "check-journal.relay-prefix"
+ // DescKeyCheckJournalRelayFormat is the text key for check journal relay
+ // format messages.
+ DescKeyCheckJournalRelayFormat = "check-journal.relay-format"
+ // DescKeyCheckJournalRelayPrefix is the text key for check journal relay
+ // prefix messages.
+ DescKeyCheckJournalRelayPrefix = "check-journal.relay-prefix"
)
diff --git a/internal/config/embed/text/check_knowledge.go b/internal/config/embed/text/check_knowledge.go
index 44c20adb0..fa20aff2c 100644
--- a/internal/config/embed/text/check_knowledge.go
+++ b/internal/config/embed/text/check_knowledge.go
@@ -8,12 +8,26 @@ package text
// DescKeys for knowledge checks.
const (
- DescKeyCheckKnowledgeBoxTitle = "check-knowledge.box-title"
- DescKeyCheckKnowledgeFallback = "check-knowledge.fallback"
+ // DescKeyCheckKnowledgeBoxTitle is the text key for check knowledge box title
+ // messages.
+ DescKeyCheckKnowledgeBoxTitle = "check-knowledge.box-title"
+ // DescKeyCheckKnowledgeFallback is the text key for check knowledge fallback
+ // messages.
+ DescKeyCheckKnowledgeFallback = "check-knowledge.fallback"
+ // DescKeyCheckKnowledgeFindingFormat is the text key for check knowledge
+ // finding format messages.
DescKeyCheckKnowledgeFindingFormat = "check-knowledge.finding-format"
- DescKeyCheckKnowledgeRelayMessage = "check-knowledge.relay-message"
- DescKeyCheckKnowledgeRelayPrefix = "check-knowledge.relay-prefix"
+ // DescKeyCheckKnowledgeRelayMessage is the text key for check knowledge relay
+ // message messages.
+ DescKeyCheckKnowledgeRelayMessage = "check-knowledge.relay-message"
+ // DescKeyCheckKnowledgeRelayPrefix is the text key for check knowledge relay
+ // prefix messages.
+ DescKeyCheckKnowledgeRelayPrefix = "check-knowledge.relay-prefix"
+ // DescKeyWriteKnowledgeUnitEntries is the text key for write knowledge unit
+ // entries messages.
DescKeyWriteKnowledgeUnitEntries = "write.knowledge-unit-entries"
- DescKeyWriteKnowledgeUnitLines = "write.knowledge-unit-lines"
+ // DescKeyWriteKnowledgeUnitLines is the text key for write knowledge unit
+ // lines messages.
+ DescKeyWriteKnowledgeUnitLines = "write.knowledge-unit-lines"
)
diff --git a/internal/config/embed/text/check_map.go b/internal/config/embed/text/check_map.go
index 3f702e5ae..871589cf2 100644
--- a/internal/config/embed/text/check_map.go
+++ b/internal/config/embed/text/check_map.go
@@ -8,8 +8,16 @@ package text
// DescKeys for map staleness checks.
const (
- DescKeyCheckMapStalenessBoxTitle = "check-map-staleness.box-title"
- DescKeyCheckMapStalenessFallback = "check-map-staleness.fallback"
+ // DescKeyCheckMapStalenessBoxTitle is the text key for check map staleness
+ // box title messages.
+ DescKeyCheckMapStalenessBoxTitle = "check-map-staleness.box-title"
+ // DescKeyCheckMapStalenessFallback is the text key for check map staleness
+ // fallback messages.
+ DescKeyCheckMapStalenessFallback = "check-map-staleness.fallback"
+ // DescKeyCheckMapStalenessRelayMessage is the text key for check map
+ // staleness relay message messages.
DescKeyCheckMapStalenessRelayMessage = "check-map-staleness.relay-message"
- DescKeyCheckMapStalenessRelayPrefix = "check-map-staleness.relay-prefix"
+ // DescKeyCheckMapStalenessRelayPrefix is the text key for check map staleness
+ // relay prefix messages.
+ DescKeyCheckMapStalenessRelayPrefix = "check-map-staleness.relay-prefix"
)
diff --git a/internal/config/embed/text/check_memory.go b/internal/config/embed/text/check_memory.go
index 9b67b98e6..890be9231 100644
--- a/internal/config/embed/text/check_memory.go
+++ b/internal/config/embed/text/check_memory.go
@@ -8,8 +8,16 @@ package text
// DescKeys for memory drift checks.
const (
- DescKeyCheckMemoryDriftBoxTitle = "check-memory-drift.box-title"
- DescKeyCheckMemoryDriftContent = "check-memory-drift.content"
+ // DescKeyCheckMemoryDriftBoxTitle is the text key for check memory drift box
+ // title messages.
+ DescKeyCheckMemoryDriftBoxTitle = "check-memory-drift.box-title"
+ // DescKeyCheckMemoryDriftContent is the text key for check memory drift
+ // content messages.
+ DescKeyCheckMemoryDriftContent = "check-memory-drift.content"
+ // DescKeyCheckMemoryDriftRelayMessage is the text key for check memory drift
+ // relay message messages.
DescKeyCheckMemoryDriftRelayMessage = "check-memory-drift.relay-message"
- DescKeyCheckMemoryDriftRelayPrefix = "check-memory-drift.relay-prefix"
+ // DescKeyCheckMemoryDriftRelayPrefix is the text key for check memory drift
+ // relay prefix messages.
+ DescKeyCheckMemoryDriftRelayPrefix = "check-memory-drift.relay-prefix"
)
diff --git a/internal/config/embed/text/check_persistence.go b/internal/config/embed/text/check_persistence.go
index d37644b51..50727fe76 100644
--- a/internal/config/embed/text/check_persistence.go
+++ b/internal/config/embed/text/check_persistence.go
@@ -8,17 +8,43 @@ package text
// DescKeys for persistence checks.
const (
- DescKeyCheckPersistenceBoxTitle = "check-persistence.box-title"
- DescKeyCheckPersistenceBoxTitleFormat = "check-persistence.box-title-format"
- DescKeyCheckPersistenceCheckpointFormat = "check-persistence.checkpoint-format"
- DescKeyCheckPersistenceFallback = "check-persistence.fallback"
- DescKeyCheckPersistenceInitLogFormat = "check-persistence.init-log-format"
- DescKeyCheckPersistenceModifiedLogFormat = "check-persistence.modified-log-format"
- DescKeyCheckPersistenceRelayFormat = "check-persistence.relay-format"
- DescKeyCheckPersistenceRelayPrefix = "check-persistence.relay-prefix"
- DescKeyCheckPersistenceNudgeLogFormat = "check-persistence.nudge-log-format"
- DescKeyCheckPersistenceSilencedLogFormat = "check-persistence.silenced-log-format"
- DescKeyCheckPersistenceSilentLogFormat = "check-persistence.silent-log-format"
- DescKeyCheckPersistenceStateFormat = "check-persistence.state-format"
+ // DescKeyCheckPersistenceBoxTitle is the text key for check persistence box
+ // title messages.
+ DescKeyCheckPersistenceBoxTitle = "check-persistence.box-title"
+ // DescKeyCheckPersistenceBoxTitleFormat is the text key for check persistence
+ // box title format messages.
+ DescKeyCheckPersistenceBoxTitleFormat = "check-persistence.box-title-format"
+ // DescKeyCheckPersistenceCheckpointFormat is the text key for check
+ // persistence checkpoint format messages.
+ DescKeyCheckPersistenceCheckpointFormat = "check-persistence.checkpoint-format"
+ // DescKeyCheckPersistenceFallback is the text key for check persistence
+ // fallback messages.
+ DescKeyCheckPersistenceFallback = "check-persistence.fallback"
+ // DescKeyCheckPersistenceInitLogFormat is the text key for check persistence
+ // init log format messages.
+ DescKeyCheckPersistenceInitLogFormat = "check-persistence.init-log-format"
+ // DescKeyCheckPersistenceModifiedLogFormat is the text key for check
+ // persistence modified log format messages.
+ DescKeyCheckPersistenceModifiedLogFormat = "check-persistence.modified-log-format"
+ // DescKeyCheckPersistenceRelayFormat is the text key for check persistence
+ // relay format messages.
+ DescKeyCheckPersistenceRelayFormat = "check-persistence.relay-format"
+ // DescKeyCheckPersistenceRelayPrefix is the text key for check persistence
+ // relay prefix messages.
+ DescKeyCheckPersistenceRelayPrefix = "check-persistence.relay-prefix"
+ // DescKeyCheckPersistenceNudgeLogFormat is the text key for check persistence
+ // nudge log format messages.
+ DescKeyCheckPersistenceNudgeLogFormat = "check-persistence.nudge-log-format"
+ // DescKeyCheckPersistenceSilencedLogFormat is the text key for check
+ // persistence silenced log format messages.
+ DescKeyCheckPersistenceSilencedLogFormat = "check-persistence.silenced-log-format"
+ // DescKeyCheckPersistenceSilentLogFormat is the text key for check
+ // persistence silent log format messages.
+ DescKeyCheckPersistenceSilentLogFormat = "check-persistence.silent-log-format"
+ // DescKeyCheckPersistenceStateFormat is the text key for check persistence
+ // state format messages.
+ DescKeyCheckPersistenceStateFormat = "check-persistence.state-format"
+ // DescKeyCheckPersistenceSuppressedLogFormat is the text key for check
+ // persistence suppressed log format messages.
DescKeyCheckPersistenceSuppressedLogFormat = "check-persistence.suppressed-log-format"
)
diff --git a/internal/config/embed/text/check_reminder.go b/internal/config/embed/text/check_reminder.go
index 3bbb957ac..34e8cea97 100644
--- a/internal/config/embed/text/check_reminder.go
+++ b/internal/config/embed/text/check_reminder.go
@@ -8,10 +8,22 @@ package text
// DescKeys for reminder checks.
const (
- DescKeyCheckRemindersBoxTitle = "check-reminders.box-title"
- DescKeyCheckRemindersDismissHint = "check-reminders.dismiss-hint"
+ // DescKeyCheckRemindersBoxTitle is the text key for check reminders box title
+ // messages.
+ DescKeyCheckRemindersBoxTitle = "check-reminders.box-title"
+ // DescKeyCheckRemindersDismissHint is the text key for check reminders
+ // dismiss hint messages.
+ DescKeyCheckRemindersDismissHint = "check-reminders.dismiss-hint"
+ // DescKeyCheckRemindersDismissAllHint is the text key for check reminders
+ // dismiss all hint messages.
DescKeyCheckRemindersDismissAllHint = "check-reminders.dismiss-all-hint"
- DescKeyCheckRemindersItemFormat = "check-reminders.item-format"
- DescKeyCheckRemindersNudgeFormat = "check-reminders.nudge-format"
- DescKeyCheckRemindersRelayPrefix = "check-reminders.relay-prefix"
+ // DescKeyCheckRemindersItemFormat is the text key for check reminders item
+ // format messages.
+ DescKeyCheckRemindersItemFormat = "check-reminders.item-format"
+ // DescKeyCheckRemindersNudgeFormat is the text key for check reminders nudge
+ // format messages.
+ DescKeyCheckRemindersNudgeFormat = "check-reminders.nudge-format"
+ // DescKeyCheckRemindersRelayPrefix is the text key for check reminders relay
+ // prefix messages.
+ DescKeyCheckRemindersRelayPrefix = "check-reminders.relay-prefix"
)
diff --git a/internal/config/embed/text/check_resource.go b/internal/config/embed/text/check_resource.go
index eabb52e7d..b51a7279f 100644
--- a/internal/config/embed/text/check_resource.go
+++ b/internal/config/embed/text/check_resource.go
@@ -8,10 +8,22 @@ package text
// DescKeys for resource checks.
const (
- DescKeyCheckResourcesBoxTitle = "check-resources.box-title"
- DescKeyCheckResourcesFallbackLow = "check-resources.fallback-low"
+ // DescKeyCheckResourcesBoxTitle is the text key for check resources box title
+ // messages.
+ DescKeyCheckResourcesBoxTitle = "check-resources.box-title"
+ // DescKeyCheckResourcesFallbackLow is the text key for check resources
+ // fallback low messages.
+ DescKeyCheckResourcesFallbackLow = "check-resources.fallback-low"
+ // DescKeyCheckResourcesFallbackPersist is the text key for check resources
+ // fallback persist messages.
DescKeyCheckResourcesFallbackPersist = "check-resources.fallback-persist"
- DescKeyCheckResourcesFallbackEnd = "check-resources.fallback-end"
- DescKeyCheckResourcesRelayMessage = "check-resources.relay-message"
- DescKeyCheckResourcesRelayPrefix = "check-resources.relay-prefix"
+ // DescKeyCheckResourcesFallbackEnd is the text key for check resources
+ // fallback end messages.
+ DescKeyCheckResourcesFallbackEnd = "check-resources.fallback-end"
+ // DescKeyCheckResourcesRelayMessage is the text key for check resources relay
+ // message messages.
+ DescKeyCheckResourcesRelayMessage = "check-resources.relay-message"
+ // DescKeyCheckResourcesRelayPrefix is the text key for check resources relay
+ // prefix messages.
+ DescKeyCheckResourcesRelayPrefix = "check-resources.relay-prefix"
)
diff --git a/internal/config/embed/text/check_skill_discovery.go b/internal/config/embed/text/check_skill_discovery.go
index 43588b417..c3961c81e 100644
--- a/internal/config/embed/text/check_skill_discovery.go
+++ b/internal/config/embed/text/check_skill_discovery.go
@@ -8,7 +8,13 @@ package text
// DescKeys for skill discovery checks.
const (
+ // DescKeySkillDiscoveryBoxTitle is the text key for skill discovery box title
+ // messages.
DescKeySkillDiscoveryBoxTitle = "skill-discovery.box-title"
- DescKeySkillDiscoveryContent = "skill-discovery.content"
- DescKeySkillDiscoveryPrefix = "skill-discovery.relay-prefix"
+ // DescKeySkillDiscoveryContent is the text key for skill discovery content
+ // messages.
+ DescKeySkillDiscoveryContent = "skill-discovery.content"
+ // DescKeySkillDiscoveryPrefix is the text key for skill discovery prefix
+ // messages.
+ DescKeySkillDiscoveryPrefix = "skill-discovery.relay-prefix"
)
diff --git a/internal/config/embed/text/check_version.go b/internal/config/embed/text/check_version.go
index 5f103d442..2a82bcec1 100644
--- a/internal/config/embed/text/check_version.go
+++ b/internal/config/embed/text/check_version.go
@@ -8,13 +8,31 @@ package text
// DescKeys for version checks.
const (
- DescKeyCheckVersionBoxTitle = "check-version.box-title"
- DescKeyCheckVersionFallback = "check-version.fallback"
- DescKeyCheckVersionKeyBoxTitle = "check-version.key-box-title"
- DescKeyCheckVersionKeyFallback = "check-version.key-fallback"
- DescKeyCheckVersionKeyRelayFormat = "check-version.key-relay-format"
- DescKeyCheckVersionKeyRelayPrefix = "check-version.key-relay-prefix"
+ // DescKeyCheckVersionBoxTitle is the text key for check version box title
+ // messages.
+ DescKeyCheckVersionBoxTitle = "check-version.box-title"
+ // DescKeyCheckVersionFallback is the text key for check version fallback
+ // messages.
+ DescKeyCheckVersionFallback = "check-version.fallback"
+ // DescKeyCheckVersionKeyBoxTitle is the text key for check version key box
+ // title messages.
+ DescKeyCheckVersionKeyBoxTitle = "check-version.key-box-title"
+ // DescKeyCheckVersionKeyFallback is the text key for check version key
+ // fallback messages.
+ DescKeyCheckVersionKeyFallback = "check-version.key-fallback"
+ // DescKeyCheckVersionKeyRelayFormat is the text key for check version key
+ // relay format messages.
+ DescKeyCheckVersionKeyRelayFormat = "check-version.key-relay-format"
+ // DescKeyCheckVersionKeyRelayPrefix is the text key for check version key
+ // relay prefix messages.
+ DescKeyCheckVersionKeyRelayPrefix = "check-version.key-relay-prefix"
+ // DescKeyCheckVersionMismatchRelayFormat is the text key for check version
+ // mismatch relay format messages.
DescKeyCheckVersionMismatchRelayFormat = "check-version.mismatch-relay-format"
- DescKeyCheckVersionPluginReadError = "check-version.plugin-read-error"
- DescKeyCheckVersionRelayPrefix = "check-version.relay-prefix"
+ // DescKeyCheckVersionPluginReadError is the text key for check version plugin
+ // read error messages.
+ DescKeyCheckVersionPluginReadError = "check-version.plugin-read-error"
+ // DescKeyCheckVersionRelayPrefix is the text key for check version relay
+ // prefix messages.
+ DescKeyCheckVersionRelayPrefix = "check-version.relay-prefix"
)
diff --git a/internal/config/embed/text/colummn.go b/internal/config/embed/text/colummn.go
index 01ade2c09..99748b3a8 100644
--- a/internal/config/embed/text/colummn.go
+++ b/internal/config/embed/text/colummn.go
@@ -8,6 +8,8 @@ package text
// DescKeys for column formatting.
const (
+ // DescKeyColumnDecision is the text key for column decision messages.
DescKeyColumnDecision = "column.decision"
+ // DescKeyColumnLearning is the text key for column learning messages.
DescKeyColumnLearning = "column.learning"
)
diff --git a/internal/config/embed/text/compact.go b/internal/config/embed/text/compact.go
index 8b38ce54c..367ba3384 100644
--- a/internal/config/embed/text/compact.go
+++ b/internal/config/embed/text/compact.go
@@ -8,10 +8,17 @@ package text
// DescKeys for compact output.
const (
- DescKeyCompactHeading = "compact.heading"
- DescKeyCompactSeparator = "compact.separator"
- DescKeyCompactTaskError = "compact.task-error"
+ // DescKeyCompactHeading is the text key for compact heading messages.
+ DescKeyCompactHeading = "compact.heading"
+ // DescKeyCompactSeparator is the text key for compact separator messages.
+ DescKeyCompactSeparator = "compact.separator"
+ // DescKeyCompactTaskError is the text key for compact task error messages.
+ DescKeyCompactTaskError = "compact.task-error"
+ // DescKeyCompactSectionsRemoved is the text key for compact sections removed
+ // messages.
DescKeyCompactSectionsRemoved = "compact.sections-removed"
- DescKeyCompactClean = "compact.clean"
- DescKeyCompactSummary = "compact.summary"
+ // DescKeyCompactClean is the text key for compact clean messages.
+ DescKeyCompactClean = "compact.clean"
+ // DescKeyCompactSummary is the text key for compact summary messages.
+ DescKeyCompactSummary = "compact.summary"
)
diff --git a/internal/config/embed/text/config.go b/internal/config/embed/text/config.go
index 128db924d..93081828b 100644
--- a/internal/config/embed/text/config.go
+++ b/internal/config/embed/text/config.go
@@ -8,14 +8,23 @@ package text
// DescKeys for configuration display write output.
const (
+ // DescKeyWriteConfigProfileBase is the text key for write config profile base
+ // messages.
DescKeyWriteConfigProfileBase = "write.config-profile-base"
- DescKeyWriteConfigProfileDev = "write.config-profile-dev"
+ // DescKeyWriteConfigProfileDev is the text key for write config profile dev
+ // messages.
+ DescKeyWriteConfigProfileDev = "write.config-profile-dev"
+ // DescKeyWriteConfigProfileNone is the text key for write config profile none
+ // messages.
DescKeyWriteConfigProfileNone = "write.config-profile-none"
)
// DescKeys for configuration display.
const (
+ // DescKeyConfigAlreadyOn is the text key for config already on messages.
DescKeyConfigAlreadyOn = "config.already-on"
- DescKeyConfigCreated = "config.created"
- DescKeyConfigSwitched = "config.switched"
+ // DescKeyConfigCreated is the text key for config created messages.
+ DescKeyConfigCreated = "config.created"
+ // DescKeyConfigSwitched is the text key for config switched messages.
+ DescKeyConfigSwitched = "config.switched"
)
diff --git a/internal/config/embed/text/context.go b/internal/config/embed/text/context.go
index 3071ae539..7c5720633 100644
--- a/internal/config/embed/text/context.go
+++ b/internal/config/embed/text/context.go
@@ -8,18 +8,42 @@ package text
// DescKeys for context rendering.
const (
- DescKeyContextLoadGateFileHeader = "context-load-gate.file-header"
- DescKeyContextLoadGateFooter = "context-load-gate.footer"
- DescKeyContextLoadGateHeader = "context-load-gate.header"
- DescKeyContextLoadGateOversizeAction = "context-load-gate.oversize-action"
+ // DescKeyContextLoadGateFileHeader is the text key for context load gate file
+ // header messages.
+ DescKeyContextLoadGateFileHeader = "context-load-gate.file-header"
+ // DescKeyContextLoadGateFooter is the text key for context load gate footer
+ // messages.
+ DescKeyContextLoadGateFooter = "context-load-gate.footer"
+ // DescKeyContextLoadGateHeader is the text key for context load gate header
+ // messages.
+ DescKeyContextLoadGateHeader = "context-load-gate.header"
+ // DescKeyContextLoadGateOversizeAction is the text key for context load gate
+ // oversize action messages.
+ DescKeyContextLoadGateOversizeAction = "context-load-gate.oversize-action"
+ // DescKeyContextLoadGateOversizeBreakdown is the text key for context load
+ // gate oversize breakdown messages.
DescKeyContextLoadGateOversizeBreakdown = "context-load-gate.oversize-breakdown"
+ // DescKeyContextLoadGateOversizeFileEntry is the text key for context load
+ // gate oversize file entry messages.
DescKeyContextLoadGateOversizeFileEntry = "context-load-gate.oversize-file-entry"
- DescKeyContextLoadGateOversizeHeader = "context-load-gate.oversize-header"
- DescKeyContextLoadGateOversizeInjected = "context-load-gate.oversize-injected"
+ // DescKeyContextLoadGateOversizeHeader is the text key for context load gate
+ // oversize header messages.
+ DescKeyContextLoadGateOversizeHeader = "context-load-gate.oversize-header"
+ // DescKeyContextLoadGateOversizeInjected is the text key for context load
+ // gate oversize injected messages.
+ DescKeyContextLoadGateOversizeInjected = "context-load-gate.oversize-injected"
+ // DescKeyContextLoadGateOversizeTimestamp is the text key for context load
+ // gate oversize timestamp messages.
DescKeyContextLoadGateOversizeTimestamp = "context-load-gate.oversize-timestamp"
- DescKeyContextLoadGateWebhook = "context-load-gate.webhook"
+ // DescKeyContextLoadGateWebhook is the text key for context load gate webhook
+ // messages.
+ DescKeyContextLoadGateWebhook = "context-load-gate.webhook"
// Context directory display labels.
- DescKeyWriteContextDirLabel = "write.context-dir-label"
+ // DescKeyWriteContextDirLabel is the text key for write context dir label
+ // messages.
+ DescKeyWriteContextDirLabel = "write.context-dir-label"
+ // DescKeyWriteContextDirBracket is the text key for write context dir bracket
+ // messages.
DescKeyWriteContextDirBracket = "write.context-dir-bracket"
)
diff --git a/internal/config/embed/text/dep.go b/internal/config/embed/text/dep.go
index 8b0e133c2..ad1533bca 100644
--- a/internal/config/embed/text/dep.go
+++ b/internal/config/embed/text/dep.go
@@ -8,8 +8,14 @@ package text
// DescKeys for dependency tracking write output.
const (
+ // DescKeyWriteDepsLookingFor is the text key for write deps looking for
+ // messages.
DescKeyWriteDepsLookingFor = "write.deps-looking-for"
- DescKeyWriteDepsNoDeps = "write.deps-no-deps"
- DescKeyWriteDepsNoProject = "write.deps-no-project"
- DescKeyWriteDepsUseType = "write.deps-use-type"
+ // DescKeyWriteDepsNoDeps is the text key for write deps no deps messages.
+ DescKeyWriteDepsNoDeps = "write.deps-no-deps"
+ // DescKeyWriteDepsNoProject is the text key for write deps no project
+ // messages.
+ DescKeyWriteDepsNoProject = "write.deps-no-project"
+ // DescKeyWriteDepsUseType is the text key for write deps use type messages.
+ DescKeyWriteDepsUseType = "write.deps-use-type"
)
diff --git a/internal/config/embed/text/doctor.go b/internal/config/embed/text/doctor.go
index 6fcc85feb..bfb3a6b4a 100644
--- a/internal/config/embed/text/doctor.go
+++ b/internal/config/embed/text/doctor.go
@@ -8,45 +8,122 @@ package text
// DescKeys for doctor diagnostics.
const (
- DescKeyDoctorContextFileFormat = "doctor.context-file.format"
- DescKeyDoctorContextInitializedError = "doctor.context-initialized.error"
- DescKeyDoctorContextInitializedOk = "doctor.context-initialized.ok"
- DescKeyDoctorContextSizeFormat = "doctor.context-size.format"
- DescKeyDoctorContextSizeWarningSuffix = "doctor.context-size.warning-suffix"
- DescKeyDoctorCtxrcValidationError = "doctor.ctxrc-validation.error"
- DescKeyDoctorCtxrcValidationOk = "doctor.ctxrc-validation.ok"
- DescKeyDoctorCtxrcValidationOkNoFile = "doctor.ctxrc-validation.ok-no-file"
- DescKeyDoctorCtxrcValidationWarning = "doctor.ctxrc-validation.warning"
- DescKeyDoctorDriftDetected = "doctor.drift.detected"
- DescKeyDoctorDriftOk = "doctor.drift.ok"
- DescKeyDoctorDriftViolations = "doctor.drift.violations"
- DescKeyDoctorDriftWarningLoad = "doctor.drift.warning-load"
- DescKeyDoctorDriftWarnings = "doctor.drift.warnings"
- DescKeyDoctorEventLoggingInfo = "doctor.event-logging.info"
- DescKeyDoctorEventLoggingOk = "doctor.event-logging.ok"
- DescKeyDoctorOutputHeader = "doctor.output.header"
- DescKeyDoctorOutputResultLine = "doctor.output.result-line"
- DescKeyDoctorOutputSeparator = "doctor.output.separator"
- DescKeyDoctorOutputSummary = "doctor.output.summary"
- DescKeyDoctorCompanionConfigOk = "doctor.companion-config.ok"
- DescKeyDoctorCompanionConfigInfo = "doctor.companion-config.info"
- DescKeyDoctorPluginEnabledGlobalOk = "doctor.plugin-enabled-global.ok"
- DescKeyDoctorPluginEnabledLocalOk = "doctor.plugin-enabled-local.ok"
- DescKeyDoctorPluginEnabledWarning = "doctor.plugin-enabled.warning"
- DescKeyDoctorPluginInstalledInfo = "doctor.plugin-installed.info"
- DescKeyDoctorPluginInstalledOk = "doctor.plugin-installed.ok"
- DescKeyDoctorRecentEventsInfo = "doctor.recent-events.info"
- DescKeyDoctorRecentEventsOk = "doctor.recent-events.ok"
- DescKeyDoctorRemindersInfo = "doctor.reminders.info"
- DescKeyDoctorRemindersOk = "doctor.reminders.ok"
- DescKeyDoctorRequiredFilesError = "doctor.required-files.error"
- DescKeyDoctorRequiredFilesOk = "doctor.required-files.ok"
- DescKeyDoctorResourceDiskFormat = "doctor.resource-disk.format"
- DescKeyDoctorResourceLoadFormat = "doctor.resource-load.format"
- DescKeyDoctorResourceMemoryFormat = "doctor.resource-memory.format"
- DescKeyDoctorResourceSwapFormat = "doctor.resource-swap.format"
- DescKeyDoctorTaskCompletionFormat = "doctor.task-completion.format"
+ // DescKeyDoctorContextFileFormat is the text key for doctor context file
+ // format messages.
+ DescKeyDoctorContextFileFormat = "doctor.context-file.format"
+ // DescKeyDoctorContextInitializedError is the text key for doctor context
+ // initialized error messages.
+ DescKeyDoctorContextInitializedError = "doctor.context-initialized.error"
+ // DescKeyDoctorContextInitializedOk is the text key for doctor context
+ // initialized ok messages.
+ DescKeyDoctorContextInitializedOk = "doctor.context-initialized.ok"
+ // DescKeyDoctorContextSizeFormat is the text key for doctor context size
+ // format messages.
+ DescKeyDoctorContextSizeFormat = "doctor.context-size.format"
+ // DescKeyDoctorContextSizeWarningSuffix is the text key for doctor context
+ // size warning suffix messages.
+ DescKeyDoctorContextSizeWarningSuffix = "doctor.context-size.warning-suffix"
+ // DescKeyDoctorCtxrcValidationError is the text key for doctor ctxrc
+ // validation error messages.
+ DescKeyDoctorCtxrcValidationError = "doctor.ctxrc-validation.error"
+ // DescKeyDoctorCtxrcValidationOk is the text key for doctor ctxrc validation
+ // ok messages.
+ DescKeyDoctorCtxrcValidationOk = "doctor.ctxrc-validation.ok"
+ // DescKeyDoctorCtxrcValidationOkNoFile is the text key for doctor ctxrc
+ // validation ok no file messages.
+ DescKeyDoctorCtxrcValidationOkNoFile = "doctor.ctxrc-validation.ok-no-file"
+ // DescKeyDoctorCtxrcValidationWarning is the text key for doctor ctxrc
+ // validation warning messages.
+ DescKeyDoctorCtxrcValidationWarning = "doctor.ctxrc-validation.warning"
+ // DescKeyDoctorDriftDetected is the text key for doctor drift detected
+ // messages.
+ DescKeyDoctorDriftDetected = "doctor.drift.detected"
+ // DescKeyDoctorDriftOk is the text key for doctor drift ok messages.
+ DescKeyDoctorDriftOk = "doctor.drift.ok"
+ // DescKeyDoctorDriftViolations is the text key for doctor drift violations
+ // messages.
+ DescKeyDoctorDriftViolations = "doctor.drift.violations"
+ // DescKeyDoctorDriftWarningLoad is the text key for doctor drift warning load
+ // messages.
+ DescKeyDoctorDriftWarningLoad = "doctor.drift.warning-load"
+ // DescKeyDoctorDriftWarnings is the text key for doctor drift warnings
+ // messages.
+ DescKeyDoctorDriftWarnings = "doctor.drift.warnings"
+ // DescKeyDoctorEventLoggingInfo is the text key for doctor event logging info
+ // messages.
+ DescKeyDoctorEventLoggingInfo = "doctor.event-logging.info"
+ // DescKeyDoctorEventLoggingOk is the text key for doctor event logging ok
+ // messages.
+ DescKeyDoctorEventLoggingOk = "doctor.event-logging.ok"
+ // DescKeyDoctorOutputHeader is the text key for doctor output header messages.
+ DescKeyDoctorOutputHeader = "doctor.output.header"
+ // DescKeyDoctorOutputResultLine is the text key for doctor output result line
+ // messages.
+ DescKeyDoctorOutputResultLine = "doctor.output.result-line"
+ // DescKeyDoctorOutputSeparator is the text key for doctor output separator
+ // messages.
+ DescKeyDoctorOutputSeparator = "doctor.output.separator"
+ // DescKeyDoctorOutputSummary is the text key for doctor output summary
+ // messages.
+ DescKeyDoctorOutputSummary = "doctor.output.summary"
+ // DescKeyDoctorCompanionConfigOk is the text key for doctor companion config
+ // ok messages.
+ DescKeyDoctorCompanionConfigOk = "doctor.companion-config.ok"
+ // DescKeyDoctorCompanionConfigInfo is the text key for doctor companion
+ // config info messages.
+ DescKeyDoctorCompanionConfigInfo = "doctor.companion-config.info"
+ // DescKeyDoctorPluginEnabledGlobalOk is the text key for doctor plugin
+ // enabled global ok messages.
+ DescKeyDoctorPluginEnabledGlobalOk = "doctor.plugin-enabled-global.ok"
+ // DescKeyDoctorPluginEnabledLocalOk is the text key for doctor plugin enabled
+ // local ok messages.
+ DescKeyDoctorPluginEnabledLocalOk = "doctor.plugin-enabled-local.ok"
+ // DescKeyDoctorPluginEnabledWarning is the text key for doctor plugin enabled
+ // warning messages.
+ DescKeyDoctorPluginEnabledWarning = "doctor.plugin-enabled.warning"
+ // DescKeyDoctorPluginInstalledInfo is the text key for doctor plugin
+ // installed info messages.
+ DescKeyDoctorPluginInstalledInfo = "doctor.plugin-installed.info"
+ // DescKeyDoctorPluginInstalledOk is the text key for doctor plugin installed
+ // ok messages.
+ DescKeyDoctorPluginInstalledOk = "doctor.plugin-installed.ok"
+ // DescKeyDoctorRecentEventsInfo is the text key for doctor recent events info
+ // messages.
+ DescKeyDoctorRecentEventsInfo = "doctor.recent-events.info"
+ // DescKeyDoctorRecentEventsOk is the text key for doctor recent events ok
+ // messages.
+ DescKeyDoctorRecentEventsOk = "doctor.recent-events.ok"
+ // DescKeyDoctorRemindersInfo is the text key for doctor reminders info
+ // messages.
+ DescKeyDoctorRemindersInfo = "doctor.reminders.info"
+ // DescKeyDoctorRemindersOk is the text key for doctor reminders ok messages.
+ DescKeyDoctorRemindersOk = "doctor.reminders.ok"
+ // DescKeyDoctorRequiredFilesError is the text key for doctor required files
+ // error messages.
+ DescKeyDoctorRequiredFilesError = "doctor.required-files.error"
+ // DescKeyDoctorRequiredFilesOk is the text key for doctor required files ok
+ // messages.
+ DescKeyDoctorRequiredFilesOk = "doctor.required-files.ok"
+ // DescKeyDoctorResourceDiskFormat is the text key for doctor resource disk
+ // format messages.
+ DescKeyDoctorResourceDiskFormat = "doctor.resource-disk.format"
+ // DescKeyDoctorResourceLoadFormat is the text key for doctor resource load
+ // format messages.
+ DescKeyDoctorResourceLoadFormat = "doctor.resource-load.format"
+ // DescKeyDoctorResourceMemoryFormat is the text key for doctor resource
+ // memory format messages.
+ DescKeyDoctorResourceMemoryFormat = "doctor.resource-memory.format"
+ // DescKeyDoctorResourceSwapFormat is the text key for doctor resource swap
+ // format messages.
+ DescKeyDoctorResourceSwapFormat = "doctor.resource-swap.format"
+ // DescKeyDoctorTaskCompletionFormat is the text key for doctor task
+ // completion format messages.
+ DescKeyDoctorTaskCompletionFormat = "doctor.task-completion.format"
+ // DescKeyDoctorTaskCompletionWarningSuffix is the text key for doctor task
+ // completion warning suffix messages.
DescKeyDoctorTaskCompletionWarningSuffix = "doctor.task-completion.warning-suffix"
- DescKeyDoctorWebhookInfo = "doctor.webhook.info"
- DescKeyDoctorWebhookOk = "doctor.webhook.ok"
+ // DescKeyDoctorWebhookInfo is the text key for doctor webhook info messages.
+ DescKeyDoctorWebhookInfo = "doctor.webhook.info"
+ // DescKeyDoctorWebhookOk is the text key for doctor webhook ok messages.
+ DescKeyDoctorWebhookOk = "doctor.webhook.ok"
)
diff --git a/internal/config/embed/text/drift.go b/internal/config/embed/text/drift.go
index 32c5069b6..c1b841a8c 100644
--- a/internal/config/embed/text/drift.go
+++ b/internal/config/embed/text/drift.go
@@ -8,57 +8,128 @@ package text
// DescKeys for drift detection.
const (
- DescKeyDriftDeadPath = "drift.dead-path"
- DescKeyDriftEntryCount = "drift.entry-count"
- DescKeyDriftMissingFile = "drift.missing-file"
- DescKeyDriftRegenerated = "drift.regenerated"
- DescKeyDriftMissingPackage = "drift.missing-package"
- DescKeyDriftSecret = "drift.secret"
- DescKeyDriftStaleAge = "drift.stale-age"
- DescKeyDriftStaleness = "drift.staleness"
- DescKeyDriftCleared = "drift.cleared"
- DescKeyDriftApplying = "drift.applying"
- DescKeyDriftFixedCount = "drift.fixed-count"
- DescKeyDriftSkippedCount = "drift.skipped-count"
- DescKeyDriftFixError = "drift.fix-error"
- DescKeyDriftRechecking = "drift.rechecking"
- DescKeyDriftFixStaleness = "drift.fix-staleness"
- DescKeyDriftFixStalenessErr = "drift.fix-staleness-err"
- DescKeyDriftFixMissing = "drift.fix-missing"
- DescKeyDriftFixMissingErr = "drift.fix-missing-err"
- DescKeyDriftSkipDeadPath = "drift.skip-dead-path"
- DescKeyDriftSkipStaleAge = "drift.skip-stale-age"
- DescKeyDriftSkipSensitiveFile = "drift.skip-sensitive-file"
- DescKeyDriftArchived = "drift.archived"
- DescKeyDriftReportHeading = "drift.report-heading"
- DescKeyDriftReportSeparator = "drift.report-separator"
- DescKeyDriftViolationsHeading = "drift.violations-heading"
- DescKeyDriftViolationLine = "drift.violation-line"
- DescKeyDriftViolationLineLoc = "drift.violation-line-loc"
- DescKeyDriftViolationRule = "drift.violation-rule"
- DescKeyDriftWarningsHeading = "drift.warnings-heading"
- DescKeyDriftPathRefsLabel = "drift.path-refs-label"
- DescKeyDriftPathRefLine = "drift.path-ref-line"
- DescKeyDriftStalenessLabel = "drift.staleness-label"
- DescKeyDriftStalenessLine = "drift.staleness-line"
- DescKeyDriftOtherLabel = "drift.other-label"
- DescKeyDriftOtherLine = "drift.other-line"
- DescKeyDriftPassedHeading = "drift.passed-heading"
- DescKeyDriftPassedLine = "drift.passed-line"
- DescKeyDriftStatusViolation = "drift.status-violation"
- DescKeyDriftStatusWarning = "drift.status-warning"
- DescKeyDriftStatusOK = "drift.status-ok"
- DescKeyDriftCheckPathRefs = "drift.check-path-refs"
- DescKeyDriftCheckStaleness = "drift.check-staleness"
- DescKeyDriftCheckConstitution = "drift.check-constitution"
- DescKeyDriftCheckRequired = "drift.check-required"
- DescKeyDriftCheckFileAge = "drift.check-file-age"
- DescKeyDriftStaleHeader = "drift.stale-header"
- DescKeyDriftCheckTemplateHeader = "drift.check-template-header"
- DescKeyDriftInvalidTool = "drift.invalid-tool"
- DescKeyDriftHookNoExec = "drift.hook-no-exec"
- DescKeyDriftStaleSyncFile = "drift.stale-sync-file"
- DescKeyDriftToolSuffix = "drift.tool-suffix"
- DescKeyVersionDriftRelayMessage = "version-drift.relay-message"
+ // DescKeyDriftDeadPath is the text key for drift dead path messages.
+ DescKeyDriftDeadPath = "drift.dead-path"
+ // DescKeyDriftEntryCount is the text key for drift entry count messages.
+ DescKeyDriftEntryCount = "drift.entry-count"
+ // DescKeyDriftMissingFile is the text key for drift missing file messages.
+ DescKeyDriftMissingFile = "drift.missing-file"
+ // DescKeyDriftRegenerated is the text key for drift regenerated messages.
+ DescKeyDriftRegenerated = "drift.regenerated"
+ // DescKeyDriftMissingPackage is the text key for drift missing package
+ // messages.
+ DescKeyDriftMissingPackage = "drift.missing-package"
+ // DescKeyDriftSecret is the text key for drift secret messages.
+ DescKeyDriftSecret = "drift.secret"
+ // DescKeyDriftStaleAge is the text key for drift stale age messages.
+ DescKeyDriftStaleAge = "drift.stale-age"
+ // DescKeyDriftStaleness is the text key for drift staleness messages.
+ DescKeyDriftStaleness = "drift.staleness"
+ // DescKeyDriftCleared is the text key for drift cleared messages.
+ DescKeyDriftCleared = "drift.cleared"
+ // DescKeyDriftApplying is the text key for drift applying messages.
+ DescKeyDriftApplying = "drift.applying"
+ // DescKeyDriftFixedCount is the text key for drift fixed count messages.
+ DescKeyDriftFixedCount = "drift.fixed-count"
+ // DescKeyDriftSkippedCount is the text key for drift skipped count messages.
+ DescKeyDriftSkippedCount = "drift.skipped-count"
+ // DescKeyDriftFixError is the text key for drift fix error messages.
+ DescKeyDriftFixError = "drift.fix-error"
+ // DescKeyDriftRechecking is the text key for drift rechecking messages.
+ DescKeyDriftRechecking = "drift.rechecking"
+ // DescKeyDriftFixStaleness is the text key for drift fix staleness messages.
+ DescKeyDriftFixStaleness = "drift.fix-staleness"
+ // DescKeyDriftFixStalenessErr is the text key for drift fix staleness err
+ // messages.
+ DescKeyDriftFixStalenessErr = "drift.fix-staleness-err"
+ // DescKeyDriftFixMissing is the text key for drift fix missing messages.
+ DescKeyDriftFixMissing = "drift.fix-missing"
+ // DescKeyDriftFixMissingErr is the text key for drift fix missing err
+ // messages.
+ DescKeyDriftFixMissingErr = "drift.fix-missing-err"
+ // DescKeyDriftSkipDeadPath is the text key for drift skip dead path messages.
+ DescKeyDriftSkipDeadPath = "drift.skip-dead-path"
+ // DescKeyDriftSkipStaleAge is the text key for drift skip stale age messages.
+ DescKeyDriftSkipStaleAge = "drift.skip-stale-age"
+ // DescKeyDriftSkipSensitiveFile is the text key for drift skip sensitive file
+ // messages.
+ DescKeyDriftSkipSensitiveFile = "drift.skip-sensitive-file"
+ // DescKeyDriftArchived is the text key for drift archived messages.
+ DescKeyDriftArchived = "drift.archived"
+ // DescKeyDriftReportHeading is the text key for drift report heading messages.
+ DescKeyDriftReportHeading = "drift.report-heading"
+ // DescKeyDriftReportSeparator is the text key for drift report separator
+ // messages.
+ DescKeyDriftReportSeparator = "drift.report-separator"
+ // DescKeyDriftViolationsHeading is the text key for drift violations heading
+ // messages.
+ DescKeyDriftViolationsHeading = "drift.violations-heading"
+ // DescKeyDriftViolationLine is the text key for drift violation line messages.
+ DescKeyDriftViolationLine = "drift.violation-line"
+ // DescKeyDriftViolationLineLoc is the text key for drift violation line loc
+ // messages.
+ DescKeyDriftViolationLineLoc = "drift.violation-line-loc"
+ // DescKeyDriftViolationRule is the text key for drift violation rule messages.
+ DescKeyDriftViolationRule = "drift.violation-rule"
+ // DescKeyDriftWarningsHeading is the text key for drift warnings heading
+ // messages.
+ DescKeyDriftWarningsHeading = "drift.warnings-heading"
+ // DescKeyDriftPathRefsLabel is the text key for drift path refs label
+ // messages.
+ DescKeyDriftPathRefsLabel = "drift.path-refs-label"
+ // DescKeyDriftPathRefLine is the text key for drift path ref line messages.
+ DescKeyDriftPathRefLine = "drift.path-ref-line"
+ // DescKeyDriftStalenessLabel is the text key for drift staleness label
+ // messages.
+ DescKeyDriftStalenessLabel = "drift.staleness-label"
+ // DescKeyDriftStalenessLine is the text key for drift staleness line messages.
+ DescKeyDriftStalenessLine = "drift.staleness-line"
+ // DescKeyDriftOtherLabel is the text key for drift other label messages.
+ DescKeyDriftOtherLabel = "drift.other-label"
+ // DescKeyDriftOtherLine is the text key for drift other line messages.
+ DescKeyDriftOtherLine = "drift.other-line"
+ // DescKeyDriftPassedHeading is the text key for drift passed heading messages.
+ DescKeyDriftPassedHeading = "drift.passed-heading"
+ // DescKeyDriftPassedLine is the text key for drift passed line messages.
+ DescKeyDriftPassedLine = "drift.passed-line"
+ // DescKeyDriftStatusViolation is the text key for drift status violation
+ // messages.
+ DescKeyDriftStatusViolation = "drift.status-violation"
+ // DescKeyDriftStatusWarning is the text key for drift status warning messages.
+ DescKeyDriftStatusWarning = "drift.status-warning"
+ // DescKeyDriftStatusOK is the text key for drift status ok messages.
+ DescKeyDriftStatusOK = "drift.status-ok"
+ // DescKeyDriftCheckPathRefs is the text key for drift check path refs
+ // messages.
+ DescKeyDriftCheckPathRefs = "drift.check-path-refs"
+ // DescKeyDriftCheckStaleness is the text key for drift check staleness
+ // messages.
+ DescKeyDriftCheckStaleness = "drift.check-staleness"
+ // DescKeyDriftCheckConstitution is the text key for drift check constitution
+ // messages.
+ DescKeyDriftCheckConstitution = "drift.check-constitution"
+ // DescKeyDriftCheckRequired is the text key for drift check required messages.
+ DescKeyDriftCheckRequired = "drift.check-required"
+ // DescKeyDriftCheckFileAge is the text key for drift check file age messages.
+ DescKeyDriftCheckFileAge = "drift.check-file-age"
+ // DescKeyDriftStaleHeader is the text key for drift stale header messages.
+ DescKeyDriftStaleHeader = "drift.stale-header"
+ // DescKeyDriftCheckTemplateHeader is the text key for drift check template
+ // header messages.
+ DescKeyDriftCheckTemplateHeader = "drift.check-template-header"
+ // DescKeyDriftInvalidTool is the text key for drift invalid tool messages.
+ DescKeyDriftInvalidTool = "drift.invalid-tool"
+ // DescKeyDriftHookNoExec is the text key for drift hook no exec messages.
+ DescKeyDriftHookNoExec = "drift.hook-no-exec"
+ // DescKeyDriftStaleSyncFile is the text key for drift stale sync file
+ // messages.
+ DescKeyDriftStaleSyncFile = "drift.stale-sync-file"
+ // DescKeyDriftToolSuffix is the text key for drift tool suffix messages.
+ DescKeyDriftToolSuffix = "drift.tool-suffix"
+ // DescKeyVersionDriftRelayMessage is the text key for version drift relay
+ // message messages.
+ DescKeyVersionDriftRelayMessage = "version-drift.relay-message"
+ // DescKeyWriteVersionDriftFallback is the text key for write version drift
+ // fallback messages.
DescKeyWriteVersionDriftFallback = "write.version-drift-fallback"
)
diff --git a/internal/config/embed/text/err_add.go b/internal/config/embed/text/err_add.go
index 8d26a970a..4e61fef8a 100644
--- a/internal/config/embed/text/err_add.go
+++ b/internal/config/embed/text/err_add.go
@@ -8,10 +8,19 @@ package text
// DescKeys for add operations errors.
const (
- DescKeyErrAddFileNotFound = "err.add.file-not-found"
- DescKeyErrAddIndexUpdate = "err.add.index-update"
- DescKeyErrAddMissingFields = "err.add.missing-fields"
- DescKeyErrAddNoContent = "err.add.no-content"
+ // DescKeyErrAddFileNotFound is the text key for err add file not found
+ // messages.
+ DescKeyErrAddFileNotFound = "err.add.file-not-found"
+ // DescKeyErrAddIndexUpdate is the text key for err add index update messages.
+ DescKeyErrAddIndexUpdate = "err.add.index-update"
+ // DescKeyErrAddMissingFields is the text key for err add missing fields
+ // messages.
+ DescKeyErrAddMissingFields = "err.add.missing-fields"
+ // DescKeyErrAddNoContent is the text key for err add no content messages.
+ DescKeyErrAddNoContent = "err.add.no-content"
+ // DescKeyErrAddNoContentProvided is the text key for err add no content
+ // provided messages.
DescKeyErrAddNoContentProvided = "err.add.no-content-provided"
- DescKeyErrAddUnknownType = "err.add.unknown-type"
+ // DescKeyErrAddUnknownType is the text key for err add unknown type messages.
+ DescKeyErrAddUnknownType = "err.add.unknown-type"
)
diff --git a/internal/config/embed/text/err_backup.go b/internal/config/embed/text/err_backup.go
index 0b785908b..55d422b00 100644
--- a/internal/config/embed/text/err_backup.go
+++ b/internal/config/embed/text/err_backup.go
@@ -8,18 +8,45 @@ package text
// DescKeys for backup operations errors.
const (
- DescKeyErrBackupBackupGlobal = "err.backup.backup-global"
- DescKeyErrBackupBackupProject = "err.backup.backup-project"
- DescKeyErrBackupBackupSMBConfig = "err.backup.backup-smb-config"
+ // DescKeyErrBackupBackupGlobal is the text key for err backup backup global
+ // messages.
+ DescKeyErrBackupBackupGlobal = "err.backup.backup-global"
+ // DescKeyErrBackupBackupProject is the text key for err backup backup project
+ // messages.
+ DescKeyErrBackupBackupProject = "err.backup.backup-project"
+ // DescKeyErrBackupBackupSMBConfig is the text key for err backup backup smb
+ // config messages.
+ DescKeyErrBackupBackupSMBConfig = "err.backup.backup-smb-config"
+ // DescKeyErrBackupContextDirNotFound is the text key for err backup context
+ // dir not found messages.
DescKeyErrBackupContextDirNotFound = "err.backup.context-dir-not-found"
- DescKeyErrBackupCreateArchive = "err.backup.create-archive"
- DescKeyErrBackupCreateArchiveDir = "err.backup.create-archive-dir"
- DescKeyErrBackupCreateBackup = "err.backup.create-backup"
+ // DescKeyErrBackupCreateArchive is the text key for err backup create archive
+ // messages.
+ DescKeyErrBackupCreateArchive = "err.backup.create-archive"
+ // DescKeyErrBackupCreateArchiveDir is the text key for err backup create
+ // archive dir messages.
+ DescKeyErrBackupCreateArchiveDir = "err.backup.create-archive-dir"
+ // DescKeyErrBackupCreateBackup is the text key for err backup create backup
+ // messages.
+ DescKeyErrBackupCreateBackup = "err.backup.create-backup"
+ // DescKeyErrBackupInvalidBackupScope is the text key for err backup invalid
+ // backup scope messages.
DescKeyErrBackupInvalidBackupScope = "err.backup.invalid-backup-scope"
- DescKeyErrBackupInvalidSMBURL = "err.backup.invalid-smb-url"
- DescKeyErrBackupMountFailed = "err.backup.mount-failed"
- DescKeyErrBackupSMBMissingShare = "err.backup.smb-missing-share"
- DescKeyErrBackupSourceNotFound = "err.backup.source-not-found"
- DescKeyErrBackupWriteArchive = "err.backup.write-archive"
- DescKeyErrBackupWriteSMB = "err.backup.write-smb"
+ // DescKeyErrBackupInvalidSMBURL is the text key for err backup invalid smb
+ // url messages.
+ DescKeyErrBackupInvalidSMBURL = "err.backup.invalid-smb-url"
+ // DescKeyErrBackupMountFailed is the text key for err backup mount failed
+ // messages.
+ DescKeyErrBackupMountFailed = "err.backup.mount-failed"
+ // DescKeyErrBackupSMBMissingShare is the text key for err backup smb missing
+ // share messages.
+ DescKeyErrBackupSMBMissingShare = "err.backup.smb-missing-share"
+ // DescKeyErrBackupSourceNotFound is the text key for err backup source not
+ // found messages.
+ DescKeyErrBackupSourceNotFound = "err.backup.source-not-found"
+ // DescKeyErrBackupWriteArchive is the text key for err backup write archive
+ // messages.
+ DescKeyErrBackupWriteArchive = "err.backup.write-archive"
+ // DescKeyErrBackupWriteSMB is the text key for err backup write smb messages.
+ DescKeyErrBackupWriteSMB = "err.backup.write-smb"
)
diff --git a/internal/config/embed/text/err_cli.go b/internal/config/embed/text/err_cli.go
index 76ab35cee..ea7e91817 100644
--- a/internal/config/embed/text/err_cli.go
+++ b/internal/config/embed/text/err_cli.go
@@ -8,5 +8,7 @@ package text
// DescKeys for CLI errors.
const (
+ // DescKeyErrCliNoToolSpecified is the text key for err cli no tool specified
+ // messages.
DescKeyErrCliNoToolSpecified = "err.cli.no-tool-specified"
)
diff --git a/internal/config/embed/text/err_config.go b/internal/config/embed/text/err_config.go
index ea44132ac..11d7672b0 100644
--- a/internal/config/embed/text/err_config.go
+++ b/internal/config/embed/text/err_config.go
@@ -8,16 +8,40 @@ package text
// DescKeys for configuration errors.
const (
- DescKeyErrConfigGoldenNotFound = "err.config.golden-not-found"
- DescKeyErrConfigInvalidTool = "err.config.invalid-tool"
- DescKeyErrConfigMarshalPlugins = "err.config.marshal-plugins"
- DescKeyErrConfigMarshalSettings = "err.config.marshal-settings"
+ // DescKeyErrConfigGoldenNotFound is the text key for err config golden not
+ // found messages.
+ DescKeyErrConfigGoldenNotFound = "err.config.golden-not-found"
+ // DescKeyErrConfigInvalidTool is the text key for err config invalid tool
+ // messages.
+ DescKeyErrConfigInvalidTool = "err.config.invalid-tool"
+ // DescKeyErrConfigMarshalPlugins is the text key for err config marshal
+ // plugins messages.
+ DescKeyErrConfigMarshalPlugins = "err.config.marshal-plugins"
+ // DescKeyErrConfigMarshalSettings is the text key for err config marshal
+ // settings messages.
+ DescKeyErrConfigMarshalSettings = "err.config.marshal-settings"
+ // DescKeyErrConfigReadEmbeddedSchema is the text key for err config read
+ // embedded schema messages.
DescKeyErrConfigReadEmbeddedSchema = "err.config.read-embedded-schema"
- DescKeyErrConfigReadProfile = "err.config.read-profile"
- DescKeyErrConfigSettingsNotFound = "err.config.settings-not-found"
- DescKeyErrConfigUnknownFormat = "err.config.unknown-format"
- DescKeyErrConfigUnknownProfile = "err.config.unknown-profile"
+ // DescKeyErrConfigReadProfile is the text key for err config read profile
+ // messages.
+ DescKeyErrConfigReadProfile = "err.config.read-profile"
+ // DescKeyErrConfigSettingsNotFound is the text key for err config settings
+ // not found messages.
+ DescKeyErrConfigSettingsNotFound = "err.config.settings-not-found"
+ // DescKeyErrConfigUnknownFormat is the text key for err config unknown format
+ // messages.
+ DescKeyErrConfigUnknownFormat = "err.config.unknown-format"
+ // DescKeyErrConfigUnknownProfile is the text key for err config unknown
+ // profile messages.
+ DescKeyErrConfigUnknownProfile = "err.config.unknown-profile"
+ // DescKeyErrConfigUnknownProjectType is the text key for err config unknown
+ // project type messages.
DescKeyErrConfigUnknownProjectType = "err.config.unknown-project-type"
- DescKeyErrConfigUnknownUpdateType = "err.config.unknown-update-type"
- DescKeyErrConfigUnsupportedTool = "err.config.unsupported-tool"
+ // DescKeyErrConfigUnknownUpdateType is the text key for err config unknown
+ // update type messages.
+ DescKeyErrConfigUnknownUpdateType = "err.config.unknown-update-type"
+ // DescKeyErrConfigUnsupportedTool is the text key for err config unsupported
+ // tool messages.
+ DescKeyErrConfigUnsupportedTool = "err.config.unsupported-tool"
)
diff --git a/internal/config/embed/text/err_crypto.go b/internal/config/embed/text/err_crypto.go
index d583b301f..ba35531bd 100644
--- a/internal/config/embed/text/err_crypto.go
+++ b/internal/config/embed/text/err_crypto.go
@@ -8,19 +8,43 @@ package text
// DescKeys for cryptography errors.
const (
+ // DescKeyErrCryptoCiphertextTooShort is the text key for err crypto
+ // ciphertext too short messages.
DescKeyErrCryptoCiphertextTooShort = "err.crypto.ciphertext-too-short"
- DescKeyErrCryptoCreateCipher = "err.crypto.create-cipher"
- DescKeyErrCryptoCreateGCM = "err.crypto.create-gcm"
- DescKeyErrCryptoDecrypt = "err.crypto.decrypt"
- DescKeyErrCryptoDecryptFailed = "err.crypto.decrypt-failed"
- DescKeyErrCryptoEncryptFailed = "err.crypto.encrypt-failed"
- DescKeyErrCryptoGenerateKey = "err.crypto.generate-key"
- DescKeyErrCryptoGenerateNonce = "err.crypto.generate-nonce"
- DescKeyErrCryptoInvalidKeySize = "err.crypto.invalid-key-size"
- DescKeyErrCryptoLoadKey = "err.crypto.load-key"
- DescKeyErrCryptoMkdirKeyDir = "err.crypto.mkdir-key-dir"
- DescKeyErrCryptoNoKeyAt = "err.crypto.no-key-at"
- DescKeyErrCryptoReadKey = "err.crypto.read-key"
- DescKeyErrCryptoSaveKey = "err.crypto.save-key"
- DescKeyErrCryptoWriteKey = "err.crypto.write-key"
+ // DescKeyErrCryptoCreateCipher is the text key for err crypto create cipher
+ // messages.
+ DescKeyErrCryptoCreateCipher = "err.crypto.create-cipher"
+ // DescKeyErrCryptoCreateGCM is the text key for err crypto create gcm
+ // messages.
+ DescKeyErrCryptoCreateGCM = "err.crypto.create-gcm"
+ // DescKeyErrCryptoDecrypt is the text key for err crypto decrypt messages.
+ DescKeyErrCryptoDecrypt = "err.crypto.decrypt"
+ // DescKeyErrCryptoDecryptFailed is the text key for err crypto decrypt failed
+ // messages.
+ DescKeyErrCryptoDecryptFailed = "err.crypto.decrypt-failed"
+ // DescKeyErrCryptoEncryptFailed is the text key for err crypto encrypt failed
+ // messages.
+ DescKeyErrCryptoEncryptFailed = "err.crypto.encrypt-failed"
+ // DescKeyErrCryptoGenerateKey is the text key for err crypto generate key
+ // messages.
+ DescKeyErrCryptoGenerateKey = "err.crypto.generate-key"
+ // DescKeyErrCryptoGenerateNonce is the text key for err crypto generate nonce
+ // messages.
+ DescKeyErrCryptoGenerateNonce = "err.crypto.generate-nonce"
+ // DescKeyErrCryptoInvalidKeySize is the text key for err crypto invalid key
+ // size messages.
+ DescKeyErrCryptoInvalidKeySize = "err.crypto.invalid-key-size"
+ // DescKeyErrCryptoLoadKey is the text key for err crypto load key messages.
+ DescKeyErrCryptoLoadKey = "err.crypto.load-key"
+ // DescKeyErrCryptoMkdirKeyDir is the text key for err crypto mkdir key dir
+ // messages.
+ DescKeyErrCryptoMkdirKeyDir = "err.crypto.mkdir-key-dir"
+ // DescKeyErrCryptoNoKeyAt is the text key for err crypto no key at messages.
+ DescKeyErrCryptoNoKeyAt = "err.crypto.no-key-at"
+ // DescKeyErrCryptoReadKey is the text key for err crypto read key messages.
+ DescKeyErrCryptoReadKey = "err.crypto.read-key"
+ // DescKeyErrCryptoSaveKey is the text key for err crypto save key messages.
+ DescKeyErrCryptoSaveKey = "err.crypto.save-key"
+ // DescKeyErrCryptoWriteKey is the text key for err crypto write key messages.
+ DescKeyErrCryptoWriteKey = "err.crypto.write-key"
)
diff --git a/internal/config/embed/text/err_dep.go b/internal/config/embed/text/err_dep.go
index cd267230e..9a57405db 100644
--- a/internal/config/embed/text/err_dep.go
+++ b/internal/config/embed/text/err_dep.go
@@ -8,6 +8,10 @@ package text
// DescKeys for dependency tracking errors.
const (
+ // DescKeyErrDepsCargoMetadataFailed is the text key for err deps cargo
+ // metadata failed messages.
DescKeyErrDepsCargoMetadataFailed = "err.deps.cargo-metadata-failed"
- DescKeyErrDepsParseCargoMetadata = "err.deps.parse-cargo-metadata"
+ // DescKeyErrDepsParseCargoMetadata is the text key for err deps parse cargo
+ // metadata messages.
+ DescKeyErrDepsParseCargoMetadata = "err.deps.parse-cargo-metadata"
)
diff --git a/internal/config/embed/text/err_fs.go b/internal/config/embed/text/err_fs.go
index 4a8df49d3..9c6e2a94a 100644
--- a/internal/config/embed/text/err_fs.go
+++ b/internal/config/embed/text/err_fs.go
@@ -8,40 +8,76 @@ package text
// DescKeys for filesystem operations errors.
const (
- DescKeyErrFsBoundaryViolation = "err.fs.boundary-violation"
- DescKeyErrFsCreateDir = "err.fs.create-dir"
- DescKeyErrFsDirNotFound = "err.fs.dir-not-found"
- DescKeyErrFsFileAmend = "err.fs.file-amend"
- DescKeyErrFsFileRead = "err.fs.file-read"
- DescKeyErrFsFileUpdate = "err.fs.file-update"
- DescKeyErrFsFileWrite = "err.fs.file-write"
- DescKeyErrFsMkdir = "err.fs.mkdir"
- DescKeyErrFsNoInput = "err.fs.no-input"
- DescKeyErrFsNotDirectory = "err.fs.not-directory"
- DescKeyErrFsOpenFile = "err.fs.open-file"
- DescKeyErrFsPathEscapesBase = "err.fs.path-escapes-base"
- DescKeyErrFsReadDir = "err.fs.read-dir"
- DescKeyErrFsReadDirectory = "err.fs.read-directory"
- DescKeyErrFsReadFile = "err.fs.read-file"
- DescKeyErrFsReadInput = "err.fs.read-input"
- DescKeyErrFsReadInputStream = "err.fs.read-input-stream"
- DescKeyErrFsRefuseSystemPath = "err.fs.refuse-system-path"
+ // DescKeyErrFsBoundaryViolation is the text key for err fs boundary violation
+ // messages.
+ DescKeyErrFsBoundaryViolation = "err.fs.boundary-violation"
+ // DescKeyErrFsCreateDir is the text key for err fs create dir messages.
+ DescKeyErrFsCreateDir = "err.fs.create-dir"
+ // DescKeyErrFsDirNotFound is the text key for err fs dir not found messages.
+ DescKeyErrFsDirNotFound = "err.fs.dir-not-found"
+ // DescKeyErrFsFileAmend is the text key for err fs file amend messages.
+ DescKeyErrFsFileAmend = "err.fs.file-amend"
+ // DescKeyErrFsFileRead is the text key for err fs file read messages.
+ DescKeyErrFsFileRead = "err.fs.file-read"
+ // DescKeyErrFsFileUpdate is the text key for err fs file update messages.
+ DescKeyErrFsFileUpdate = "err.fs.file-update"
+ // DescKeyErrFsFileWrite is the text key for err fs file write messages.
+ DescKeyErrFsFileWrite = "err.fs.file-write"
+ // DescKeyErrFsMkdir is the text key for err fs mkdir messages.
+ DescKeyErrFsMkdir = "err.fs.mkdir"
+ // DescKeyErrFsNoInput is the text key for err fs no input messages.
+ DescKeyErrFsNoInput = "err.fs.no-input"
+ // DescKeyErrFsNotDirectory is the text key for err fs not directory messages.
+ DescKeyErrFsNotDirectory = "err.fs.not-directory"
+ // DescKeyErrFsOpenFile is the text key for err fs open file messages.
+ DescKeyErrFsOpenFile = "err.fs.open-file"
+ // DescKeyErrFsPathEscapesBase is the text key for err fs path escapes base
+ // messages.
+ DescKeyErrFsPathEscapesBase = "err.fs.path-escapes-base"
+ // DescKeyErrFsReadDir is the text key for err fs read dir messages.
+ DescKeyErrFsReadDir = "err.fs.read-dir"
+ // DescKeyErrFsReadDirectory is the text key for err fs read directory
+ // messages.
+ DescKeyErrFsReadDirectory = "err.fs.read-directory"
+ // DescKeyErrFsReadFile is the text key for err fs read file messages.
+ DescKeyErrFsReadFile = "err.fs.read-file"
+ // DescKeyErrFsReadInput is the text key for err fs read input messages.
+ DescKeyErrFsReadInput = "err.fs.read-input"
+ // DescKeyErrFsReadInputStream is the text key for err fs read input stream
+ // messages.
+ DescKeyErrFsReadInputStream = "err.fs.read-input-stream"
+ // DescKeyErrFsRefuseSystemPath is the text key for err fs refuse system path
+ // messages.
+ DescKeyErrFsRefuseSystemPath = "err.fs.refuse-system-path"
+ // DescKeyErrFsRefuseSystemPathRoot is the text key for err fs refuse system
+ // path root messages.
DescKeyErrFsRefuseSystemPathRoot = "err.fs.refuse-system-path-root"
- DescKeyErrFsResolveBase = "err.fs.resolve-base"
- DescKeyErrFsResolvePath = "err.fs.resolve-path"
- DescKeyErrFsStatPath = "err.fs.stat-path"
- DescKeyErrFsStdinRead = "err.fs.stdin-read"
- DescKeyErrFsWriteBuffer = "err.fs.write-buffer"
- DescKeyErrFsWriteFileFailed = "err.fs.write-file-failed"
- DescKeyErrFsWriteMerged = "err.fs.write-merged"
+ // DescKeyErrFsResolveBase is the text key for err fs resolve base messages.
+ DescKeyErrFsResolveBase = "err.fs.resolve-base"
+ // DescKeyErrFsResolvePath is the text key for err fs resolve path messages.
+ DescKeyErrFsResolvePath = "err.fs.resolve-path"
+ // DescKeyErrFsStatPath is the text key for err fs stat path messages.
+ DescKeyErrFsStatPath = "err.fs.stat-path"
+ // DescKeyErrFsStdinRead is the text key for err fs stdin read messages.
+ DescKeyErrFsStdinRead = "err.fs.stdin-read"
+ // DescKeyErrFsWriteBuffer is the text key for err fs write buffer messages.
+ DescKeyErrFsWriteBuffer = "err.fs.write-buffer"
+ // DescKeyErrFsWriteFileFailed is the text key for err fs write file failed
+ // messages.
+ DescKeyErrFsWriteFileFailed = "err.fs.write-file-failed"
+ // DescKeyErrFsWriteMerged is the text key for err fs write merged messages.
+ DescKeyErrFsWriteMerged = "err.fs.write-merged"
)
// DescKeys for context directory errors.
const (
+ // DescKeyErrContextDirNotFound is the text key for err context dir not found
+ // messages.
DescKeyErrContextDirNotFound = "err.context.dir-not-found"
)
// DescKeys for filesystem write output.
const (
+ // DescKeyWritePathExists is the text key for write path exists messages.
DescKeyWritePathExists = "write.path-exists"
)
diff --git a/internal/config/embed/text/err_hook.go b/internal/config/embed/text/err_hook.go
index 4befa572f..9f16f6a99 100644
--- a/internal/config/embed/text/err_hook.go
+++ b/internal/config/embed/text/err_hook.go
@@ -8,25 +8,59 @@ package text
// DescKeys for hook execution errors.
const (
- DescKeyErrHookChmod = "err.hook.chmod"
- DescKeyErrHookCreateDir = "err.hook.create-dir"
- DescKeyErrHookDiscover = "err.hook.discover"
+ // DescKeyErrHookChmod is the text key for err hook chmod messages.
+ DescKeyErrHookChmod = "err.hook.chmod"
+ // DescKeyErrHookCreateDir is the text key for err hook create dir messages.
+ DescKeyErrHookCreateDir = "err.hook.create-dir"
+ // DescKeyErrHookDiscover is the text key for err hook discover messages.
+ DescKeyErrHookDiscover = "err.hook.discover"
+ // DescKeyErrHookEmbeddedTemplateNotFound is the text key for err hook
+ // embedded template not found messages.
DescKeyErrHookEmbeddedTemplateNotFound = "err.hook.embedded-template-not-found"
- DescKeyErrHookExit = "err.hook.exit"
- DescKeyErrHookInvalidJSONOutput = "err.hook.invalid-json-output"
- DescKeyErrHookInvalidType = "err.hook.invalid-type"
- DescKeyErrHookMarshalInput = "err.hook.marshal-input"
- DescKeyErrHookNotFound = "err.hook.not-found"
- DescKeyErrHookOverrideExists = "err.hook.override-exists"
- DescKeyErrHookRemoveOverride = "err.hook.remove-override"
- DescKeyErrHookResolveHooksDir = "err.hook.resolve-hooks-dir"
- DescKeyErrHookResolvePath = "err.hook.resolve-path"
- DescKeyErrHookScriptExists = "err.hook.script-exists"
- DescKeyErrHookStat = "err.hook.stat"
- DescKeyErrHookStatPath = "err.hook.stat-path"
- DescKeyErrHookTimeout = "err.hook.timeout"
- DescKeyErrHookUnknownHook = "err.hook.unknown-hook"
- DescKeyErrHookUnknownVariant = "err.hook.unknown-variant"
- DescKeyErrHookWriteOverride = "err.hook.write-override"
- DescKeyErrHookWriteScript = "err.hook.write-script"
+ // DescKeyErrHookExit is the text key for err hook exit messages.
+ DescKeyErrHookExit = "err.hook.exit"
+ // DescKeyErrHookInvalidJSONOutput is the text key for err hook invalid json
+ // output messages.
+ DescKeyErrHookInvalidJSONOutput = "err.hook.invalid-json-output"
+ // DescKeyErrHookInvalidType is the text key for err hook invalid type
+ // messages.
+ DescKeyErrHookInvalidType = "err.hook.invalid-type"
+ // DescKeyErrHookMarshalInput is the text key for err hook marshal input
+ // messages.
+ DescKeyErrHookMarshalInput = "err.hook.marshal-input"
+ // DescKeyErrHookNotFound is the text key for err hook not found messages.
+ DescKeyErrHookNotFound = "err.hook.not-found"
+ // DescKeyErrHookOverrideExists is the text key for err hook override exists
+ // messages.
+ DescKeyErrHookOverrideExists = "err.hook.override-exists"
+ // DescKeyErrHookRemoveOverride is the text key for err hook remove override
+ // messages.
+ DescKeyErrHookRemoveOverride = "err.hook.remove-override"
+ // DescKeyErrHookResolveHooksDir is the text key for err hook resolve hooks
+ // dir messages.
+ DescKeyErrHookResolveHooksDir = "err.hook.resolve-hooks-dir"
+ // DescKeyErrHookResolvePath is the text key for err hook resolve path
+ // messages.
+ DescKeyErrHookResolvePath = "err.hook.resolve-path"
+ // DescKeyErrHookScriptExists is the text key for err hook script exists
+ // messages.
+ DescKeyErrHookScriptExists = "err.hook.script-exists"
+ // DescKeyErrHookStat is the text key for err hook stat messages.
+ DescKeyErrHookStat = "err.hook.stat"
+ // DescKeyErrHookStatPath is the text key for err hook stat path messages.
+ DescKeyErrHookStatPath = "err.hook.stat-path"
+ // DescKeyErrHookTimeout is the text key for err hook timeout messages.
+ DescKeyErrHookTimeout = "err.hook.timeout"
+ // DescKeyErrHookUnknownHook is the text key for err hook unknown hook
+ // messages.
+ DescKeyErrHookUnknownHook = "err.hook.unknown-hook"
+ // DescKeyErrHookUnknownVariant is the text key for err hook unknown variant
+ // messages.
+ DescKeyErrHookUnknownVariant = "err.hook.unknown-variant"
+ // DescKeyErrHookWriteOverride is the text key for err hook write override
+ // messages.
+ DescKeyErrHookWriteOverride = "err.hook.write-override"
+ // DescKeyErrHookWriteScript is the text key for err hook write script
+ // messages.
+ DescKeyErrHookWriteScript = "err.hook.write-script"
)
diff --git a/internal/config/embed/text/err_http.go b/internal/config/embed/text/err_http.go
index 4d68b408c..9409a2c2d 100644
--- a/internal/config/embed/text/err_http.go
+++ b/internal/config/embed/text/err_http.go
@@ -8,7 +8,12 @@ package text
// DescKeys for HTTP operations errors.
const (
- DescKeyErrHttpParseURL = "err.http.parse-url"
+ // DescKeyErrHttpParseURL is the text key for err http parse url messages.
+ DescKeyErrHttpParseURL = "err.http.parse-url"
+ // DescKeyErrHttpTooManyRedirects is the text key for err http too many
+ // redirects messages.
DescKeyErrHttpTooManyRedirects = "err.http.too-many-redirects"
- DescKeyErrHttpUnsafeURLScheme = "err.http.unsafe-url-scheme"
+ // DescKeyErrHttpUnsafeURLScheme is the text key for err http unsafe url
+ // scheme messages.
+ DescKeyErrHttpUnsafeURLScheme = "err.http.unsafe-url-scheme"
)
diff --git a/internal/config/embed/text/err_init.go b/internal/config/embed/text/err_init.go
index 24149d63c..fedfd47eb 100644
--- a/internal/config/embed/text/err_init.go
+++ b/internal/config/embed/text/err_init.go
@@ -8,12 +8,27 @@ package text
// DescKeys for initialization errors.
const (
- DescKeyErrInitCtxNotInPath = "err.init.ctx-not-in-path"
+ // DescKeyErrInitCtxNotInPath is the text key for err init ctx not in path
+ // messages.
+ DescKeyErrInitCtxNotInPath = "err.init.ctx-not-in-path"
+ // DescKeyErrInitContextNotInitialized is the text key for err init context
+ // not initialized messages.
DescKeyErrInitContextNotInitialized = "err.init.context-not-initialized"
- DescKeyErrInitCreateMakefile = "err.init.create-makefile"
- DescKeyErrInitDetectReferenceTime = "err.init.detect-reference-time"
- DescKeyErrInitHomeDir = "err.init.home-dir"
- DescKeyErrInitNotInitialized = "err.init.not-initialized"
- DescKeyErrInitReadInitTemplate = "err.init.read-init-template"
- DescKeyErrInitReadProjectReadme = "err.init.read-project-readme"
+ // DescKeyErrInitCreateMakefile is the text key for err init create makefile
+ // messages.
+ DescKeyErrInitCreateMakefile = "err.init.create-makefile"
+ // DescKeyErrInitDetectReferenceTime is the text key for err init detect
+ // reference time messages.
+ DescKeyErrInitDetectReferenceTime = "err.init.detect-reference-time"
+ // DescKeyErrInitHomeDir is the text key for err init home dir messages.
+ DescKeyErrInitHomeDir = "err.init.home-dir"
+ // DescKeyErrInitNotInitialized is the text key for err init not initialized
+ // messages.
+ DescKeyErrInitNotInitialized = "err.init.not-initialized"
+ // DescKeyErrInitReadInitTemplate is the text key for err init read init
+ // template messages.
+ DescKeyErrInitReadInitTemplate = "err.init.read-init-template"
+ // DescKeyErrInitReadProjectReadme is the text key for err init read project
+ // readme messages.
+ DescKeyErrInitReadProjectReadme = "err.init.read-project-readme"
)
diff --git a/internal/config/embed/text/err_journal.go b/internal/config/embed/text/err_journal.go
index e2e88278c..9a904dacc 100644
--- a/internal/config/embed/text/err_journal.go
+++ b/internal/config/embed/text/err_journal.go
@@ -8,14 +8,34 @@ package text
// DescKeys for journal operations errors.
const (
- DescKeyErrJournalLoadJournalState = "err.journal.load-journal-state"
- DescKeyErrJournalNoEntriesMatch = "err.journal.no-entries-match"
- DescKeyErrJournalNoJournalDir = "err.journal.no-journal-dir"
- DescKeyErrJournalNoJournalEntries = "err.journal.no-journal-entries"
- DescKeyErrJournalReadJournalDir = "err.journal.read-journal-dir"
+ // DescKeyErrJournalLoadJournalState is the text key for err journal load
+ // journal state messages.
+ DescKeyErrJournalLoadJournalState = "err.journal.load-journal-state"
+ // DescKeyErrJournalNoEntriesMatch is the text key for err journal no entries
+ // match messages.
+ DescKeyErrJournalNoEntriesMatch = "err.journal.no-entries-match"
+ // DescKeyErrJournalNoJournalDir is the text key for err journal no journal
+ // dir messages.
+ DescKeyErrJournalNoJournalDir = "err.journal.no-journal-dir"
+ // DescKeyErrJournalNoJournalEntries is the text key for err journal no
+ // journal entries messages.
+ DescKeyErrJournalNoJournalEntries = "err.journal.no-journal-entries"
+ // DescKeyErrJournalReadJournalDir is the text key for err journal read
+ // journal dir messages.
+ DescKeyErrJournalReadJournalDir = "err.journal.read-journal-dir"
+ // DescKeyErrJournalRegenerateRequiresAll is the text key for err journal
+ // regenerate requires all messages.
DescKeyErrJournalRegenerateRequiresAll = "err.journal.regenerate-requires-all"
- DescKeyErrJournalSaveJournalState = "err.journal.save-journal-state"
- DescKeyErrJournalScanJournal = "err.journal.scan-journal"
- DescKeyErrJournalStageNotSet = "err.journal.stage-not-set"
- DescKeyErrJournalUnknownStage = "err.journal.unknown-stage"
+ // DescKeyErrJournalSaveJournalState is the text key for err journal save
+ // journal state messages.
+ DescKeyErrJournalSaveJournalState = "err.journal.save-journal-state"
+ // DescKeyErrJournalScanJournal is the text key for err journal scan journal
+ // messages.
+ DescKeyErrJournalScanJournal = "err.journal.scan-journal"
+ // DescKeyErrJournalStageNotSet is the text key for err journal stage not set
+ // messages.
+ DescKeyErrJournalStageNotSet = "err.journal.stage-not-set"
+ // DescKeyErrJournalUnknownStage is the text key for err journal unknown stage
+ // messages.
+ DescKeyErrJournalUnknownStage = "err.journal.unknown-stage"
)
diff --git a/internal/config/embed/text/err_journal_source.go b/internal/config/embed/text/err_journal_source.go
index 6c2e29a4b..3c97d528c 100644
--- a/internal/config/embed/text/err_journal_source.go
+++ b/internal/config/embed/text/err_journal_source.go
@@ -8,10 +8,22 @@ package text
// DescKeys for journal source errors.
const (
- DescKeyErrJournalSourceEventLogRead = "err.journal.source.event-log-read"
- DescKeyErrJournalSourceOpenLogFile = "err.journal.source.open-log-file"
+ // DescKeyErrJournalSourceEventLogRead is the text key for err journal source
+ // event log read messages.
+ DescKeyErrJournalSourceEventLogRead = "err.journal.source.event-log-read"
+ // DescKeyErrJournalSourceOpenLogFile is the text key for err journal source
+ // open log file messages.
+ DescKeyErrJournalSourceOpenLogFile = "err.journal.source.open-log-file"
+ // DescKeyErrJournalSourceReindexFileNotFound is the text key for err journal
+ // source reindex file not found messages.
DescKeyErrJournalSourceReindexFileNotFound = "err.journal.source.reindex-file-not-found"
- DescKeyErrJournalSourceReindexFileRead = "err.journal.source.reindex-file-read"
- DescKeyErrJournalSourceReindexFileWrite = "err.journal.source.reindex-file-write"
- DescKeyErrJournalSourceStatsGlob = "err.journal.source.stats-glob"
+ // DescKeyErrJournalSourceReindexFileRead is the text key for err journal
+ // source reindex file read messages.
+ DescKeyErrJournalSourceReindexFileRead = "err.journal.source.reindex-file-read"
+ // DescKeyErrJournalSourceReindexFileWrite is the text key for err journal
+ // source reindex file write messages.
+ DescKeyErrJournalSourceReindexFileWrite = "err.journal.source.reindex-file-write"
+ // DescKeyErrJournalSourceStatsGlob is the text key for err journal source
+ // stats glob messages.
+ DescKeyErrJournalSourceStatsGlob = "err.journal.source.stats-glob"
)
diff --git a/internal/config/embed/text/err_lifecycle_hook.go b/internal/config/embed/text/err_lifecycle_hook.go
index bc4a99699..60d106ca6 100644
--- a/internal/config/embed/text/err_lifecycle_hook.go
+++ b/internal/config/embed/text/err_lifecycle_hook.go
@@ -8,7 +8,13 @@ package text
// DescKeys for lifecycle hook error messages.
const (
- DescKeyErrLifecycleHookSymlink = "err.lifecycle-hook.symlink"
- DescKeyErrLifecycleHookBoundary = "err.lifecycle-hook.boundary"
+ // DescKeyErrLifecycleHookSymlink is the text key for err lifecycle hook
+ // symlink messages.
+ DescKeyErrLifecycleHookSymlink = "err.lifecycle-hook.symlink"
+ // DescKeyErrLifecycleHookBoundary is the text key for err lifecycle hook
+ // boundary messages.
+ DescKeyErrLifecycleHookBoundary = "err.lifecycle-hook.boundary"
+ // DescKeyErrLifecycleHookNotExecutable is the text key for err lifecycle hook
+ // not executable messages.
DescKeyErrLifecycleHookNotExecutable = "err.lifecycle-hook.not-executable"
)
diff --git a/internal/config/embed/text/err_memory.go b/internal/config/embed/text/err_memory.go
index 65caba2ad..5329152d1 100644
--- a/internal/config/embed/text/err_memory.go
+++ b/internal/config/embed/text/err_memory.go
@@ -8,26 +8,69 @@ package text
// DescKeys for memory operations errors.
const (
- DescKeyErrMemoryDiscoverNoMemory = "err.memory.discover-no-memory"
+ // DescKeyErrMemoryDiscoverNoMemory is the text key for err memory discover no
+ // memory messages.
+ DescKeyErrMemoryDiscoverNoMemory = "err.memory.discover-no-memory"
+ // DescKeyErrMemoryDiscoverResolveHome is the text key for err memory discover
+ // resolve home messages.
DescKeyErrMemoryDiscoverResolveHome = "err.memory.discover-resolve-home"
+ // DescKeyErrMemoryDiscoverResolveRoot is the text key for err memory discover
+ // resolve root messages.
DescKeyErrMemoryDiscoverResolveRoot = "err.memory.discover-resolve-root"
- DescKeyErrMemoryArchivePrevious = "err.memory.memory-archive-previous"
- DescKeyErrMemoryCreateArchiveDir = "err.memory.memory-create-archive-dir"
- DescKeyErrMemoryCreateDir = "err.memory.memory-create-dir"
- DescKeyErrMemoryDiffFailed = "err.memory.memory-diff-failed"
- DescKeyErrMemoryDiscoverFailed = "err.memory.memory-discover-failed"
- DescKeyErrMemoryNotFound = "err.memory.memory-not-found"
- DescKeyErrMemoryReadDiffSource = "err.memory.memory-read-diff-source"
- DescKeyErrMemoryReadMirror = "err.memory.memory-read-mirror"
- DescKeyErrMemoryReadMirrorArchive = "err.memory.memory-read-mirror-archive"
- DescKeyErrMemoryReadSource = "err.memory.memory-read-source"
- DescKeyErrMemorySelectContent = "err.memory.memory-select-content"
- DescKeyErrMemoryWriteArchive = "err.memory.memory-write-archive"
- DescKeyErrMemoryWriteMemory = "err.memory.memory-write-memory"
- DescKeyErrMemoryWriteMirror = "err.memory.memory-write-mirror"
- DescKeyErrMemoryPublishFailed = "err.memory.publish-failed"
- DescKeyErrMemoryReadMemory = "err.memory.read-memory"
+ // DescKeyErrMemoryArchivePrevious is the text key for err memory archive
+ // previous messages.
+ DescKeyErrMemoryArchivePrevious = "err.memory.memory-archive-previous"
+ // DescKeyErrMemoryCreateArchiveDir is the text key for err memory create
+ // archive dir messages.
+ DescKeyErrMemoryCreateArchiveDir = "err.memory.memory-create-archive-dir"
+ // DescKeyErrMemoryCreateDir is the text key for err memory create dir
+ // messages.
+ DescKeyErrMemoryCreateDir = "err.memory.memory-create-dir"
+ // DescKeyErrMemoryDiffFailed is the text key for err memory diff failed
+ // messages.
+ DescKeyErrMemoryDiffFailed = "err.memory.memory-diff-failed"
+ // DescKeyErrMemoryDiscoverFailed is the text key for err memory discover
+ // failed messages.
+ DescKeyErrMemoryDiscoverFailed = "err.memory.memory-discover-failed"
+ // DescKeyErrMemoryNotFound is the text key for err memory not found messages.
+ DescKeyErrMemoryNotFound = "err.memory.memory-not-found"
+ // DescKeyErrMemoryReadDiffSource is the text key for err memory read diff
+ // source messages.
+ DescKeyErrMemoryReadDiffSource = "err.memory.memory-read-diff-source"
+ // DescKeyErrMemoryReadMirror is the text key for err memory read mirror
+ // messages.
+ DescKeyErrMemoryReadMirror = "err.memory.memory-read-mirror"
+ // DescKeyErrMemoryReadMirrorArchive is the text key for err memory read
+ // mirror archive messages.
+ DescKeyErrMemoryReadMirrorArchive = "err.memory.memory-read-mirror-archive"
+ // DescKeyErrMemoryReadSource is the text key for err memory read source
+ // messages.
+ DescKeyErrMemoryReadSource = "err.memory.memory-read-source"
+ // DescKeyErrMemorySelectContent is the text key for err memory select content
+ // messages.
+ DescKeyErrMemorySelectContent = "err.memory.memory-select-content"
+ // DescKeyErrMemoryWriteArchive is the text key for err memory write archive
+ // messages.
+ DescKeyErrMemoryWriteArchive = "err.memory.memory-write-archive"
+ // DescKeyErrMemoryWriteMemory is the text key for err memory write memory
+ // messages.
+ DescKeyErrMemoryWriteMemory = "err.memory.memory-write-memory"
+ // DescKeyErrMemoryWriteMirror is the text key for err memory write mirror
+ // messages.
+ DescKeyErrMemoryWriteMirror = "err.memory.memory-write-mirror"
+ // DescKeyErrMemoryPublishFailed is the text key for err memory publish failed
+ // messages.
+ DescKeyErrMemoryPublishFailed = "err.memory.publish-failed"
+ // DescKeyErrMemoryReadMemory is the text key for err memory read memory
+ // messages.
+ DescKeyErrMemoryReadMemory = "err.memory.read-memory"
+ // DescKeyErrMemorySelectContentFailed is the text key for err memory select
+ // content failed messages.
DescKeyErrMemorySelectContentFailed = "err.memory.select-content-failed"
- DescKeyErrMemorySyncFailed = "err.memory.sync-failed"
- DescKeyErrMemoryWriteMemoryTop = "err.memory.write-memory"
+ // DescKeyErrMemorySyncFailed is the text key for err memory sync failed
+ // messages.
+ DescKeyErrMemorySyncFailed = "err.memory.sync-failed"
+ // DescKeyErrMemoryWriteMemoryTop is the text key for err memory write memory
+ // top messages.
+ DescKeyErrMemoryWriteMemoryTop = "err.memory.write-memory"
)
diff --git a/internal/config/embed/text/err_notify.go b/internal/config/embed/text/err_notify.go
index 54ad7ff8f..7daf8d0d5 100644
--- a/internal/config/embed/text/err_notify.go
+++ b/internal/config/embed/text/err_notify.go
@@ -8,9 +8,19 @@ package text
// DescKeys for notifications errors.
const (
- DescKeyErrNotifyLoadWebhook = "err.notify.load-webhook"
- DescKeyErrNotifyMarshalPayload = "err.notify.marshal-payload"
- DescKeyErrNotifySaveWebhook = "err.notify.save-webhook"
+ // DescKeyErrNotifyLoadWebhook is the text key for err notify load webhook
+ // messages.
+ DescKeyErrNotifyLoadWebhook = "err.notify.load-webhook"
+ // DescKeyErrNotifyMarshalPayload is the text key for err notify marshal
+ // payload messages.
+ DescKeyErrNotifyMarshalPayload = "err.notify.marshal-payload"
+ // DescKeyErrNotifySaveWebhook is the text key for err notify save webhook
+ // messages.
+ DescKeyErrNotifySaveWebhook = "err.notify.save-webhook"
+ // DescKeyErrNotifySendNotification is the text key for err notify send
+ // notification messages.
DescKeyErrNotifySendNotification = "err.notify.send-notification"
- DescKeyErrNotifyWebhookEmpty = "err.notify.webhook-empty"
+ // DescKeyErrNotifyWebhookEmpty is the text key for err notify webhook empty
+ // messages.
+ DescKeyErrNotifyWebhookEmpty = "err.notify.webhook-empty"
)
diff --git a/internal/config/embed/text/err_pad.go b/internal/config/embed/text/err_pad.go
index 828e18d4e..82c72de8f 100644
--- a/internal/config/embed/text/err_pad.go
+++ b/internal/config/embed/text/err_pad.go
@@ -8,17 +8,41 @@ package text
// DescKeys for scratchpad operations errors.
const (
- DescKeyErrPadBlobAppendNotAllowed = "err.pad.blob-append-not-allowed"
+ // DescKeyErrPadBlobAppendNotAllowed is the text key for err pad blob append
+ // not allowed messages.
+ DescKeyErrPadBlobAppendNotAllowed = "err.pad.blob-append-not-allowed"
+ // DescKeyErrPadBlobPrependNotAllowed is the text key for err pad blob prepend
+ // not allowed messages.
DescKeyErrPadBlobPrependNotAllowed = "err.pad.blob-prepend-not-allowed"
- DescKeyErrPadEditBlobTextConflict = "err.pad.edit-blob-text-conflict"
- DescKeyErrPadEditNoMode = "err.pad.edit-no-mode"
- DescKeyErrPadEditTextConflict = "err.pad.edit-text-conflict"
- DescKeyErrPadEntryRange = "err.pad.entry-range"
- DescKeyErrPadFileTooLarge = "err.pad.file-too-large"
- DescKeyErrPadInvalidIndex = "err.pad.invalid-index"
- DescKeyErrPadNoConflictFiles = "err.pad.no-conflict-files"
- DescKeyErrPadNotBlobEntry = "err.pad.not-blob-entry"
- DescKeyErrPadOutFlagRequiresBlob = "err.pad.out-flag-requires-blob"
- DescKeyErrPadReadScratchpad = "err.pad.read-scratchpad"
- DescKeyErrPadResolveNotEncrypted = "err.pad.resolve-not-encrypted"
+ // DescKeyErrPadEditBlobTextConflict is the text key for err pad edit blob
+ // text conflict messages.
+ DescKeyErrPadEditBlobTextConflict = "err.pad.edit-blob-text-conflict"
+ // DescKeyErrPadEditNoMode is the text key for err pad edit no mode messages.
+ DescKeyErrPadEditNoMode = "err.pad.edit-no-mode"
+ // DescKeyErrPadEditTextConflict is the text key for err pad edit text
+ // conflict messages.
+ DescKeyErrPadEditTextConflict = "err.pad.edit-text-conflict"
+ // DescKeyErrPadEntryRange is the text key for err pad entry range messages.
+ DescKeyErrPadEntryRange = "err.pad.entry-range"
+ // DescKeyErrPadFileTooLarge is the text key for err pad file too large
+ // messages.
+ DescKeyErrPadFileTooLarge = "err.pad.file-too-large"
+ // DescKeyErrPadInvalidIndex is the text key for err pad invalid index
+ // messages.
+ DescKeyErrPadInvalidIndex = "err.pad.invalid-index"
+ // DescKeyErrPadNoConflictFiles is the text key for err pad no conflict files
+ // messages.
+ DescKeyErrPadNoConflictFiles = "err.pad.no-conflict-files"
+ // DescKeyErrPadNotBlobEntry is the text key for err pad not blob entry
+ // messages.
+ DescKeyErrPadNotBlobEntry = "err.pad.not-blob-entry"
+ // DescKeyErrPadOutFlagRequiresBlob is the text key for err pad out flag
+ // requires blob messages.
+ DescKeyErrPadOutFlagRequiresBlob = "err.pad.out-flag-requires-blob"
+ // DescKeyErrPadReadScratchpad is the text key for err pad read scratchpad
+ // messages.
+ DescKeyErrPadReadScratchpad = "err.pad.read-scratchpad"
+ // DescKeyErrPadResolveNotEncrypted is the text key for err pad resolve not
+ // encrypted messages.
+ DescKeyErrPadResolveNotEncrypted = "err.pad.resolve-not-encrypted"
)
diff --git a/internal/config/embed/text/err_parse.go b/internal/config/embed/text/err_parse.go
index c21bdf69b..c6043de8d 100644
--- a/internal/config/embed/text/err_parse.go
+++ b/internal/config/embed/text/err_parse.go
@@ -8,11 +8,19 @@ package text
 
// DescKeys for parsing errors.
const (
+ // DescKeyErrParserFileError is the text key for err parser file error
+ // messages.
DescKeyErrParserFileError = "err.parser.file-error"
- DescKeyErrParserNoMatch = "err.parser.no-match"
- DescKeyErrParserOpenFile = "err.parser.open-file"
- DescKeyErrParserReadFile = "err.parser.read-file"
- DescKeyErrParserScanFile = "err.parser.scan-file"
+ // DescKeyErrParserNoMatch is the text key for err parser no match messages.
+ DescKeyErrParserNoMatch = "err.parser.no-match"
+ // DescKeyErrParserOpenFile is the text key for err parser open file messages.
+ DescKeyErrParserOpenFile = "err.parser.open-file"
+ // DescKeyErrParserReadFile is the text key for err parser read file messages.
+ DescKeyErrParserReadFile = "err.parser.read-file"
+ // DescKeyErrParserScanFile is the text key for err parser scan file messages.
+ DescKeyErrParserScanFile = "err.parser.scan-file"
+ // DescKeyErrParserUnmarshal is the text key for err parser unmarshal messages.
DescKeyErrParserUnmarshal = "err.parser.unmarshal"
- DescKeyErrParserWalkDir = "err.parser.walk-dir"
+ // DescKeyErrParserWalkDir is the text key for err parser walk dir messages.
+ DescKeyErrParserWalkDir = "err.parser.walk-dir"
)
diff --git a/internal/config/embed/text/err_prompt.go b/internal/config/embed/text/err_prompt.go
index a296e8078..23f94ad59 100644
--- a/internal/config/embed/text/err_prompt.go
+++ b/internal/config/embed/text/err_prompt.go
@@ -8,11 +8,25 @@ package text
 
// DescKeys for prompt handling errors.
const (
- DescKeyErrPromptListEntryTemplates = "err.prompt.list-entry-templates"
- DescKeyErrPromptListTemplates = "err.prompt.list-templates"
- DescKeyErrPromptMarkerNotFound = "err.prompt.marker-not-found"
- DescKeyErrPromptNoTemplate = "err.prompt.no-template"
- DescKeyErrPromptReadEntryTemplate = "err.prompt.read-entry-template"
- DescKeyErrPromptReadTemplate = "err.prompt.read-template"
+ // DescKeyErrPromptListEntryTemplates is the text key for err prompt list
+ // entry templates messages.
+ DescKeyErrPromptListEntryTemplates = "err.prompt.list-entry-templates"
+ // DescKeyErrPromptListTemplates is the text key for err prompt list templates
+ // messages.
+ DescKeyErrPromptListTemplates = "err.prompt.list-templates"
+ // DescKeyErrPromptMarkerNotFound is the text key for err prompt marker not
+ // found messages.
+ DescKeyErrPromptMarkerNotFound = "err.prompt.marker-not-found"
+ // DescKeyErrPromptNoTemplate is the text key for err prompt no template
+ // messages.
+ DescKeyErrPromptNoTemplate = "err.prompt.no-template"
+ // DescKeyErrPromptReadEntryTemplate is the text key for err prompt read entry
+ // template messages.
+ DescKeyErrPromptReadEntryTemplate = "err.prompt.read-entry-template"
+ // DescKeyErrPromptReadTemplate is the text key for err prompt read template
+ // messages.
+ DescKeyErrPromptReadTemplate = "err.prompt.read-template"
+ // DescKeyErrPromptTemplateMissingMarkers is the text key for err prompt
+ // template missing markers messages.
DescKeyErrPromptTemplateMissingMarkers = "err.prompt.template-missing-markers"
)
diff --git a/internal/config/embed/text/err_remind.go b/internal/config/embed/text/err_remind.go
index 4661d9a2f..86067df4a 100644
--- a/internal/config/embed/text/err_remind.go
+++ b/internal/config/embed/text/err_remind.go
@@ -8,9 +8,19 @@ package text
 
// DescKeys for reminder operations errors.
const (
- DescKeyErrReminderInvalidID = "err.reminder.invalid-id"
+ // DescKeyErrReminderInvalidID is the text key for err reminder invalid id
+ // messages.
+ DescKeyErrReminderInvalidID = "err.reminder.invalid-id"
+ // DescKeyErrReminderParseReminders is the text key for err reminder parse
+ // reminders messages.
DescKeyErrReminderParseReminders = "err.reminder.parse-reminders"
- DescKeyErrReminderReadReminders = "err.reminder.read-reminders"
- DescKeyErrReminderIDRequired = "err.reminder.reminder-id-required"
- DescKeyErrReminderNotFound = "err.reminder.reminder-not-found"
+ // DescKeyErrReminderReadReminders is the text key for err reminder read
+ // reminders messages.
+ DescKeyErrReminderReadReminders = "err.reminder.read-reminders"
+ // DescKeyErrReminderIDRequired is the text key for err reminder id required
+ // messages.
+ DescKeyErrReminderIDRequired = "err.reminder.reminder-id-required"
+ // DescKeyErrReminderNotFound is the text key for err reminder not found
+ // messages.
+ DescKeyErrReminderNotFound = "err.reminder.reminder-not-found"
)
diff --git a/internal/config/embed/text/err_session.go b/internal/config/embed/text/err_session.go
index dca5add9e..50e31ab87 100644
--- a/internal/config/embed/text/err_session.go
+++ b/internal/config/embed/text/err_session.go
@@ -8,15 +8,37 @@ package text
 
// DescKeys for session management errors.
const (
- DescKeyErrSessionAllWithPattern = "err.session.all-with-pattern"
- DescKeyErrSessionAllWithSessionID = "err.session.all-with-session-id"
- DescKeyErrSessionAmbiguousQuery = "err.session.ambiguous-query"
- DescKeyErrSessionFindSessions = "err.session.find-sessions"
- DescKeyErrSessionNoSessionsFound = "err.session.no-sessions-found"
+ // DescKeyErrSessionAllWithPattern is the text key for err session all with
+ // pattern messages.
+ DescKeyErrSessionAllWithPattern = "err.session.all-with-pattern"
+ // DescKeyErrSessionAllWithSessionID is the text key for err session all with
+ // session id messages.
+ DescKeyErrSessionAllWithSessionID = "err.session.all-with-session-id"
+ // DescKeyErrSessionAmbiguousQuery is the text key for err session ambiguous
+ // query messages.
+ DescKeyErrSessionAmbiguousQuery = "err.session.ambiguous-query"
+ // DescKeyErrSessionFindSessions is the text key for err session find sessions
+ // messages.
+ DescKeyErrSessionFindSessions = "err.session.find-sessions"
+ // DescKeyErrSessionNoSessionsFound is the text key for err session no
+ // sessions found messages.
+ DescKeyErrSessionNoSessionsFound = "err.session.no-sessions-found"
+ // DescKeyErrSessionNoSessionsFoundHint is the text key for err session no
+ // sessions found hint messages.
DescKeyErrSessionNoSessionsFoundHint = "err.session.no-sessions-found-hint"
- DescKeyErrSessionIDRequired = "err.session.session-id-required"
- DescKeyErrSessionNotFound = "err.session.session-not-found"
- DescKeyErrSessionEventInvalidType = "err.session.event-invalid-type"
- DescKeyErrSiteMarshalFeed = "err.site.marshal-feed"
- DescKeyErrSiteNoSiteConfig = "err.site.no-site-config"
+ // DescKeyErrSessionIDRequired is the text key for err session id required
+ // messages.
+ DescKeyErrSessionIDRequired = "err.session.session-id-required"
+ // DescKeyErrSessionNotFound is the text key for err session not found
+ // messages.
+ DescKeyErrSessionNotFound = "err.session.session-not-found"
+ // DescKeyErrSessionEventInvalidType is the text key for err session event
+ // invalid type messages.
+ DescKeyErrSessionEventInvalidType = "err.session.event-invalid-type"
+ // DescKeyErrSiteMarshalFeed is the text key for err site marshal feed
+ // messages.
+ DescKeyErrSiteMarshalFeed = "err.site.marshal-feed"
+ // DescKeyErrSiteNoSiteConfig is the text key for err site no site config
+ // messages.
+ DescKeyErrSiteNoSiteConfig = "err.site.no-site-config"
)
diff --git a/internal/config/embed/text/err_setup.go b/internal/config/embed/text/err_setup.go
index 0b6175ba3..b26b7b798 100644
--- a/internal/config/embed/text/err_setup.go
+++ b/internal/config/embed/text/err_setup.go
@@ -8,8 +8,14 @@ package text
 
// DescKeys for setup operations errors.
const (
- DescKeyErrSetupCreateDir = "err.setup.create-dir"
+ // DescKeyErrSetupCreateDir is the text key for err setup create dir messages.
+ DescKeyErrSetupCreateDir = "err.setup.create-dir"
+ // DescKeyErrSetupMarshalConfig is the text key for err setup marshal config
+ // messages.
DescKeyErrSetupMarshalConfig = "err.setup.marshal-config"
- DescKeyErrSetupFileWrite = "err.setup.write-file"
- DescKeyErrSetupSyncSteering = "err.setup.sync-steering"
+ // DescKeyErrSetupFileWrite is the text key for err setup file write messages.
+ DescKeyErrSetupFileWrite = "err.setup.write-file"
+ // DescKeyErrSetupSyncSteering is the text key for err setup sync steering
+ // messages.
+ DescKeyErrSetupSyncSteering = "err.setup.sync-steering"
)
diff --git a/internal/config/embed/text/err_skill.go b/internal/config/embed/text/err_skill.go
index 022ac8232..ee07b7f6b 100644
--- a/internal/config/embed/text/err_skill.go
+++ b/internal/config/embed/text/err_skill.go
@@ -8,20 +8,44 @@ package text
 
// DescKeys for skill operations errors.
const (
- DescKeyErrSkillCreateDest = "err.skill.create-dest"
- DescKeyErrSkillInstall = "err.skill.install"
- DescKeyErrSkillInvalidManifest = "err.skill.invalid-manifest"
- DescKeyErrSkillInvalidYAML = "err.skill.invalid-yaml"
- DescKeyErrSkillList = "err.skill.skill-list"
- DescKeyErrSkillLoad = "err.skill.load"
+ // DescKeyErrSkillCreateDest is the text key for err skill create dest
+ // messages.
+ DescKeyErrSkillCreateDest = "err.skill.create-dest"
+ // DescKeyErrSkillInstall is the text key for err skill install messages.
+ DescKeyErrSkillInstall = "err.skill.install"
+ // DescKeyErrSkillInvalidManifest is the text key for err skill invalid
+ // manifest messages.
+ DescKeyErrSkillInvalidManifest = "err.skill.invalid-manifest"
+ // DescKeyErrSkillInvalidYAML is the text key for err skill invalid yaml
+ // messages.
+ DescKeyErrSkillInvalidYAML = "err.skill.invalid-yaml"
+ // DescKeyErrSkillList is the text key for err skill list messages.
+ DescKeyErrSkillList = "err.skill.skill-list"
+ // DescKeyErrSkillLoad is the text key for err skill load messages.
+ DescKeyErrSkillLoad = "err.skill.load"
+ // DescKeyErrSkillMissingClosingDelim is the text key for err skill missing
+ // closing delim messages.
DescKeyErrSkillMissingClosingDelim = "err.skill.missing-closing-delimiter"
- DescKeyErrSkillMissingName = "err.skill.missing-name"
+ // DescKeyErrSkillMissingName is the text key for err skill missing name
+ // messages.
+ DescKeyErrSkillMissingName = "err.skill.missing-name"
+ // DescKeyErrSkillMissingOpeningDelim is the text key for err skill missing
+ // opening delim messages.
DescKeyErrSkillMissingOpeningDelim = "err.skill.missing-opening-delimiter"
- DescKeyErrSkillNotFound = "err.skill.not-found"
- DescKeyErrSkillNotValidDir = "err.skill.not-valid-dir"
- DescKeyErrSkillNotValidSource = "err.skill.not-valid-source"
- DescKeyErrSkillRead = "err.skill.skill-read"
- DescKeyErrSkillReadDir = "err.skill.read-dir"
- DescKeyErrSkillRemove = "err.skill.remove"
- DescKeyErrSkillSkillLoad = "err.skill.skill-load"
+ // DescKeyErrSkillNotFound is the text key for err skill not found messages.
+ DescKeyErrSkillNotFound = "err.skill.not-found"
+ // DescKeyErrSkillNotValidDir is the text key for err skill not valid dir
+ // messages.
+ DescKeyErrSkillNotValidDir = "err.skill.not-valid-dir"
+ // DescKeyErrSkillNotValidSource is the text key for err skill not valid
+ // source messages.
+ DescKeyErrSkillNotValidSource = "err.skill.not-valid-source"
+ // DescKeyErrSkillRead is the text key for err skill read messages.
+ DescKeyErrSkillRead = "err.skill.skill-read"
+ // DescKeyErrSkillReadDir is the text key for err skill read dir messages.
+ DescKeyErrSkillReadDir = "err.skill.read-dir"
+ // DescKeyErrSkillRemove is the text key for err skill remove messages.
+ DescKeyErrSkillRemove = "err.skill.remove"
+ // DescKeyErrSkillSkillLoad is the text key for err skill skill load messages.
+ DescKeyErrSkillSkillLoad = "err.skill.skill-load"
)
diff --git a/internal/config/embed/text/err_state.go b/internal/config/embed/text/err_state.go
index 188e70865..1ee17af6e 100644
--- a/internal/config/embed/text/err_state.go
+++ b/internal/config/embed/text/err_state.go
@@ -8,7 +8,11 @@ package text
 
// DescKeys for state management errors.
const (
- DescKeyErrStateLoadState = "err.state.load-state"
+ // DescKeyErrStateLoadState is the text key for err state load state messages.
+ DescKeyErrStateLoadState = "err.state.load-state"
+ // DescKeyErrStateReadingStateDir is the text key for err state reading state
+ // dir messages.
DescKeyErrStateReadingStateDir = "err.state.reading-state-dir"
- DescKeyErrStateSaveState = "err.state.save-state"
+ // DescKeyErrStateSaveState is the text key for err state save state messages.
+ DescKeyErrStateSaveState = "err.state.save-state"
)
diff --git a/internal/config/embed/text/err_steering.go b/internal/config/embed/text/err_steering.go
index c4df1a46e..0da4a79ad 100644
--- a/internal/config/embed/text/err_steering.go
+++ b/internal/config/embed/text/err_steering.go
@@ -8,24 +8,62 @@ package text
 
// DescKeys for steering operations errors.
const (
- DescKeyErrSteeringComputeRelPath = "err.steering.compute-rel-path"
- DescKeyErrSteeringContextDirMissing = "err.steering.context-dir-missing"
- DescKeyErrSteeringCreateDir = "err.steering.create-dir"
- DescKeyErrSteeringFileExists = "err.steering.file-exists"
- DescKeyErrSteeringInvalidYAML = "err.steering.invalid-yaml"
+ // DescKeyErrSteeringComputeRelPath is the text key for err steering compute
+ // rel path messages.
+ DescKeyErrSteeringComputeRelPath = "err.steering.compute-rel-path"
+ // DescKeyErrSteeringContextDirMissing is the text key for err steering
+ // context dir missing messages.
+ DescKeyErrSteeringContextDirMissing = "err.steering.context-dir-missing"
+ // DescKeyErrSteeringCreateDir is the text key for err steering create dir
+ // messages.
+ DescKeyErrSteeringCreateDir = "err.steering.create-dir"
+ // DescKeyErrSteeringFileExists is the text key for err steering file exists
+ // messages.
+ DescKeyErrSteeringFileExists = "err.steering.file-exists"
+ // DescKeyErrSteeringInvalidYAML is the text key for err steering invalid yaml
+ // messages.
+ DescKeyErrSteeringInvalidYAML = "err.steering.invalid-yaml"
+ // DescKeyErrSteeringMissingClosingDelim is the text key for err steering
+ // missing closing delim messages.
DescKeyErrSteeringMissingClosingDelim = "err.steering.missing-closing-delimiter"
+ // DescKeyErrSteeringMissingOpeningDelim is the text key for err steering
+ // missing opening delim messages.
DescKeyErrSteeringMissingOpeningDelim = "err.steering.missing-opening-delimiter"
- DescKeyErrSteeringNoTool = "err.steering.no-tool"
- DescKeyErrSteeringOutputEscapesRoot = "err.steering.output-escapes-root"
- DescKeyErrSteeringParse = "err.steering.parse"
- DescKeyErrSteeringReadDir = "err.steering.read-dir"
- DescKeyErrSteeringReadFile = "err.steering.read-file"
- DescKeyErrSteeringResolveOutput = "err.steering.resolve-output"
- DescKeyErrSteeringResolveRoot = "err.steering.resolve-root"
- DescKeyErrSteeringSyncAll = "err.steering.sync-all"
- DescKeyErrSteeringSyncName = "err.steering.sync-name"
- DescKeyErrSteeringUnsupportedTool = "err.steering.unsupported-tool"
- DescKeyErrSteeringWriteFile = "err.steering.write-file"
- DescKeyErrSteeringWriteSteeringFile = "err.steering.write-steering-file"
- DescKeyErrSteeringWriteInitFile = "err.steering.write-init-file"
+ // DescKeyErrSteeringNoTool is the text key for err steering no tool messages.
+ DescKeyErrSteeringNoTool = "err.steering.no-tool"
+ // DescKeyErrSteeringOutputEscapesRoot is the text key for err steering output
+ // escapes root messages.
+ DescKeyErrSteeringOutputEscapesRoot = "err.steering.output-escapes-root"
+ // DescKeyErrSteeringParse is the text key for err steering parse messages.
+ DescKeyErrSteeringParse = "err.steering.parse"
+ // DescKeyErrSteeringReadDir is the text key for err steering read dir
+ // messages.
+ DescKeyErrSteeringReadDir = "err.steering.read-dir"
+ // DescKeyErrSteeringReadFile is the text key for err steering read file
+ // messages.
+ DescKeyErrSteeringReadFile = "err.steering.read-file"
+ // DescKeyErrSteeringResolveOutput is the text key for err steering resolve
+ // output messages.
+ DescKeyErrSteeringResolveOutput = "err.steering.resolve-output"
+ // DescKeyErrSteeringResolveRoot is the text key for err steering resolve root
+ // messages.
+ DescKeyErrSteeringResolveRoot = "err.steering.resolve-root"
+ // DescKeyErrSteeringSyncAll is the text key for err steering sync all
+ // messages.
+ DescKeyErrSteeringSyncAll = "err.steering.sync-all"
+ // DescKeyErrSteeringSyncName is the text key for err steering sync name
+ // messages.
+ DescKeyErrSteeringSyncName = "err.steering.sync-name"
+ // DescKeyErrSteeringUnsupportedTool is the text key for err steering
+ // unsupported tool messages.
+ DescKeyErrSteeringUnsupportedTool = "err.steering.unsupported-tool"
+ // DescKeyErrSteeringWriteFile is the text key for err steering write file
+ // messages.
+ DescKeyErrSteeringWriteFile = "err.steering.write-file"
+ // DescKeyErrSteeringWriteSteeringFile is the text key for err steering write
+ // steering file messages.
+ DescKeyErrSteeringWriteSteeringFile = "err.steering.write-steering-file"
+ // DescKeyErrSteeringWriteInitFile is the text key for err steering write init
+ // file messages.
+ DescKeyErrSteeringWriteInitFile = "err.steering.write-init-file"
)
diff --git a/internal/config/embed/text/err_task.go b/internal/config/embed/text/err_task.go
index c67d21233..e2ff5eb13 100644
--- a/internal/config/embed/text/err_task.go
+++ b/internal/config/embed/text/err_task.go
@@ -8,13 +8,28 @@ package text
 
// DescKeys for task operations errors.
const (
+ // DescKeyErrTaskNoCompletedTasks is the text key for err task no completed
+ // tasks messages.
DescKeyErrTaskNoCompletedTasks = "err.task.no-completed-tasks"
- DescKeyErrTaskNoTaskMatch = "err.task.no-task-match"
- DescKeyErrTaskNoTaskSpecified = "err.task.no-task-specified"
- DescKeyErrTaskSnapshotWrite = "err.task.snapshot-write"
- DescKeyErrTaskFileNotFound = "err.task.task-file-not-found"
- DescKeyErrTaskFileRead = "err.task.task-file-read"
- DescKeyErrTaskFileWrite = "err.task.task-file-write"
- DescKeyErrTaskMultipleMatches = "err.task.task-multiple-matches"
- DescKeyErrTaskNotFound = "err.task.task-not-found"
+ // DescKeyErrTaskNoTaskMatch is the text key for err task no task match
+ // messages.
+ DescKeyErrTaskNoTaskMatch = "err.task.no-task-match"
+ // DescKeyErrTaskNoTaskSpecified is the text key for err task no task
+ // specified messages.
+ DescKeyErrTaskNoTaskSpecified = "err.task.no-task-specified"
+ // DescKeyErrTaskSnapshotWrite is the text key for err task snapshot write
+ // messages.
+ DescKeyErrTaskSnapshotWrite = "err.task.snapshot-write"
+ // DescKeyErrTaskFileNotFound is the text key for err task file not found
+ // messages.
+ DescKeyErrTaskFileNotFound = "err.task.task-file-not-found"
+ // DescKeyErrTaskFileRead is the text key for err task file read messages.
+ DescKeyErrTaskFileRead = "err.task.task-file-read"
+ // DescKeyErrTaskFileWrite is the text key for err task file write messages.
+ DescKeyErrTaskFileWrite = "err.task.task-file-write"
+ // DescKeyErrTaskMultipleMatches is the text key for err task multiple matches
+ // messages.
+ DescKeyErrTaskMultipleMatches = "err.task.task-multiple-matches"
+ // DescKeyErrTaskNotFound is the text key for err task not found messages.
+ DescKeyErrTaskNotFound = "err.task.task-not-found"
)
diff --git a/internal/config/embed/text/err_time.go b/internal/config/embed/text/err_time.go
index 7b5b85438..4233a6e14 100644
--- a/internal/config/embed/text/err_time.go
+++ b/internal/config/embed/text/err_time.go
@@ -8,6 +8,10 @@ package text
 
// DescKeys for time operations errors.
const (
- DescKeyErrDateInvalidDate = "err.date.invalid-date"
+ // DescKeyErrDateInvalidDate is the text key for err date invalid date
+ // messages.
+ DescKeyErrDateInvalidDate = "err.date.invalid-date"
+ // DescKeyErrDateInvalidDateValue is the text key for err date invalid date
+ // value messages.
DescKeyErrDateInvalidDateValue = "err.date.invalid-date-value"
)
diff --git a/internal/config/embed/text/err_trace.go b/internal/config/embed/text/err_trace.go
index 825f605f5..03bd46f62 100644
--- a/internal/config/embed/text/err_trace.go
+++ b/internal/config/embed/text/err_trace.go
@@ -8,13 +8,28 @@ package text
 
// DescKeys for trace operations errors.
const (
- DescKeyErrTraceGitDir = "err.trace.git-dir"
- DescKeyErrTraceGitLog = "err.trace.git-log"
- DescKeyErrTraceHookExists = "err.trace.hook-exists"
- DescKeyErrTraceHookWrite = "err.trace.hook-write"
- DescKeyErrTraceNoteRequired = "err.trace.note-required"
+ // DescKeyErrTraceGitDir is the text key for err trace git dir messages.
+ DescKeyErrTraceGitDir = "err.trace.git-dir"
+ // DescKeyErrTraceGitLog is the text key for err trace git log messages.
+ DescKeyErrTraceGitLog = "err.trace.git-log"
+ // DescKeyErrTraceHookExists is the text key for err trace hook exists
+ // messages.
+ DescKeyErrTraceHookExists = "err.trace.hook-exists"
+ // DescKeyErrTraceHookWrite is the text key for err trace hook write messages.
+ DescKeyErrTraceHookWrite = "err.trace.hook-write"
+ // DescKeyErrTraceNoteRequired is the text key for err trace note required
+ // messages.
+ DescKeyErrTraceNoteRequired = "err.trace.note-required"
+ // DescKeyErrTraceResolveCommit is the text key for err trace resolve commit
+ // messages.
DescKeyErrTraceResolveCommit = "err.trace.resolve-commit"
+ // DescKeyErrTraceUnknownAction is the text key for err trace unknown action
+ // messages.
DescKeyErrTraceUnknownAction = "err.trace.unknown-action"
- DescKeyErrTraceWriteHistory = "err.trace.write-history"
+ // DescKeyErrTraceWriteHistory is the text key for err trace write history
+ // messages.
+ DescKeyErrTraceWriteHistory = "err.trace.write-history"
+ // DescKeyErrTraceWriteOverride is the text key for err trace write override
+ // messages.
DescKeyErrTraceWriteOverride = "err.trace.write-override"
)
diff --git a/internal/config/embed/text/err_validate.go b/internal/config/embed/text/err_validate.go
index be4869fba..c8dabfdd6 100644
--- a/internal/config/embed/text/err_validate.go
+++ b/internal/config/embed/text/err_validate.go
@@ -8,15 +8,37 @@ package text
 
// DescKeys for validation errors.
const (
- DescKeyErrValidateContextDirSymlink = "err.validate.context-dir-symlink"
+ // DescKeyErrValidateContextDirSymlink is the text key for err validate
+ // context dir symlink messages.
+ DescKeyErrValidateContextDirSymlink = "err.validate.context-dir-symlink"
+ // DescKeyErrValidateContextFileSymlink is the text key for err validate
+ // context file symlink messages.
DescKeyErrValidateContextFileSymlink = "err.validate.context-file-symlink"
+ // DescKeyErrValidateContextOutsideRoot is the text key for err validate
+ // context outside root messages.
DescKeyErrValidateContextOutsideRoot = "err.validate.context-outside-root"
- DescKeyErrValidateInvalidSelection = "err.validate.invalid-selection"
- DescKeyErrValidateUnknownDocument = "err.validate.unknown-document"
- DescKeyErrValidateArgRequired = "err.validation.arg-required"
- DescKeyErrValidateCtxNotInPath = "err.validation.ctx-not-in-path"
- DescKeyErrValidateDriftViolations = "err.validation.drift-violations"
- DescKeyErrValidateFlagRequired = "err.validation.flag-required"
- DescKeyErrValidateParseFile = "err.validation.parse-file"
- DescKeyErrValidateWorkingDirectory = "err.validation.working-directory"
+ // DescKeyErrValidateInvalidSelection is the text key for err validate invalid
+ // selection messages.
+ DescKeyErrValidateInvalidSelection = "err.validate.invalid-selection"
+ // DescKeyErrValidateUnknownDocument is the text key for err validate unknown
+ // document messages.
+ DescKeyErrValidateUnknownDocument = "err.validate.unknown-document"
+ // DescKeyErrValidateArgRequired is the text key for err validate arg required
+ // messages.
+ DescKeyErrValidateArgRequired = "err.validation.arg-required"
+ // DescKeyErrValidateCtxNotInPath is the text key for err validate ctx not in
+ // path messages.
+ DescKeyErrValidateCtxNotInPath = "err.validation.ctx-not-in-path"
+ // DescKeyErrValidateDriftViolations is the text key for err validate drift
+ // violations messages.
+ DescKeyErrValidateDriftViolations = "err.validation.drift-violations"
+ // DescKeyErrValidateFlagRequired is the text key for err validate flag
+ // required messages.
+ DescKeyErrValidateFlagRequired = "err.validation.flag-required"
+ // DescKeyErrValidateParseFile is the text key for err validate parse file
+ // messages.
+ DescKeyErrValidateParseFile = "err.validation.parse-file"
+ // DescKeyErrValidateWorkingDirectory is the text key for err validate working
+ // directory messages.
+ DescKeyErrValidateWorkingDirectory = "err.validation.working-directory"
)
diff --git a/internal/config/embed/text/err_write.go b/internal/config/embed/text/err_write.go
index ed663f504..c7c0ba056 100644
--- a/internal/config/embed/text/err_write.go
+++ b/internal/config/embed/text/err_write.go
@@ -8,6 +8,8 @@ package text
 
// DescKeys for write operations errors.
const (
+ // DescKeyWritePrefixError is the text key for write prefix error messages.
DescKeyWritePrefixError = "write.prefix-error"
- DescKeyWritePrefixWarn = "write.prefix-warn"
+ // DescKeyWritePrefixWarn is the text key for write prefix warn messages.
+ DescKeyWritePrefixWarn = "write.prefix-warn"
)
diff --git a/internal/config/embed/text/event.go b/internal/config/embed/text/event.go
index 975b894d7..6924322ee 100644
--- a/internal/config/embed/text/event.go
+++ b/internal/config/embed/text/event.go
@@ -8,6 +8,8 @@ package text
 
// DescKeys for event logging.
const (
- DescKeyEventsEmpty = "events.empty"
+ // DescKeyEventsEmpty is the text key for events empty messages.
+ DescKeyEventsEmpty = "events.empty"
+ // DescKeyEventsHumanFormat is the text key for events human format messages.
DescKeyEventsHumanFormat = "events.human-format"
)
diff --git a/internal/config/embed/text/format.go b/internal/config/embed/text/format.go
index 3ed513f72..edd5e1605 100644
--- a/internal/config/embed/text/format.go
+++ b/internal/config/embed/text/format.go
@@ -8,27 +8,63 @@ package text
 
// DescKeys for format write output.
const (
- DescKeyWriteFormatBytesRaw = "write.format-bytes-raw"
- DescKeyWriteFormatBytesUnit = "write.format-bytes-unit"
- DescKeyWriteFormatBytes = "write.format-bytes"
- DescKeyWriteFormatGVFSPath = "write.format-gvfs-path"
- DescKeyWriteFormatDurationDay = "write.format-duration-day"
- DescKeyWriteFormatDurationHour = "write.format-duration-hour"
+ // DescKeyWriteFormatBytesRaw is the text key for write format bytes raw
+ // messages.
+ DescKeyWriteFormatBytesRaw = "write.format-bytes-raw"
+ // DescKeyWriteFormatBytesUnit is the text key for write format bytes unit
+ // messages.
+ DescKeyWriteFormatBytesUnit = "write.format-bytes-unit"
+ // DescKeyWriteFormatBytes is the text key for write format bytes messages.
+ DescKeyWriteFormatBytes = "write.format-bytes"
+ // DescKeyWriteFormatGVFSPath is the text key for write format gvfs path
+ // messages.
+ DescKeyWriteFormatGVFSPath = "write.format-gvfs-path"
+ // DescKeyWriteFormatDurationDay is the text key for write format duration day
+ // messages.
+ DescKeyWriteFormatDurationDay = "write.format-duration-day"
+ // DescKeyWriteFormatDurationHour is the text key for write format duration
+ // hour messages.
+ DescKeyWriteFormatDurationHour = "write.format-duration-hour"
+ // DescKeyWriteFormatDurationHourMin is the text key for write format duration
+ // hour min messages.
DescKeyWriteFormatDurationHourMin = "write.format-duration-hour-min"
- DescKeyWriteFormatDurationLTMin = "write.format-duration-less-than-min"
- DescKeyWriteFormatDurationMin = "write.format-duration-min"
- DescKeyWriteFormatKilobytes = "write.format-kilobytes"
- DescKeyWriteFormatMegabytes = "write.format-megabytes"
- DescKeyWriteFormatSIInteger = "write.format-si-integer"
- DescKeyWriteFormatSIKilo = "write.format-si-kilo"
- DescKeyWriteFormatSIKiloInt = "write.format-si-kilo-int"
- DescKeyWriteFormatSIKiloUpper = "write.format-si-kilo-upper"
- DescKeyWriteFormatSIMegaUpper = "write.format-si-mega-upper"
- DescKeyWriteFormatThousands = "write.format-thousands"
+ // DescKeyWriteFormatDurationLTMin is the text key for write format duration
+ // lt min messages.
+ DescKeyWriteFormatDurationLTMin = "write.format-duration-less-than-min"
+ // DescKeyWriteFormatDurationMin is the text key for write format duration min
+ // messages.
+ DescKeyWriteFormatDurationMin = "write.format-duration-min"
+ // DescKeyWriteFormatKilobytes is the text key for write format kilobytes
+ // messages.
+ DescKeyWriteFormatKilobytes = "write.format-kilobytes"
+ // DescKeyWriteFormatMegabytes is the text key for write format megabytes
+ // messages.
+ DescKeyWriteFormatMegabytes = "write.format-megabytes"
+ // DescKeyWriteFormatSIInteger is the text key for write format si integer
+ // messages.
+ DescKeyWriteFormatSIInteger = "write.format-si-integer"
+ // DescKeyWriteFormatSIKilo is the text key for write format si kilo messages.
+ DescKeyWriteFormatSIKilo = "write.format-si-kilo"
+ // DescKeyWriteFormatSIKiloInt is the text key for write format si kilo int
+ // messages.
+ DescKeyWriteFormatSIKiloInt = "write.format-si-kilo-int"
+ // DescKeyWriteFormatSIKiloUpper is the text key for write format si kilo
+ // upper messages.
+ DescKeyWriteFormatSIKiloUpper = "write.format-si-kilo-upper"
+ // DescKeyWriteFormatSIMegaUpper is the text key for write format si mega
+ // upper messages.
+ DescKeyWriteFormatSIMegaUpper = "write.format-si-mega-upper"
+ // DescKeyWriteFormatThousands is the text key for write format thousands
+ // messages.
+ DescKeyWriteFormatThousands = "write.format-thousands"
)
// DescKeys for miscellaneous format write output.
const (
- DescKeyWriteBackupSkipEntry = "write.backup-skip-entry"
+ // DescKeyWriteBackupSkipEntry is the text key for write backup skip entry
+ // messages.
+ DescKeyWriteBackupSkipEntry = "write.backup-skip-entry"
+ // DescKeyWriteWikilinkListItem is the text key for write wikilink list item
+ // messages.
DescKeyWriteWikilinkListItem = "write.wikilink-list-item"
)
diff --git a/internal/config/embed/text/freshness.go b/internal/config/embed/text/freshness.go
index a8acdbc07..dc11a423c 100644
--- a/internal/config/embed/text/freshness.go
+++ b/internal/config/embed/text/freshness.go
@@ -8,10 +8,18 @@ package text
// DescKeys for freshness tracking.
const (
- DescKeyFreshnessBoxTitle = "freshness.box-title"
- DescKeyFreshnessFileEntry = "freshness.file-entry"
+ // DescKeyFreshnessBoxTitle is the text key for freshness box title messages.
+ DescKeyFreshnessBoxTitle = "freshness.box-title"
+ // DescKeyFreshnessFileEntry is the text key for freshness file entry messages.
+ DescKeyFreshnessFileEntry = "freshness.file-entry"
+ // DescKeyFreshnessRelayMessage is the text key for freshness relay message
+ // messages.
DescKeyFreshnessRelayMessage = "freshness.relay-message"
- DescKeyFreshnessRelayPrefix = "freshness.relay-prefix"
- DescKeyFreshnessReviewURL = "freshness.review-url"
- DescKeyFreshnessTouchHint = "freshness.touch-hint"
+ // DescKeyFreshnessRelayPrefix is the text key for freshness relay prefix
+ // messages.
+ DescKeyFreshnessRelayPrefix = "freshness.relay-prefix"
+ // DescKeyFreshnessReviewURL is the text key for freshness review url messages.
+ DescKeyFreshnessReviewURL = "freshness.review-url"
+ // DescKeyFreshnessTouchHint is the text key for freshness touch hint messages.
+ DescKeyFreshnessTouchHint = "freshness.touch-hint"
)
diff --git a/internal/config/embed/text/git.go b/internal/config/embed/text/git.go
index 4079a06bf..c3a4298d1 100644
--- a/internal/config/embed/text/git.go
+++ b/internal/config/embed/text/git.go
@@ -10,6 +10,10 @@ package text
// DescKeys for git integration.
const (
- DescKeyErrGitNotInGitRepo = "err.git.not-in-git-repo"
+ // DescKeyErrGitNotInGitRepo is the text key for err git not in git repo
+ // messages.
+ DescKeyErrGitNotInGitRepo = "err.git.not-in-git-repo"
+ // DescKeyErrParserGitNotFound is the text key for err parser git not found
+ // messages.
DescKeyErrParserGitNotFound = "err.parser.git-not-found"
)
diff --git a/internal/config/embed/text/governance.go b/internal/config/embed/text/governance.go
index d4de65a33..d10d8e6b6 100644
--- a/internal/config/embed/text/governance.go
+++ b/internal/config/embed/text/governance.go
@@ -8,10 +8,21 @@ package text
// DescKeys for governance checks.
const (
+ // DescKeyGovSessionNotStarted is the text key for gov session not started
+ // messages.
DescKeyGovSessionNotStarted = "mcp.gov-session-not-started"
- DescKeyGovContextNotLoaded = "mcp.gov-context-not-loaded"
- DescKeyGovDriftNotChecked = "mcp.gov-drift-not-checked"
+ // DescKeyGovContextNotLoaded is the text key for gov context not loaded
+ // messages.
+ DescKeyGovContextNotLoaded = "mcp.gov-context-not-loaded"
+ // DescKeyGovDriftNotChecked is the text key for gov drift not checked
+ // messages.
+ DescKeyGovDriftNotChecked = "mcp.gov-drift-not-checked"
+ // DescKeyGovDriftNeverChecked is the text key for gov drift never checked
+ // messages.
DescKeyGovDriftNeverChecked = "mcp.gov-drift-never-checked"
- DescKeyGovPersistNudge = "mcp.gov-persist-nudge"
+ // DescKeyGovPersistNudge is the text key for gov persist nudge messages.
+ DescKeyGovPersistNudge = "mcp.gov-persist-nudge"
+ // DescKeyGovViolationCritical is the text key for gov violation critical
+ // messages.
DescKeyGovViolationCritical = "mcp.gov-violation-critical"
)
diff --git a/internal/config/embed/text/group.go b/internal/config/embed/text/group.go
index 84e4954e5..41c3f4921 100644
--- a/internal/config/embed/text/group.go
+++ b/internal/config/embed/text/group.go
@@ -8,17 +8,28 @@ package text
// Group title text keys for CLI help output.
const (
+ // DescKeyGroupGettingStarted is the text key for group getting started
+ // messages.
DescKeyGroupGettingStarted = "group.getting-started"
- DescKeyGroupContext = "group.context"
- DescKeyGroupArtifacts = "group.artifacts"
- DescKeyGroupSessions = "group.sessions"
- DescKeyGroupRuntime = "group.runtime"
- DescKeyGroupIntegration = "group.integration"
- DescKeyGroupDiagnostics = "group.diagnostics"
- DescKeyGroupUtilities = "group.utilities"
+ // DescKeyGroupContext is the text key for group context messages.
+ DescKeyGroupContext = "group.context"
+ // DescKeyGroupArtifacts is the text key for group artifacts messages.
+ DescKeyGroupArtifacts = "group.artifacts"
+ // DescKeyGroupSessions is the text key for group sessions messages.
+ DescKeyGroupSessions = "group.sessions"
+ // DescKeyGroupRuntime is the text key for group runtime messages.
+ DescKeyGroupRuntime = "group.runtime"
+ // DescKeyGroupIntegration is the text key for group integration messages.
+ DescKeyGroupIntegration = "group.integration"
+ // DescKeyGroupDiagnostics is the text key for group diagnostics messages.
+ DescKeyGroupDiagnostics = "group.diagnostics"
+ // DescKeyGroupUtilities is the text key for group utilities messages.
+ DescKeyGroupUtilities = "group.utilities"
)
// Help text keys for CLI-wide output elements.
const (
+ // DescKeyHelpCommunityFooter is the text key for help community footer
+ // messages.
DescKeyHelpCommunityFooter = "help.community-footer"
)
diff --git a/internal/config/embed/text/guide.go b/internal/config/embed/text/guide.go
index 94c0eb689..1588c771f 100644
--- a/internal/config/embed/text/guide.go
+++ b/internal/config/embed/text/guide.go
@@ -8,7 +8,10 @@ package text
// DescKeys for guide display.
const (
- DescKeyGuideDefault = "guide.default"
+ // DescKeyGuideDefault is the text key for guide default messages.
+ DescKeyGuideDefault = "guide.default"
+ // DescKeyGuideCommandsHead is the text key for guide commands head messages.
DescKeyGuideCommandsHead = "guide.commands-heading"
- DescKeyGuideCommandLine = "guide.command-line"
+ // DescKeyGuideCommandLine is the text key for guide command line messages.
+ DescKeyGuideCommandLine = "guide.command-line"
)
diff --git a/internal/config/embed/text/heading.go b/internal/config/embed/text/heading.go
index eb5a31f8d..a069bc2ed 100644
--- a/internal/config/embed/text/heading.go
+++ b/internal/config/embed/text/heading.go
@@ -8,27 +8,56 @@ package text
// DescKeys for section headings.
const (
- DescKeyHeadingCompleted = "heading.completed"
- DescKeyHeadingArchivedTasks = "heading.archived-tasks"
- DescKeyHeadingContext = "heading.context"
- DescKeyHeadingLoopStart = "heading.loop-start"
- DescKeyHeadingToolUsage = "heading.tool-usage"
- DescKeyHeadingConversation = "heading.conversation"
- DescKeyHeadingSessionJournal = "heading.session-journal"
- DescKeyHeadingTopics = "heading.topics"
- DescKeyHeadingPopularTopics = "heading.popular-topics"
- DescKeyHeadingLongtailTopics = "heading.longtail-topics"
- DescKeyHeadingKeyFiles = "heading.key-files"
+ // DescKeyHeadingCompleted is the text key for heading completed messages.
+ DescKeyHeadingCompleted = "heading.completed"
+ // DescKeyHeadingArchivedTasks is the text key for heading archived tasks
+ // messages.
+ DescKeyHeadingArchivedTasks = "heading.archived-tasks"
+ // DescKeyHeadingContext is the text key for heading context messages.
+ DescKeyHeadingContext = "heading.context"
+ // DescKeyHeadingLoopStart is the text key for heading loop start messages.
+ DescKeyHeadingLoopStart = "heading.loop-start"
+ // DescKeyHeadingToolUsage is the text key for heading tool usage messages.
+ DescKeyHeadingToolUsage = "heading.tool-usage"
+ // DescKeyHeadingConversation is the text key for heading conversation
+ // messages.
+ DescKeyHeadingConversation = "heading.conversation"
+ // DescKeyHeadingSessionJournal is the text key for heading session journal
+ // messages.
+ DescKeyHeadingSessionJournal = "heading.session-journal"
+ // DescKeyHeadingTopics is the text key for heading topics messages.
+ DescKeyHeadingTopics = "heading.topics"
+ // DescKeyHeadingPopularTopics is the text key for heading popular topics
+ // messages.
+ DescKeyHeadingPopularTopics = "heading.popular-topics"
+ // DescKeyHeadingLongtailTopics is the text key for heading longtail topics
+ // messages.
+ DescKeyHeadingLongtailTopics = "heading.longtail-topics"
+ // DescKeyHeadingKeyFiles is the text key for heading key files messages.
+ DescKeyHeadingKeyFiles = "heading.key-files"
+ // DescKeyHeadingFrequentlyTouched is the text key for heading frequently
+ // touched messages.
DescKeyHeadingFrequentlyTouched = "heading.frequently-touched"
- DescKeyHeadingSingleSession = "heading.single-session"
- DescKeyHeadingSessionTypes = "heading.session-types"
- DescKeyHeadingSuggestions = "heading.suggestions"
- DescKeyHeadingRecentSessions = "heading.recent-sessions"
+ // DescKeyHeadingSingleSession is the text key for heading single session
+ // messages.
+ DescKeyHeadingSingleSession = "heading.single-session"
+ // DescKeyHeadingSessionTypes is the text key for heading session types
+ // messages.
+ DescKeyHeadingSessionTypes = "heading.session-types"
+ // DescKeyHeadingSuggestions is the text key for heading suggestions messages.
+ DescKeyHeadingSuggestions = "heading.suggestions"
+ // DescKeyHeadingRecentSessions is the text key for heading recent sessions
+ // messages.
+ DescKeyHeadingRecentSessions = "heading.recent-sessions"
)
// Headings, column headers, and navigation labels (headings.yaml).
const (
- DescKeyHeadingDecisions = "heading.decisions"
- DescKeyHeadingLearnings = "heading.learnings"
+ // DescKeyHeadingDecisions is the text key for heading decisions messages.
+ DescKeyHeadingDecisions = "heading.decisions"
+ // DescKeyHeadingLearnings is the text key for heading learnings messages.
+ DescKeyHeadingLearnings = "heading.learnings"
+ // DescKeyHeadingLearningStart is the text key for heading learning start
+ // messages.
DescKeyHeadingLearningStart = "heading.learning-entry-start"
)
diff --git a/internal/config/embed/text/heartbeat.go b/internal/config/embed/text/heartbeat.go
index 7753fe202..759789462 100644
--- a/internal/config/embed/text/heartbeat.go
+++ b/internal/config/embed/text/heartbeat.go
@@ -8,8 +8,14 @@ package text
// DescKeys for heartbeat output.
const (
- DescKeyHeartbeatLogTokens = "heartbeat.log-tokens"
- DescKeyHeartbeatLogPlain = "heartbeat.log-plain"
+ // DescKeyHeartbeatLogTokens is the text key for heartbeat log tokens messages.
+ DescKeyHeartbeatLogTokens = "heartbeat.log-tokens"
+ // DescKeyHeartbeatLogPlain is the text key for heartbeat log plain messages.
+ DescKeyHeartbeatLogPlain = "heartbeat.log-plain"
+ // DescKeyHeartbeatNotifyTokens is the text key for heartbeat notify tokens
+ // messages.
DescKeyHeartbeatNotifyTokens = "heartbeat.notify-tokens"
- DescKeyHeartbeatNotifyPlain = "heartbeat.notify-plain"
+ // DescKeyHeartbeatNotifyPlain is the text key for heartbeat notify plain
+ // messages.
+ DescKeyHeartbeatNotifyPlain = "heartbeat.notify-plain"
)
diff --git a/internal/config/embed/text/hook.go b/internal/config/embed/text/hook.go
index 755c0c4c9..10eeb7466 100644
--- a/internal/config/embed/text/hook.go
+++ b/internal/config/embed/text/hook.go
@@ -8,29 +8,64 @@ package text
// DescKeys for hook messages.
const (
- DescKeyHookAider = "hook.aider"
- DescKeyHookAgents = "hook.agents"
- DescKeyHookClaude = "hook.claude"
- DescKeyHookCopilot = "hook.copilot"
- DescKeyHookCopilotCLI = "hook.copilot-cli"
+ // DescKeyHookAider is the text key for hook aider messages.
+ DescKeyHookAider = "hook.aider"
+ // DescKeyHookAgents is the text key for hook agents messages.
+ DescKeyHookAgents = "hook.agents"
+ // DescKeyHookClaude is the text key for hook claude messages.
+ DescKeyHookClaude = "hook.claude"
+ // DescKeyHookCopilot is the text key for hook copilot messages.
+ DescKeyHookCopilot = "hook.copilot"
+ // DescKeyHookCopilotCLI is the text key for hook copilot cli messages.
+ DescKeyHookCopilotCLI = "hook.copilot-cli"
+ // DescKeyHookSupportedTools is the text key for hook supported tools messages.
DescKeyHookSupportedTools = "hook.supported-tools"
- DescKeyHookWindsurf = "hook.windsurf"
+ // DescKeyHookWindsurf is the text key for hook windsurf messages.
+ DescKeyHookWindsurf = "hook.windsurf"
)
// DescKeys for hook write output.
const (
- DescKeyWriteHookAgentsCreated = "write.hook-agents-created"
- DescKeyWriteHookAgentsMerged = "write.hook-agents-merged"
- DescKeyWriteHookAgentsSkipped = "write.hook-agents-skipped"
- DescKeyWriteHookAgentsSummary = "write.hook-agents-summary"
- DescKeyWriteHookCopilotCLICreated = "write.hook-copilot-cli-created"
- DescKeyWriteHookCopilotCLISkipped = "write.hook-copilot-cli-skipped"
- DescKeyWriteHookCopilotCLISummary = "write.hook-copilot-cli-summary"
- DescKeyWriteHookCopilotCreated = "write.hook-copilot-created"
- DescKeyWriteHookCopilotForceHint = "write.hook-copilot-force-hint"
- DescKeyWriteHookCopilotMerged = "write.hook-copilot-merged"
+ // DescKeyWriteHookAgentsCreated is the text key for write hook agents created
+ // messages.
+ DescKeyWriteHookAgentsCreated = "write.hook-agents-created"
+ // DescKeyWriteHookAgentsMerged is the text key for write hook agents merged
+ // messages.
+ DescKeyWriteHookAgentsMerged = "write.hook-agents-merged"
+ // DescKeyWriteHookAgentsSkipped is the text key for write hook agents skipped
+ // messages.
+ DescKeyWriteHookAgentsSkipped = "write.hook-agents-skipped"
+ // DescKeyWriteHookAgentsSummary is the text key for write hook agents summary
+ // messages.
+ DescKeyWriteHookAgentsSummary = "write.hook-agents-summary"
+ // DescKeyWriteHookCopilotCLICreated is the text key for write hook copilot
+ // cli created messages.
+ DescKeyWriteHookCopilotCLICreated = "write.hook-copilot-cli-created"
+ // DescKeyWriteHookCopilotCLISkipped is the text key for write hook copilot
+ // cli skipped messages.
+ DescKeyWriteHookCopilotCLISkipped = "write.hook-copilot-cli-skipped"
+ // DescKeyWriteHookCopilotCLISummary is the text key for write hook copilot
+ // cli summary messages.
+ DescKeyWriteHookCopilotCLISummary = "write.hook-copilot-cli-summary"
+ // DescKeyWriteHookCopilotCreated is the text key for write hook copilot
+ // created messages.
+ DescKeyWriteHookCopilotCreated = "write.hook-copilot-created"
+ // DescKeyWriteHookCopilotForceHint is the text key for write hook copilot
+ // force hint messages.
+ DescKeyWriteHookCopilotForceHint = "write.hook-copilot-force-hint"
+ // DescKeyWriteHookCopilotMerged is the text key for write hook copilot merged
+ // messages.
+ DescKeyWriteHookCopilotMerged = "write.hook-copilot-merged"
+ // DescKeyWriteHookCopilotSessionsDir is the text key for write hook copilot
+ // sessions dir messages.
DescKeyWriteHookCopilotSessionsDir = "write.hook-copilot-sessions-dir"
- DescKeyWriteHookCopilotSkipped = "write.hook-copilot-skipped"
- DescKeyWriteHookCopilotSummary = "write.hook-copilot-summary"
- DescKeyWriteHookUnknownTool = "write.hook-unknown-tool"
+ // DescKeyWriteHookCopilotSkipped is the text key for write hook copilot
+ // skipped messages.
+ DescKeyWriteHookCopilotSkipped = "write.hook-copilot-skipped"
+ // DescKeyWriteHookCopilotSummary is the text key for write hook copilot
+ // summary messages.
+ DescKeyWriteHookCopilotSummary = "write.hook-copilot-summary"
+ // DescKeyWriteHookUnknownTool is the text key for write hook unknown tool
+ // messages.
+ DescKeyWriteHookUnknownTool = "write.hook-unknown-tool"
)
diff --git a/internal/config/embed/text/import.go b/internal/config/embed/text/import.go
index 38cfb6c40..30dcef79b 100644
--- a/internal/config/embed/text/import.go
+++ b/internal/config/embed/text/import.go
@@ -8,37 +8,81 @@ package text
// DescKeys for import operations.
const (
+ // DescKeyImportCountConvention is the text key for import count convention
+ // messages.
DescKeyImportCountConvention = "import.count-convention"
- DescKeyImportCountDecision = "import.count-decision"
- DescKeyImportCountLearning = "import.count-learning"
- DescKeyImportCountTask = "import.count-task"
+ // DescKeyImportCountDecision is the text key for import count decision
+ // messages.
+ DescKeyImportCountDecision = "import.count-decision"
+ // DescKeyImportCountLearning is the text key for import count learning
+ // messages.
+ DescKeyImportCountLearning = "import.count-learning"
+ // DescKeyImportCountTask is the text key for import count task messages.
+ DescKeyImportCountTask = "import.count-task"
)
// DescKeys for memory import write output.
const (
- DescKeyWriteImportAdded = "write.import-added"
- DescKeyWriteImportBreakdown = "write.import-breakdown"
- DescKeyWriteImportClassified = "write.import-classified"
+ // DescKeyWriteImportAdded is the text key for write import added messages.
+ DescKeyWriteImportAdded = "write.import-added"
+ // DescKeyWriteImportBreakdown is the text key for write import breakdown
+ // messages.
+ DescKeyWriteImportBreakdown = "write.import-breakdown"
+ // DescKeyWriteImportClassified is the text key for write import classified
+ // messages.
+ DescKeyWriteImportClassified = "write.import-classified"
+ // DescKeyWriteImportClassifiedSkip is the text key for write import
+ // classified skip messages.
DescKeyWriteImportClassifiedSkip = "write.import-classified-skip"
- DescKeyWriteImportDuplicates = "write.import-duplicates"
- DescKeyWriteImportEntry = "write.import-entry"
- DescKeyWriteImportFound = "write.import-found"
- DescKeyWriteImportNoEntries = "write.import-no-entries"
- DescKeyWriteImportScanning = "write.import-scanning"
- DescKeyWriteImportSkipped = "write.import-skipped"
- DescKeyWriteImportErrorPromote = "write.import-error-promote"
- DescKeyWriteImportSummary = "write.import-summary"
- DescKeyWriteImportSummaryDryRun = "write.import-summary-dry-run"
+ // DescKeyWriteImportDuplicates is the text key for write import duplicates
+ // messages.
+ DescKeyWriteImportDuplicates = "write.import-duplicates"
+ // DescKeyWriteImportEntry is the text key for write import entry messages.
+ DescKeyWriteImportEntry = "write.import-entry"
+ // DescKeyWriteImportFound is the text key for write import found messages.
+ DescKeyWriteImportFound = "write.import-found"
+ // DescKeyWriteImportNoEntries is the text key for write import no entries
+ // messages.
+ DescKeyWriteImportNoEntries = "write.import-no-entries"
+ // DescKeyWriteImportScanning is the text key for write import scanning
+ // messages.
+ DescKeyWriteImportScanning = "write.import-scanning"
+ // DescKeyWriteImportSkipped is the text key for write import skipped messages.
+ DescKeyWriteImportSkipped = "write.import-skipped"
+ // DescKeyWriteImportErrorPromote is the text key for write import error
+ // promote messages.
+ DescKeyWriteImportErrorPromote = "write.import-error-promote"
+ // DescKeyWriteImportSummary is the text key for write import summary messages.
+ DescKeyWriteImportSummary = "write.import-summary"
+ // DescKeyWriteImportSummaryDryRun is the text key for write import summary
+ // dry run messages.
+ DescKeyWriteImportSummaryDryRun = "write.import-summary-dry-run"
)
// DescKeys for journal import write output.
const (
- DescKeyWriteJournalImportNothing = "write.journal-import-nothing"
- DescKeyWriteJournalImportPartNew = "write.journal-import-part-new"
- DescKeyWriteJournalImportPartRegen = "write.journal-import-part-regen"
- DescKeyWriteJournalImportPartSkip = "write.journal-import-part-skip"
+ // DescKeyWriteJournalImportNothing is the text key for write journal import
+ // nothing messages.
+ DescKeyWriteJournalImportNothing = "write.journal-import-nothing"
+ // DescKeyWriteJournalImportPartNew is the text key for write journal import
+ // part new messages.
+ DescKeyWriteJournalImportPartNew = "write.journal-import-part-new"
+ // DescKeyWriteJournalImportPartRegen is the text key for write journal import
+ // part regen messages.
+ DescKeyWriteJournalImportPartRegen = "write.journal-import-part-regen"
+ // DescKeyWriteJournalImportPartSkip is the text key for write journal import
+ // part skip messages.
+ DescKeyWriteJournalImportPartSkip = "write.journal-import-part-skip"
+ // DescKeyWriteJournalImportPartSkipLock is the text key for write journal
+ // import part skip lock messages.
DescKeyWriteJournalImportPartSkipLock = "write.journal-import-part-skip-locked"
- DescKeyWriteJournalImportSummary = "write.journal-import-summary"
- DescKeyWriteJournalImportVerb = "write.journal-import-verb"
- DescKeyWriteJournalImportVerbDryRun = "write.journal-import-verb-dry-run"
+ // DescKeyWriteJournalImportSummary is the text key for write journal import
+ // summary messages.
+ DescKeyWriteJournalImportSummary = "write.journal-import-summary"
+ // DescKeyWriteJournalImportVerb is the text key for write journal import verb
+ // messages.
+ DescKeyWriteJournalImportVerb = "write.journal-import-verb"
+ // DescKeyWriteJournalImportVerbDryRun is the text key for write journal
+ // import verb dry run messages.
+ DescKeyWriteJournalImportVerbDryRun = "write.journal-import-verb-dry-run"
)
diff --git a/internal/config/embed/text/initialize.go b/internal/config/embed/text/initialize.go
index 249d8c9ae..42b3413cd 100644
--- a/internal/config/embed/text/initialize.go
+++ b/internal/config/embed/text/initialize.go
@@ -8,94 +8,173 @@ package text
// DescKeys for init setup and abort messages.
const (
- DescKeyWriteInitAborted = "write.init-aborted"
- DescKeyWriteInitBackup = "write.init-backup"
- DescKeyWriteInitCreatedDir = "write.init-created-dir"
+ // DescKeyWriteInitAborted is the text key for write init aborted messages.
+ DescKeyWriteInitAborted = "write.init-aborted"
+ // DescKeyWriteInitBackup is the text key for write init backup messages.
+ DescKeyWriteInitBackup = "write.init-backup"
+ // DescKeyWriteInitCreatedDir is the text key for write init created dir
+ // messages.
+ DescKeyWriteInitCreatedDir = "write.init-created-dir"
+ // DescKeyWriteInitCreatingRootFiles is the text key for write init creating
+ // root files messages.
DescKeyWriteInitCreatingRootFiles = "write.init-creating-root-files"
- DescKeyWriteInitCtxContentExists = "write.init-ctx-content-exists"
- DescKeyWriteInitExistsSkipped = "write.init-exists-skipped"
+ // DescKeyWriteInitCtxContentExists is the text key for write init ctx content
+ // exists messages.
+ DescKeyWriteInitCtxContentExists = "write.init-ctx-content-exists"
+ // DescKeyWriteInitExistsSkipped is the text key for write init exists skipped
+ // messages.
+ DescKeyWriteInitExistsSkipped = "write.init-exists-skipped"
)
// DescKeys for init file creation output.
const (
- DescKeyWriteInitFileCreated = "write.init-file-created"
+ // DescKeyWriteInitFileCreated is the text key for write init file created
+ // messages.
+ DescKeyWriteInitFileCreated = "write.init-file-created"
+ // DescKeyWriteInitFileExistsNoCtx is the text key for write init file exists
+ // no ctx messages.
DescKeyWriteInitFileExistsNoCtx = "write.init-file-exists-no-ctx"
)
// DescKeys for init gitignore output.
const (
- DescKeyWriteInitGitignoreReview = "write.init-gitignore-review"
+ // DescKeyWriteInitGitignoreReview is the text key for write init gitignore
+ // review messages.
+ DescKeyWriteInitGitignoreReview = "write.init-gitignore-review"
+ // DescKeyWriteInitGitignoreUpdated is the text key for write init gitignore
+ // updated messages.
DescKeyWriteInitGitignoreUpdated = "write.init-gitignore-updated"
)
// DescKeys for init Makefile output.
const (
+ // DescKeyWriteInitMakefileAppended is the text key for write init makefile
+ // appended messages.
DescKeyWriteInitMakefileAppended = "write.init-makefile-appended"
- DescKeyWriteInitMakefileCreated = "write.init-makefile-created"
+ // DescKeyWriteInitMakefileCreated is the text key for write init makefile
+ // created messages.
+ DescKeyWriteInitMakefileCreated = "write.init-makefile-created"
+ // DescKeyWriteInitMakefileIncludes is the text key for write init makefile
+ // includes messages.
DescKeyWriteInitMakefileIncludes = "write.init-makefile-includes"
)
// DescKeys for init merge and prompt output.
const (
- DescKeyWriteInitMerged = "write.init-merged"
- DescKeyWriteInitNextStepsBlock = "write.init-next-steps-block"
- DescKeyWriteInitWorkflowTips = "write.init-workflow-tips"
- DescKeyWriteInitNoChanges = "write.init-no-changes"
+ // DescKeyWriteInitMerged is the text key for write init merged messages.
+ DescKeyWriteInitMerged = "write.init-merged"
+ // DescKeyWriteInitNextStepsBlock is the text key for write init next steps
+ // block messages.
+ DescKeyWriteInitNextStepsBlock = "write.init-next-steps-block"
+ // DescKeyWriteInitWorkflowTips is the text key for write init workflow tips
+ // messages.
+ DescKeyWriteInitWorkflowTips = "write.init-workflow-tips"
+ // DescKeyWriteInitNoChanges is the text key for write init no changes
+ // messages.
+ DescKeyWriteInitNoChanges = "write.init-no-changes"
+ // DescKeyWriteInitOverwritePrompt is the text key for write init overwrite
+ // prompt messages.
DescKeyWriteInitOverwritePrompt = "write.init-overwrite-prompt"
)
// DescKeys for init permission setup output.
const (
- DescKeyWriteInitPermsAllow = "write.init-perms-allow"
- DescKeyWriteInitPermsAllowDeny = "write.init-perms-allow-deny"
- DescKeyWriteInitPermsDeduped = "write.init-perms-deduped"
- DescKeyWriteInitPermsDeny = "write.init-perms-deny"
+ // DescKeyWriteInitPermsAllow is the text key for write init perms allow
+ // messages.
+ DescKeyWriteInitPermsAllow = "write.init-perms-allow"
+ // DescKeyWriteInitPermsAllowDeny is the text key for write init perms allow
+ // deny messages.
+ DescKeyWriteInitPermsAllowDeny = "write.init-perms-allow-deny"
+ // DescKeyWriteInitPermsDeduped is the text key for write init perms deduped
+ // messages.
+ DescKeyWriteInitPermsDeduped = "write.init-perms-deduped"
+ // DescKeyWriteInitPermsDeny is the text key for write init perms deny
+ // messages.
+ DescKeyWriteInitPermsDeny = "write.init-perms-deny"
+ // DescKeyWriteInitPermsMergedDeduped is the text key for write init perms
+ // merged deduped messages.
DescKeyWriteInitPermsMergedDeduped = "write.init-perms-merged-deduped"
)
// DescKeys for init plugin enablement output.
const (
+ // DescKeyWriteInitPluginAlreadyEnabled is the text key for write init plugin
+ // already enabled messages.
DescKeyWriteInitPluginAlreadyEnabled = "write.init-plugin-already-enabled"
- DescKeyWriteInitPluginEnabled = "write.init-plugin-enabled"
- DescKeyWriteInitPluginSkipped = "write.init-plugin-skipped"
+ // DescKeyWriteInitPluginEnabled is the text key for write init plugin enabled
+ // messages.
+ DescKeyWriteInitPluginEnabled = "write.init-plugin-enabled"
+ // DescKeyWriteInitPluginSkipped is the text key for write init plugin skipped
+ // messages.
+ DescKeyWriteInitPluginSkipped = "write.init-plugin-skipped"
)
// DescKeys for init scratchpad setup output.
const (
+ // DescKeyWriteInitScratchpadKeyCreated is the text key for write init
+ // scratchpad key created messages.
DescKeyWriteInitScratchpadKeyCreated = "write.init-scratchpad-key-created"
- DescKeyWriteInitScratchpadNoKey = "write.init-scratchpad-no-key"
- DescKeyWriteInitScratchpadPlaintext = "write.init-scratchpad-plaintext"
+ // DescKeyWriteInitScratchpadNoKey is the text key for write init scratchpad
+ // no key messages.
+ DescKeyWriteInitScratchpadNoKey = "write.init-scratchpad-no-key"
+ // DescKeyWriteInitScratchpadPlaintext is the text key for write init
+ // scratchpad plaintext messages.
+ DescKeyWriteInitScratchpadPlaintext = "write.init-scratchpad-plaintext"
)
// DescKeys for init skip and directory output.
const (
- DescKeyWriteInitSkippedDir = "write.init-skipped-dir"
+ // DescKeyWriteInitSkippedDir is the text key for write init skipped dir
+ // messages.
+ DescKeyWriteInitSkippedDir = "write.init-skipped-dir"
+ // DescKeyWriteInitSkippedPlain is the text key for write init skipped plain
+ // messages.
DescKeyWriteInitSkippedPlain = "write.init-skipped-plain"
)
// DescKeys for init section update output.
const (
+ // DescKeyWriteInitUpdatedCtxSection is the text key for write init updated
+ // ctx section messages.
DescKeyWriteInitUpdatedCtxSection = "write.init-updated-ctx-section"
)
// DescKeys for init completion output.
const (
- DescKeyWriteInitGettingStartedSaved = "write.init-getting-started-saved"
+ // DescKeyWriteInitGettingStartedSaved is the text key for write init getting
+ // started saved messages.
+ DescKeyWriteInitGettingStartedSaved = "write.init-getting-started-saved"
+ // DescKeyWriteInitSettingUpPermissions is the text key for write init setting
+ // up permissions messages.
DescKeyWriteInitSettingUpPermissions = "write.init-setting-up-permissions"
- DescKeyWriteInitWarnNonFatal = "write.init-warn-non-fatal"
- DescKeyWriteInitialized = "write.initialized"
+ // DescKeyWriteInitWarnNonFatal is the text key for write init warn non fatal
+ // messages.
+ DescKeyWriteInitWarnNonFatal = "write.init-warn-non-fatal"
+ // DescKeyWriteInitialized is the text key for write initialized messages.
+ DescKeyWriteInitialized = "write.initialized"
)
// Init component labels for InfoWarnNonFatal diagnostic output.
const (
+ // DescKeyInitLabelEntryTemplates is the text key for init label entry
+ // templates messages.
DescKeyInitLabelEntryTemplates = "init.label-entry-templates"
- DescKeyInitLabelScratchpad = "init.label-scratchpad"
- DescKeyInitLabelProjectDirs = "init.label-project-dirs"
- DescKeyInitLabelPermissions = "init.label-permissions"
- DescKeyInitLabelPluginEnable = "init.label-plugin-enable"
+ // DescKeyInitLabelScratchpad is the text key for init label scratchpad
+ // messages.
+ DescKeyInitLabelScratchpad = "init.label-scratchpad"
+ // DescKeyInitLabelProjectDirs is the text key for init label project dirs
+ // messages.
+ DescKeyInitLabelProjectDirs = "init.label-project-dirs"
+ // DescKeyInitLabelPermissions is the text key for init label permissions
+ // messages.
+ DescKeyInitLabelPermissions = "init.label-permissions"
+ // DescKeyInitLabelPluginEnable is the text key for init label plugin enable
+ // messages.
+ DescKeyInitLabelPluginEnable = "init.label-plugin-enable"
)
// Init confirmation prompts and mode labels.
const (
+ // DescKeyInitConfirmClaude is the text key for init confirm claude messages.
DescKeyInitConfirmClaude = "init.confirm-claude"
)
diff --git a/internal/config/embed/text/journal.go b/internal/config/embed/text/journal.go
index 77eeff39b..64d656979 100644
--- a/internal/config/embed/text/journal.go
+++ b/internal/config/embed/text/journal.go
@@ -8,52 +8,140 @@ package text
// DescKeys for journal output.
const (
- DescKeyJournalConsolidateCount = "journal.consolidate-count"
- DescKeyJournalProjectLabel = "journal.project-label"
- DescKeyJournalMocBrowseBy = "journal.moc.browse-by"
- DescKeyJournalMocFilePageStats = "journal.moc.file-page-stats"
- DescKeyJournalMocFileStats = "journal.moc.file-stats"
- DescKeyJournalMocFilesDesc = "journal.moc.files-description"
- DescKeyJournalMocNavDescription = "journal.moc.nav-description"
- DescKeyJournalMocSessionLink = "journal.moc.session-link"
- DescKeyJournalMocTopicPageStats = "journal.moc.topic-page-stats"
- DescKeyJournalMocTopicStats = "journal.moc.topic-stats"
- DescKeyJournalMocTopicsDesc = "journal.moc.topics-description"
- DescKeyJournalMocTopicsLabel = "journal.moc.topics-label"
- DescKeyJournalMocTypeLabel = "journal.moc.type-label"
- DescKeyJournalMocTypePageStats = "journal.moc.type-page-stats"
- DescKeyJournalMocTypeStats = "journal.moc.type-stats"
- DescKeyJournalMocTypesDesc = "journal.moc.types-description"
- DescKeyJournalMocBrowseItem = "journal.moc.browse-item"
- DescKeyJournalMocHeadingTopics = "journal.moc.heading-topics"
- DescKeyJournalMocHeadingPopular = "journal.moc.heading-popular"
+ // DescKeyJournalConsolidateCount is the text key for journal consolidate
+ // count messages.
+ DescKeyJournalConsolidateCount = "journal.consolidate-count"
+ // DescKeyJournalProjectLabel is the text key for journal project label
+ // messages.
+ DescKeyJournalProjectLabel = "journal.project-label"
+ // DescKeyJournalMocBrowseBy is the text key for journal moc browse by
+ // messages.
+ DescKeyJournalMocBrowseBy = "journal.moc.browse-by"
+ // DescKeyJournalMocFilePageStats is the text key for journal moc file page
+ // stats messages.
+ DescKeyJournalMocFilePageStats = "journal.moc.file-page-stats"
+ // DescKeyJournalMocFileStats is the text key for journal moc file stats
+ // messages.
+ DescKeyJournalMocFileStats = "journal.moc.file-stats"
+ // DescKeyJournalMocFilesDesc is the text key for journal moc files desc
+ // messages.
+ DescKeyJournalMocFilesDesc = "journal.moc.files-description"
+ // DescKeyJournalMocNavDescription is the text key for journal moc nav
+ // description messages.
+ DescKeyJournalMocNavDescription = "journal.moc.nav-description"
+ // DescKeyJournalMocSessionLink is the text key for journal moc session link
+ // messages.
+ DescKeyJournalMocSessionLink = "journal.moc.session-link"
+ // DescKeyJournalMocTopicPageStats is the text key for journal moc topic page
+ // stats messages.
+ DescKeyJournalMocTopicPageStats = "journal.moc.topic-page-stats"
+ // DescKeyJournalMocTopicStats is the text key for journal moc topic stats
+ // messages.
+ DescKeyJournalMocTopicStats = "journal.moc.topic-stats"
+ // DescKeyJournalMocTopicsDesc is the text key for journal moc topics desc
+ // messages.
+ DescKeyJournalMocTopicsDesc = "journal.moc.topics-description"
+ // DescKeyJournalMocTopicsLabel is the text key for journal moc topics label
+ // messages.
+ DescKeyJournalMocTopicsLabel = "journal.moc.topics-label"
+ // DescKeyJournalMocTypeLabel is the text key for journal moc type label
+ // messages.
+ DescKeyJournalMocTypeLabel = "journal.moc.type-label"
+ // DescKeyJournalMocTypePageStats is the text key for journal moc type page
+ // stats messages.
+ DescKeyJournalMocTypePageStats = "journal.moc.type-page-stats"
+ // DescKeyJournalMocTypeStats is the text key for journal moc type stats
+ // messages.
+ DescKeyJournalMocTypeStats = "journal.moc.type-stats"
+ // DescKeyJournalMocTypesDesc is the text key for journal moc types desc
+ // messages.
+ DescKeyJournalMocTypesDesc = "journal.moc.types-description"
+ // DescKeyJournalMocBrowseItem is the text key for journal moc browse item
+ // messages.
+ DescKeyJournalMocBrowseItem = "journal.moc.browse-item"
+ // DescKeyJournalMocHeadingTopics is the text key for journal moc heading
+ // topics messages.
+ DescKeyJournalMocHeadingTopics = "journal.moc.heading-topics"
+ // DescKeyJournalMocHeadingPopular is the text key for journal moc heading
+ // popular messages.
+ DescKeyJournalMocHeadingPopular = "journal.moc.heading-popular"
+ // DescKeyJournalMocHeadingLongtail is the text key for journal moc heading
+ // longtail messages.
DescKeyJournalMocHeadingLongtail = "journal.moc.heading-longtail"
- DescKeyJournalMocHeadingFiles = "journal.moc.heading-files"
- DescKeyJournalMocHeadingFreq = "journal.moc.heading-frequent"
- DescKeyJournalMocHeadingSingle = "journal.moc.heading-single"
- DescKeyJournalMocHeadingTypes = "journal.moc.heading-types"
- DescKeyJournalMocHeadingMonth = "journal.moc.heading-month"
- DescKeyJournalMocItemSessions = "journal.moc.item-sessions"
- DescKeyJournalMocItemNamed = "journal.moc.item-named"
- DescKeyJournalMocItemFileSess = "journal.moc.item-file-sessions"
- DescKeyJournalMocItemFileNamed = "journal.moc.item-file-named"
- DescKeyJournalMocItemListed = "journal.moc.item-listed"
- DescKeyJournalMocPageTitle = "journal.moc.page-title"
- DescKeyJournalMocCodeTitle = "journal.moc.code-title"
- DescKeyJournalMocTopicsMocLink = "journal.moc.topics-moc-link"
- DescKeyJournalMocTopicSep = "journal.moc.topic-separator"
+ // DescKeyJournalMocHeadingFiles is the text key for journal moc heading files
+ // messages.
+ DescKeyJournalMocHeadingFiles = "journal.moc.heading-files"
+ // DescKeyJournalMocHeadingFreq is the text key for journal moc heading freq
+ // messages.
+ DescKeyJournalMocHeadingFreq = "journal.moc.heading-frequent"
+ // DescKeyJournalMocHeadingSingle is the text key for journal moc heading
+ // single messages.
+ DescKeyJournalMocHeadingSingle = "journal.moc.heading-single"
+ // DescKeyJournalMocHeadingTypes is the text key for journal moc heading types
+ // messages.
+ DescKeyJournalMocHeadingTypes = "journal.moc.heading-types"
+ // DescKeyJournalMocHeadingMonth is the text key for journal moc heading month
+ // messages.
+ DescKeyJournalMocHeadingMonth = "journal.moc.heading-month"
+ // DescKeyJournalMocItemSessions is the text key for journal moc item sessions
+ // messages.
+ DescKeyJournalMocItemSessions = "journal.moc.item-sessions"
+ // DescKeyJournalMocItemNamed is the text key for journal moc item named
+ // messages.
+ DescKeyJournalMocItemNamed = "journal.moc.item-named"
+ // DescKeyJournalMocItemFileSess is the text key for journal moc item file
+ // sess messages.
+ DescKeyJournalMocItemFileSess = "journal.moc.item-file-sessions"
+ // DescKeyJournalMocItemFileNamed is the text key for journal moc item file
+ // named messages.
+ DescKeyJournalMocItemFileNamed = "journal.moc.item-file-named"
+ // DescKeyJournalMocItemListed is the text key for journal moc item listed
+ // messages.
+ DescKeyJournalMocItemListed = "journal.moc.item-listed"
+ // DescKeyJournalMocPageTitle is the text key for journal moc page title
+ // messages.
+ DescKeyJournalMocPageTitle = "journal.moc.page-title"
+ // DescKeyJournalMocCodeTitle is the text key for journal moc code title
+ // messages.
+ DescKeyJournalMocCodeTitle = "journal.moc.code-title"
+ // DescKeyJournalMocTopicsMocLink is the text key for journal moc topics moc
+ // link messages.
+ DescKeyJournalMocTopicsMocLink = "journal.moc.topics-moc-link"
+ // DescKeyJournalMocTopicSep is the text key for journal moc topic sep
+ // messages.
+ DescKeyJournalMocTopicSep = "journal.moc.topic-separator"
)
// DescKeys for journal write output.
const (
- DescKeyWriteJournalOrphanRemoved = "write.journal-orphan-removed"
- DescKeyWriteJournalSiteBuilding = "write.journal-site-building"
+ // DescKeyWriteJournalOrphanRemoved is the text key for write journal orphan
+ // removed messages.
+ DescKeyWriteJournalOrphanRemoved = "write.journal-orphan-removed"
+ // DescKeyWriteJournalSiteBuilding is the text key for write journal site
+ // building messages.
+ DescKeyWriteJournalSiteBuilding = "write.journal-site-building"
+ // DescKeyWriteJournalSiteGeneratedBlock is the text key for write journal
+ // site generated block messages.
DescKeyWriteJournalSiteGeneratedBlock = "write.journal-site-generated-block"
- DescKeyWriteJournalSiteStarting = "write.journal-site-starting"
- DescKeyWriteJournalSyncLocked = "write.journal-sync-locked"
- DescKeyWriteJournalSyncLockedCount = "write.journal-sync-locked-count"
- DescKeyWriteJournalSyncMatch = "write.journal-sync-match"
- DescKeyWriteJournalSyncNone = "write.journal-sync-none"
- DescKeyWriteJournalSyncUnlocked = "write.journal-sync-unlocked"
- DescKeyWriteJournalSyncUnlockedCount = "write.journal-sync-unlocked-count"
+ // DescKeyWriteJournalSiteStarting is the text key for write journal site
+ // starting messages.
+ DescKeyWriteJournalSiteStarting = "write.journal-site-starting"
+ // DescKeyWriteJournalSyncLocked is the text key for write journal sync locked
+ // messages.
+ DescKeyWriteJournalSyncLocked = "write.journal-sync-locked"
+ // DescKeyWriteJournalSyncLockedCount is the text key for write journal sync
+ // locked count messages.
+ DescKeyWriteJournalSyncLockedCount = "write.journal-sync-locked-count"
+ // DescKeyWriteJournalSyncMatch is the text key for write journal sync match
+ // messages.
+ DescKeyWriteJournalSyncMatch = "write.journal-sync-match"
+ // DescKeyWriteJournalSyncNone is the text key for write journal sync none
+ // messages.
+ DescKeyWriteJournalSyncNone = "write.journal-sync-none"
+ // DescKeyWriteJournalSyncUnlocked is the text key for write journal sync
+ // unlocked messages.
+ DescKeyWriteJournalSyncUnlocked = "write.journal-sync-unlocked"
+ // DescKeyWriteJournalSyncUnlockedCount is the text key for write journal sync
+ // unlocked count messages.
+ DescKeyWriteJournalSyncUnlockedCount = "write.journal-sync-unlocked-count"
)
diff --git a/internal/config/embed/text/journal_source.go b/internal/config/embed/text/journal_source.go
index 46c916641..d460a8b4f 100644
--- a/internal/config/embed/text/journal_source.go
+++ b/internal/config/embed/text/journal_source.go
@@ -8,39 +8,101 @@ package text
// DescKeys for journal source display.
const (
- DescKeyJournalSourceMetaSummary = "journal.source.meta-summary"
- DescKeyJournalSourceTokenSummary = "journal.source.token-summary"
+ // DescKeyJournalSourceMetaSummary is the text key for journal source meta
+ // summary messages.
+ DescKeyJournalSourceMetaSummary = "journal.source.meta-summary"
+ // DescKeyJournalSourceTokenSummary is the text key for journal source token
+ // summary messages.
+ DescKeyJournalSourceTokenSummary = "journal.source.token-summary"
+ // DescKeyJournalSourceToolCountLine is the text key for journal source tool
+ // count line messages.
DescKeyJournalSourceToolCountLine = "journal.source.tool-count-line"
)
// DescKeys for journal source display write output.
const (
- DescKeyWriteJournalSourceAborted = "write.journal-source-aborted"
- DescKeyWriteJournalSourceAmbiguousHint = "write.journal-source-ambiguous-hint"
- DescKeyWriteJournalSourceAmbiguousLine = "write.journal-source-ambiguous-line"
- DescKeyWriteJournalSourceAmbiguousMatch = "write.journal-source-ambiguous-match"
- DescKeyWriteJournalSourceAmbiguousMatchStderr = "write.journal-source-ambiguous-match-stderr"
- DescKeyWriteJournalSourceCodeBlock = "write.journal-source-code-block"
- DescKeyWriteJournalSourceConversationTurn = "write.journal-source-conversation-turn"
- DescKeyWriteJournalSourceImportedNew = "write.journal-source-imported-new"
- DescKeyWriteJournalSourceImportedOK = "write.journal-source-imported-ok"
- DescKeyWriteJournalSourceImportedOKSuffix = "write.journal-source-imported-ok-suffix"
- DescKeyWriteJournalSourceFooterLimit = "write.journal-source-footer-limit"
- DescKeyWriteJournalSourceListHeader = "write.journal-source-list-header"
- DescKeyWriteJournalSourceListHeaderFiltered = "write.journal-source-list-header-filtered"
- DescKeyWriteJournalSourceMoreTurns = "write.journal-source-more-turns"
- DescKeyWriteJournalSourceNoFiltersMatch = "write.journal-source-no-filters-match"
- DescKeyWriteJournalSourceNoSessions = "write.journal-source-no-sessions"
- DescKeyWriteJournalSourceNoSessionsHintAll = "write.journal-source-no-sessions-hint-all"
- DescKeyWriteJournalSourceNoSessionsProject = "write.journal-source-no-sessions-project"
+ // DescKeyWriteJournalSourceAborted is the text key for write journal source
+ // aborted messages.
+ DescKeyWriteJournalSourceAborted = "write.journal-source-aborted"
+ // DescKeyWriteJournalSourceAmbiguousHint is the text key for write journal
+ // source ambiguous hint messages.
+ DescKeyWriteJournalSourceAmbiguousHint = "write.journal-source-ambiguous-hint"
+ // DescKeyWriteJournalSourceAmbiguousLine is the text key for write journal
+ // source ambiguous line messages.
+ DescKeyWriteJournalSourceAmbiguousLine = "write.journal-source-ambiguous-line"
+ // DescKeyWriteJournalSourceAmbiguousMatch is the text key for write journal
+ // source ambiguous match messages.
+ DescKeyWriteJournalSourceAmbiguousMatch = "write.journal-source-ambiguous-match"
+ // DescKeyWriteJournalSourceAmbiguousMatchStderr is the text key for write
+ // journal source ambiguous match stderr messages.
+ DescKeyWriteJournalSourceAmbiguousMatchStderr = "write.journal-source-ambiguous-match-stderr"
+ // DescKeyWriteJournalSourceCodeBlock is the text key for write journal source
+ // code block messages.
+ DescKeyWriteJournalSourceCodeBlock = "write.journal-source-code-block"
+ // DescKeyWriteJournalSourceConversationTurn is the text key for write journal
+ // source conversation turn messages.
+ DescKeyWriteJournalSourceConversationTurn = "write.journal-source-conversation-turn"
+ // DescKeyWriteJournalSourceImportedNew is the text key for write journal
+ // source imported new messages.
+ DescKeyWriteJournalSourceImportedNew = "write.journal-source-imported-new"
+ // DescKeyWriteJournalSourceImportedOK is the text key for write journal
+ // source imported ok messages.
+ DescKeyWriteJournalSourceImportedOK = "write.journal-source-imported-ok"
+ // DescKeyWriteJournalSourceImportedOKSuffix is the text key for write journal
+ // source imported ok suffix messages.
+ DescKeyWriteJournalSourceImportedOKSuffix = "write.journal-source-imported-ok-suffix"
+ // DescKeyWriteJournalSourceFooterLimit is the text key for write journal
+ // source footer limit messages.
+ DescKeyWriteJournalSourceFooterLimit = "write.journal-source-footer-limit"
+ // DescKeyWriteJournalSourceListHeader is the text key for write journal
+ // source list header messages.
+ DescKeyWriteJournalSourceListHeader = "write.journal-source-list-header"
+ // DescKeyWriteJournalSourceListHeaderFiltered is the text key for write
+ // journal source list header filtered messages.
+ DescKeyWriteJournalSourceListHeaderFiltered = "write.journal-source-list-header-filtered"
+ // DescKeyWriteJournalSourceMoreTurns is the text key for write journal source
+ // more turns messages.
+ DescKeyWriteJournalSourceMoreTurns = "write.journal-source-more-turns"
+ // DescKeyWriteJournalSourceNoFiltersMatch is the text key for write journal
+ // source no filters match messages.
+ DescKeyWriteJournalSourceNoFiltersMatch = "write.journal-source-no-filters-match"
+ // DescKeyWriteJournalSourceNoSessions is the text key for write journal
+ // source no sessions messages.
+ DescKeyWriteJournalSourceNoSessions = "write.journal-source-no-sessions"
+ // DescKeyWriteJournalSourceNoSessionsHintAll is the text key for write
+ // journal source no sessions hint all messages.
+ DescKeyWriteJournalSourceNoSessionsHintAll = "write.journal-source-no-sessions-hint-all"
+ // DescKeyWriteJournalSourceNoSessionsProject is the text key for write
+ // journal source no sessions project messages.
+ DescKeyWriteJournalSourceNoSessionsProject = "write.journal-source-no-sessions-project"
+ // DescKeyWriteJournalSourceNoSessionsProjectHint is the text key for write
+ // journal source no sessions project hint messages.
DescKeyWriteJournalSourceNoSessionsProjectHint = "write.journal-source-no-sessions-project-hint"
- DescKeyWriteJournalSourceNumberedItem = "write.journal-source-numbered-item"
- DescKeyWriteJournalSourceRenamed = "write.journal-source-renamed"
- DescKeyWriteJournalSourceSkip = "write.journal-source-skip"
- DescKeyWriteJournalSourceSkipped = "write.journal-source-skipped"
- DescKeyWriteJournalSourceStorageHint = "write.journal-source-storage-hint"
- DescKeyWriteJournalSourceUpdated = "write.journal-source-updated"
- DescKeyWriteJournalSourceDetailString = "write.journal-source-detail-string"
- DescKeyWriteJournalSourceDetailInt = "write.journal-source-detail-int"
- DescKeyWriteJournalSourceSectionHeading = "write.journal-source-section-heading"
+ // DescKeyWriteJournalSourceNumberedItem is the text key for write journal
+ // source numbered item messages.
+ DescKeyWriteJournalSourceNumberedItem = "write.journal-source-numbered-item"
+ // DescKeyWriteJournalSourceRenamed is the text key for write journal source
+ // renamed messages.
+ DescKeyWriteJournalSourceRenamed = "write.journal-source-renamed"
+ // DescKeyWriteJournalSourceSkip is the text key for write journal source skip
+ // messages.
+ DescKeyWriteJournalSourceSkip = "write.journal-source-skip"
+ // DescKeyWriteJournalSourceSkipped is the text key for write journal source
+ // skipped messages.
+ DescKeyWriteJournalSourceSkipped = "write.journal-source-skipped"
+ // DescKeyWriteJournalSourceStorageHint is the text key for write journal
+ // source storage hint messages.
+ DescKeyWriteJournalSourceStorageHint = "write.journal-source-storage-hint"
+ // DescKeyWriteJournalSourceUpdated is the text key for write journal source
+ // updated messages.
+ DescKeyWriteJournalSourceUpdated = "write.journal-source-updated"
+ // DescKeyWriteJournalSourceDetailString is the text key for write journal
+ // source detail string messages.
+ DescKeyWriteJournalSourceDetailString = "write.journal-source-detail-string"
+ // DescKeyWriteJournalSourceDetailInt is the text key for write journal source
+ // detail int messages.
+ DescKeyWriteJournalSourceDetailInt = "write.journal-source-detail-int"
+ // DescKeyWriteJournalSourceSectionHeading is the text key for write journal
+ // source section heading messages.
+ DescKeyWriteJournalSourceSectionHeading = "write.journal-source-section-heading"
)
diff --git a/internal/config/embed/text/label.go b/internal/config/embed/text/label.go
index 493632c9f..e40a6cf04 100644
--- a/internal/config/embed/text/label.go
+++ b/internal/config/embed/text/label.go
@@ -8,16 +8,26 @@ package text
// DescKeys for navigation display labels.
const (
- DescKeyLabelHome = "label.home"
+ // DescKeyLabelHome is the text key for label home messages.
+ DescKeyLabelHome = "label.home"
+ // DescKeyLabelTopics is the text key for label topics messages.
DescKeyLabelTopics = "label.topics"
- DescKeyLabelFiles = "label.files"
- DescKeyLabelTypes = "label.types"
+ // DescKeyLabelFiles is the text key for label files messages.
+ DescKeyLabelFiles = "label.files"
+ // DescKeyLabelTypes is the text key for label types messages.
+ DescKeyLabelTypes = "label.types"
)
// DescKeys for UI emphasis labels.
const (
- DescKeyLabelBoldReminder = "label.bold-reminder"
+ // DescKeyLabelBoldReminder is the text key for label bold reminder messages.
+ DescKeyLabelBoldReminder = "label.bold-reminder"
+ // DescKeyLabelBoldReminderFmt is the text key for label bold reminder fmt
+ // messages.
DescKeyLabelBoldReminderFmt = "label.bold-reminder-fmt"
- DescKeyLabelToolOutput = "label.tool-output"
- DescKeyLabelSuggestionMode = "label.suggestion-mode"
+ // DescKeyLabelToolOutput is the text key for label tool output messages.
+ DescKeyLabelToolOutput = "label.tool-output"
+ // DescKeyLabelSuggestionMode is the text key for label suggestion mode
+ // messages.
+ DescKeyLabelSuggestionMode = "label.suggestion-mode"
)
diff --git a/internal/config/embed/text/label_col.go b/internal/config/embed/text/label_col.go
index 4a51dddd7..4b2fc38cc 100644
--- a/internal/config/embed/text/label_col.go
+++ b/internal/config/embed/text/label_col.go
@@ -8,10 +8,16 @@ package text
// Column headers (headings.yaml).
const (
- DescKeyLabelColSlug = "label.col-slug"
- DescKeyLabelColProject = "label.col-project"
- DescKeyLabelColDate = "label.col-date"
+ // DescKeyLabelColSlug is the text key for label col slug messages.
+ DescKeyLabelColSlug = "label.col-slug"
+ // DescKeyLabelColProject is the text key for label col project messages.
+ DescKeyLabelColProject = "label.col-project"
+ // DescKeyLabelColDate is the text key for label col date messages.
+ DescKeyLabelColDate = "label.col-date"
+ // DescKeyLabelColDuration is the text key for label col duration messages.
DescKeyLabelColDuration = "label.col-duration"
- DescKeyLabelColTurns = "label.col-turns"
- DescKeyLabelColUsage = "label.col-usage"
+ // DescKeyLabelColTurns is the text key for label col turns messages.
+ DescKeyLabelColTurns = "label.col-turns"
+ // DescKeyLabelColUsage is the text key for label col usage messages.
+ DescKeyLabelColUsage = "label.col-usage"
)
diff --git a/internal/config/embed/text/label_hint.go b/internal/config/embed/text/label_hint.go
index 606067a06..eff9aecbb 100644
--- a/internal/config/embed/text/label_hint.go
+++ b/internal/config/embed/text/label_hint.go
@@ -8,6 +8,9 @@ package text
// Hints and markers (headings.yaml).
const (
- DescKeyLabelHintUseFull = "label.hint-use-full"
+ // DescKeyLabelHintUseFull is the text key for label hint use full messages.
+ DescKeyLabelHintUseFull = "label.hint-use-full"
+ // DescKeyLabelHintUseAllProjects is the text key for label hint use all
+ // projects messages.
DescKeyLabelHintUseAllProjects = "label.hint-use-all-projects"
)
diff --git a/internal/config/embed/text/label_inline.go b/internal/config/embed/text/label_inline.go
index 3ae493c4e..6061b4a36 100644
--- a/internal/config/embed/text/label_inline.go
+++ b/internal/config/embed/text/label_inline.go
@@ -8,6 +8,8 @@ package text
// DescKeys for inline display labels.
const (
- DescKeyLabelInlineTool = "label.inline-tool"
+ // DescKeyLabelInlineTool is the text key for label inline tool messages.
+ DescKeyLabelInlineTool = "label.inline-tool"
+ // DescKeyLabelInlineError is the text key for label inline error messages.
DescKeyLabelInlineError = "label.inline-error"
)
diff --git a/internal/config/embed/text/label_loop.go b/internal/config/embed/text/label_loop.go
index 4070218da..5a9cc4bdc 100644
--- a/internal/config/embed/text/label_loop.go
+++ b/internal/config/embed/text/label_loop.go
@@ -8,5 +8,6 @@ package text
// DescKeys for loop display labels.
const (
+ // DescKeyLabelLoopComplete is the text key for label loop complete messages.
DescKeyLabelLoopComplete = "label.loop-complete"
)
diff --git a/internal/config/embed/text/label_meta.go b/internal/config/embed/text/label_meta.go
index 9fa848d66..ad6a2ad1a 100644
--- a/internal/config/embed/text/label_meta.go
+++ b/internal/config/embed/text/label_meta.go
@@ -8,32 +8,63 @@ package text
// DescKeys for compact metadata labels.
const (
- DescKeyLabelMetaID = "label.meta-id"
- DescKeyLabelMetaDate = "label.meta-date"
- DescKeyLabelMetaTime = "label.meta-time"
+ // DescKeyLabelMetaID is the text key for label meta id messages.
+ DescKeyLabelMetaID = "label.meta-id"
+ // DescKeyLabelMetaDate is the text key for label meta date messages.
+ DescKeyLabelMetaDate = "label.meta-date"
+ // DescKeyLabelMetaTime is the text key for label meta time messages.
+ DescKeyLabelMetaTime = "label.meta-time"
+ // DescKeyLabelMetaDuration is the text key for label meta duration messages.
DescKeyLabelMetaDuration = "label.meta-duration"
- DescKeyLabelMetaTool = "label.meta-tool"
- DescKeyLabelMetaProject = "label.meta-project"
- DescKeyLabelMetaBranch = "label.meta-branch"
- DescKeyLabelMetaModel = "label.meta-model"
- DescKeyLabelMetaTurns = "label.meta-turns"
- DescKeyLabelMetaTokens = "label.meta-tokens"
- DescKeyLabelMetaParts = "label.meta-parts"
+ // DescKeyLabelMetaTool is the text key for label meta tool messages.
+ DescKeyLabelMetaTool = "label.meta-tool"
+ // DescKeyLabelMetaProject is the text key for label meta project messages.
+ DescKeyLabelMetaProject = "label.meta-project"
+ // DescKeyLabelMetaBranch is the text key for label meta branch messages.
+ DescKeyLabelMetaBranch = "label.meta-branch"
+ // DescKeyLabelMetaModel is the text key for label meta model messages.
+ DescKeyLabelMetaModel = "label.meta-model"
+ // DescKeyLabelMetaTurns is the text key for label meta turns messages.
+ DescKeyLabelMetaTurns = "label.meta-turns"
+ // DescKeyLabelMetaTokens is the text key for label meta tokens messages.
+ DescKeyLabelMetaTokens = "label.meta-tokens"
+ // DescKeyLabelMetaParts is the text key for label meta parts messages.
+ DescKeyLabelMetaParts = "label.meta-parts"
)
// DescKeys for full metadata labels.
const (
- DescKeyLabelMetadataID = "label.metadata-id"
- DescKeyLabelMetadataTime = "label.metadata-time"
- DescKeyLabelMetadataDuration = "label.metadata-duration"
- DescKeyLabelMetadataTool = "label.metadata-tool"
- DescKeyLabelMetadataProject = "label.metadata-project"
- DescKeyLabelMetadataBranch = "label.metadata-branch"
- DescKeyLabelMetadataModel = "label.metadata-model"
- DescKeyLabelMetadataTurns = "label.metadata-turns"
- DescKeyLabelMetadataStarted = "label.metadata-started"
- DescKeyLabelMetadataMessages = "label.metadata-messages"
- DescKeyLabelMetadataInputUsage = "label.metadata-input-usage"
+ // DescKeyLabelMetadataID is the text key for label metadata id messages.
+ DescKeyLabelMetadataID = "label.metadata-id"
+ // DescKeyLabelMetadataTime is the text key for label metadata time messages.
+ DescKeyLabelMetadataTime = "label.metadata-time"
+ // DescKeyLabelMetadataDuration is the text key for label metadata duration
+ // messages.
+ DescKeyLabelMetadataDuration = "label.metadata-duration"
+ // DescKeyLabelMetadataTool is the text key for label metadata tool messages.
+ DescKeyLabelMetadataTool = "label.metadata-tool"
+ // DescKeyLabelMetadataProject is the text key for label metadata project
+ // messages.
+ DescKeyLabelMetadataProject = "label.metadata-project"
+ // DescKeyLabelMetadataBranch is the text key for label metadata branch
+ // messages.
+ DescKeyLabelMetadataBranch = "label.metadata-branch"
+ // DescKeyLabelMetadataModel is the text key for label metadata model messages.
+ DescKeyLabelMetadataModel = "label.metadata-model"
+ // DescKeyLabelMetadataTurns is the text key for label metadata turns messages.
+ DescKeyLabelMetadataTurns = "label.metadata-turns"
+ // DescKeyLabelMetadataStarted is the text key for label metadata started
+ // messages.
+ DescKeyLabelMetadataStarted = "label.metadata-started"
+ // DescKeyLabelMetadataMessages is the text key for label metadata messages
+ // messages.
+ DescKeyLabelMetadataMessages = "label.metadata-messages"
+ // DescKeyLabelMetadataInputUsage is the text key for label metadata input
+ // usage messages.
+ DescKeyLabelMetadataInputUsage = "label.metadata-input-usage"
+ // DescKeyLabelMetadataOutputUsage is the text key for label metadata output
+ // usage messages.
DescKeyLabelMetadataOutputUsage = "label.metadata-output-usage"
- DescKeyLabelMetadataTotal = "label.metadata-total"
+ // DescKeyLabelMetadataTotal is the text key for label metadata total messages.
+ DescKeyLabelMetadataTotal = "label.metadata-total"
)
diff --git a/internal/config/embed/text/label_reason.go b/internal/config/embed/text/label_reason.go
index 5701cd14b..748911762 100644
--- a/internal/config/embed/text/label_reason.go
+++ b/internal/config/embed/text/label_reason.go
@@ -8,6 +8,8 @@ package text
// DescKeys for reason labels.
const (
- DescKeyLabelReasonExists = "label.reason-exists"
+ // DescKeyLabelReasonExists is the text key for label reason exists messages.
+ DescKeyLabelReasonExists = "label.reason-exists"
+ // DescKeyLabelReasonUpdated is the text key for label reason updated messages.
DescKeyLabelReasonUpdated = "label.reason-updated"
)
diff --git a/internal/config/embed/text/label_role.go b/internal/config/embed/text/label_role.go
index 112f2e122..5ffc96bbc 100644
--- a/internal/config/embed/text/label_role.go
+++ b/internal/config/embed/text/label_role.go
@@ -8,6 +8,8 @@ package text
// Role labels (headings.yaml).
const (
- DescKeyLabelRoleUser = "label.role-user"
+ // DescKeyLabelRoleUser is the text key for label role user messages.
+ DescKeyLabelRoleUser = "label.role-user"
+ // DescKeyLabelRoleAssistant is the text key for label role assistant messages.
DescKeyLabelRoleAssistant = "label.role-assistant"
)
diff --git a/internal/config/embed/text/label_section.go b/internal/config/embed/text/label_section.go
index 35c5d4489..ea991f7ec 100644
--- a/internal/config/embed/text/label_section.go
+++ b/internal/config/embed/text/label_section.go
@@ -8,7 +8,13 @@ package text
// Section headers (headings.yaml).
const (
- DescKeyLabelSectionToolUsage = "label.section-tool-usage"
- DescKeyLabelSectionConversation = "label.section-conversation"
+ // DescKeyLabelSectionToolUsage is the text key for label section tool usage
+ // messages.
+ DescKeyLabelSectionToolUsage = "label.section-tool-usage"
+ // DescKeyLabelSectionConversation is the text key for label section
+ // conversation messages.
+ DescKeyLabelSectionConversation = "label.section-conversation"
+ // DescKeyLabelSectionConversationPreview is the text key for label section
+ // conversation preview messages.
DescKeyLabelSectionConversationPreview = "label.section-conversation-preview"
)
diff --git a/internal/config/embed/text/lock.go b/internal/config/embed/text/lock.go
index fb2c22d6e..c4025cc7f 100644
--- a/internal/config/embed/text/lock.go
+++ b/internal/config/embed/text/lock.go
@@ -8,7 +8,13 @@ package text
// DescKeys for lock management write output.
const (
- DescKeyWriteLockUnlockEntry = "write.lock-unlock-entry"
+ // DescKeyWriteLockUnlockEntry is the text key for write lock unlock entry
+ // messages.
+ DescKeyWriteLockUnlockEntry = "write.lock-unlock-entry"
+ // DescKeyWriteLockUnlockNoChanges is the text key for write lock unlock no
+ // changes messages.
DescKeyWriteLockUnlockNoChanges = "write.lock-unlock-no-changes"
- DescKeyWriteLockUnlockSummary = "write.lock-unlock-summary"
+ // DescKeyWriteLockUnlockSummary is the text key for write lock unlock summary
+ // messages.
+ DescKeyWriteLockUnlockSummary = "write.lock-unlock-summary"
)
diff --git a/internal/config/embed/text/loop.go b/internal/config/embed/text/loop.go
index f28b56620..2cbb1206b 100644
--- a/internal/config/embed/text/loop.go
+++ b/internal/config/embed/text/loop.go
@@ -8,7 +8,12 @@ package text
// DescKeys for loop write output.
const (
+ // DescKeyWriteLoopGeneratedBlock is the text key for write loop generated
+ // block messages.
DescKeyWriteLoopGeneratedBlock = "write.loop-generated-block"
- DescKeyWriteLoopMaxIterations = "write.loop-max-iterations"
- DescKeyWriteLoopUnlimited = "write.loop-unlimited"
+ // DescKeyWriteLoopMaxIterations is the text key for write loop max iterations
+ // messages.
+ DescKeyWriteLoopMaxIterations = "write.loop-max-iterations"
+ // DescKeyWriteLoopUnlimited is the text key for write loop unlimited messages.
+ DescKeyWriteLoopUnlimited = "write.loop-unlimited"
)
diff --git a/internal/config/embed/text/mark.go b/internal/config/embed/text/mark.go
index 9401055b4..d0e19eb33 100644
--- a/internal/config/embed/text/mark.go
+++ b/internal/config/embed/text/mark.go
@@ -8,7 +8,11 @@ package text
// DescKeys for marker operations.
const (
- DescKeyMarkJournalChecked = "mark-journal.checked"
- DescKeyMarkJournalMarked = "mark-journal.marked"
+ // DescKeyMarkJournalChecked is the text key for mark journal checked messages.
+ DescKeyMarkJournalChecked = "mark-journal.checked"
+ // DescKeyMarkJournalMarked is the text key for mark journal marked messages.
+ DescKeyMarkJournalMarked = "mark-journal.marked"
+ // DescKeyMarkWrappedUpConfirmed is the text key for mark wrapped up confirmed
+ // messages.
DescKeyMarkWrappedUpConfirmed = "mark-wrapped-up.confirmed"
)
diff --git a/internal/config/embed/text/mcp_compact.go b/internal/config/embed/text/mcp_compact.go
index 18113c007..105bef990 100644
--- a/internal/config/embed/text/mcp_compact.go
+++ b/internal/config/embed/text/mcp_compact.go
@@ -8,7 +8,13 @@ package text
// DescKeys for MCP compact output.
const (
- DescKeyMCPCompactMovedFormat = "mcp.compact-moved-format"
+ // DescKeyMCPCompactMovedFormat is the text key for mcp compact moved format
+ // messages.
+ DescKeyMCPCompactMovedFormat = "mcp.compact-moved-format"
+ // DescKeyMCPCompactArchiveWarning is the text key for mcp compact archive
+ // warning messages.
DescKeyMCPCompactArchiveWarning = "mcp.compact-archive-warning"
+ // DescKeyMCPCompactRemovedSectFmt is the text key for mcp compact removed
+ // sect fmt messages.
DescKeyMCPCompactRemovedSectFmt = "mcp.compact-removed-sections-format"
)
diff --git a/internal/config/embed/text/mcp_context.go b/internal/config/embed/text/mcp_context.go
index 046318c3b..afc4b5b54 100644
--- a/internal/config/embed/text/mcp_context.go
+++ b/internal/config/embed/text/mcp_context.go
@@ -8,5 +8,6 @@ package text
// DescKeys for MCP context rendering.
const (
+ // DescKeyMCPLoadContext is the text key for mcp load context messages.
DescKeyMCPLoadContext = "mcp.load-context"
)
diff --git a/internal/config/embed/text/mcp_drift.go b/internal/config/embed/text/mcp_drift.go
index 3c42dc60d..def250e34 100644
--- a/internal/config/embed/text/mcp_drift.go
+++ b/internal/config/embed/text/mcp_drift.go
@@ -8,10 +8,18 @@ package text
// DescKeys for MCP drift output.
const (
+ // DescKeyMCPDriftStatusFormat is the text key for mcp drift status format
+ // messages.
DescKeyMCPDriftStatusFormat = "mcp.drift-status-format"
- DescKeyMCPDriftViolations = "mcp.drift-violations"
- DescKeyMCPDriftWarnings = "mcp.drift-warnings"
- DescKeyMCPDriftOK = "mcp.drift-ok"
- DescKeyMCPDriftIssueFormat = "mcp.drift-issue-format"
- DescKeyMCPDriftOKFormat = "mcp.drift-ok-format"
+ // DescKeyMCPDriftViolations is the text key for mcp drift violations messages.
+ DescKeyMCPDriftViolations = "mcp.drift-violations"
+ // DescKeyMCPDriftWarnings is the text key for mcp drift warnings messages.
+ DescKeyMCPDriftWarnings = "mcp.drift-warnings"
+ // DescKeyMCPDriftOK is the text key for mcp drift ok messages.
+ DescKeyMCPDriftOK = "mcp.drift-ok"
+ // DescKeyMCPDriftIssueFormat is the text key for mcp drift issue format
+ // messages.
+ DescKeyMCPDriftIssueFormat = "mcp.drift-issue-format"
+ // DescKeyMCPDriftOKFormat is the text key for mcp drift ok format messages.
+ DescKeyMCPDriftOKFormat = "mcp.drift-ok-format"
)
diff --git a/internal/config/embed/text/mcp_err.go b/internal/config/embed/text/mcp_err.go
index 7bd6acaa0..7537dc740 100644
--- a/internal/config/embed/text/mcp_err.go
+++ b/internal/config/embed/text/mcp_err.go
@@ -8,16 +8,36 @@ package text
// DescKeys for MCP error messages.
const (
- DescKeyMCPErrMethodNotFound = "mcp.err-method-not-found"
- DescKeyMCPErrParse = "mcp.err-parse"
- DescKeyMCPErrFileNotFound = "mcp.err-file-not-found"
- DescKeyMCPErrInvalidParams = "mcp.err-invalid-params"
- DescKeyMCPErrUnknownResource = "mcp.err-unknown-resource"
- DescKeyMCPErrUnknownTool = "mcp.err-unknown-tool"
- DescKeyMCPErrFailedMarshal = "mcp.err-failed-marshal"
+ // DescKeyMCPErrMethodNotFound is the text key for mcp err method not found
+ // messages.
+ DescKeyMCPErrMethodNotFound = "mcp.err-method-not-found"
+ // DescKeyMCPErrParse is the text key for mcp err parse messages.
+ DescKeyMCPErrParse = "mcp.err-parse"
+ // DescKeyMCPErrFileNotFound is the text key for mcp err file not found
+ // messages.
+ DescKeyMCPErrFileNotFound = "mcp.err-file-not-found"
+ // DescKeyMCPErrInvalidParams is the text key for mcp err invalid params
+ // messages.
+ DescKeyMCPErrInvalidParams = "mcp.err-invalid-params"
+ // DescKeyMCPErrUnknownResource is the text key for mcp err unknown resource
+ // messages.
+ DescKeyMCPErrUnknownResource = "mcp.err-unknown-resource"
+ // DescKeyMCPErrUnknownTool is the text key for mcp err unknown tool messages.
+ DescKeyMCPErrUnknownTool = "mcp.err-unknown-tool"
+ // DescKeyMCPErrFailedMarshal is the text key for mcp err failed marshal
+ // messages.
+ DescKeyMCPErrFailedMarshal = "mcp.err-failed-marshal"
+ // DescKeyMCPErrTypeContentRequired is the text key for mcp err type content
+ // required messages.
DescKeyMCPErrTypeContentRequired = "mcp.err-type-content-required"
- DescKeyMCPErrQueryRequired = "mcp.err-query-required"
- DescKeyMCPErrSearchRead = "mcp.err-search-read"
- DescKeyMCPErrUnknownPrompt = "mcp.err-unknown-prompt"
- DescKeyMCPErrURIRequired = "mcp.err-uri-required"
+ // DescKeyMCPErrQueryRequired is the text key for mcp err query required
+ // messages.
+ DescKeyMCPErrQueryRequired = "mcp.err-query-required"
+ // DescKeyMCPErrSearchRead is the text key for mcp err search read messages.
+ DescKeyMCPErrSearchRead = "mcp.err-search-read"
+ // DescKeyMCPErrUnknownPrompt is the text key for mcp err unknown prompt
+ // messages.
+ DescKeyMCPErrUnknownPrompt = "mcp.err-unknown-prompt"
+ // DescKeyMCPErrURIRequired is the text key for mcp err uri required messages.
+ DescKeyMCPErrURIRequired = "mcp.err-uri-required"
)
diff --git a/internal/config/embed/text/mcp_event.go b/internal/config/embed/text/mcp_event.go
index 068ccb7b0..d73e23fc3 100644
--- a/internal/config/embed/text/mcp_event.go
+++ b/internal/config/embed/text/mcp_event.go
@@ -8,6 +8,10 @@ package text
// DescKeys for MCP event output.
const (
+ // DescKeyMCPEventTypeRequired is the text key for mcp event type required
+ // messages.
DescKeyMCPEventTypeRequired = "mcp.event-type-required"
- DescKeyMCPUnknownEventType = "mcp.unknown-event-type"
+ // DescKeyMCPUnknownEventType is the text key for mcp unknown event type
+ // messages.
+ DescKeyMCPUnknownEventType = "mcp.unknown-event-type"
)
diff --git a/internal/config/embed/text/mcp_format.go b/internal/config/embed/text/mcp_format.go
index 6a2b5ca5a..85650186f 100644
--- a/internal/config/embed/text/mcp_format.go
+++ b/internal/config/embed/text/mcp_format.go
@@ -8,12 +8,25 @@ package text
// DescKeys for MCP formatting.
const (
- DescKeyMCPFormatSection = "mcp.format-section"
+ // DescKeyMCPFormatSection is the text key for mcp format section messages.
+ DescKeyMCPFormatSection = "mcp.format-section"
+ // DescKeyMCPFormatWatchCompleted is the text key for mcp format watch
+ // completed messages.
DescKeyMCPFormatWatchCompleted = "mcp.format-watch-completed"
- DescKeyMCPFormatWrote = "mcp.format-wrote"
- DescKeyMCPFormatCompacted = "mcp.format-compacted"
- DescKeyMCPFormatSessionStats = "mcp.format-session-stats"
- DescKeyMCPFormatPendingItem = "mcp.format-pending-item"
- DescKeyMCPFormatReminderItem = "mcp.format-reminder-item"
+ // DescKeyMCPFormatWrote is the text key for mcp format wrote messages.
+ DescKeyMCPFormatWrote = "mcp.format-wrote"
+ // DescKeyMCPFormatCompacted is the text key for mcp format compacted messages.
+ DescKeyMCPFormatCompacted = "mcp.format-compacted"
+ // DescKeyMCPFormatSessionStats is the text key for mcp format session stats
+ // messages.
+ DescKeyMCPFormatSessionStats = "mcp.format-session-stats"
+ // DescKeyMCPFormatPendingItem is the text key for mcp format pending item
+ // messages.
+ DescKeyMCPFormatPendingItem = "mcp.format-pending-item"
+ // DescKeyMCPFormatReminderItem is the text key for mcp format reminder item
+ // messages.
+ DescKeyMCPFormatReminderItem = "mcp.format-reminder-item"
+ // DescKeyMCPFormatReminderNotDue is the text key for mcp format reminder not
+ // due messages.
DescKeyMCPFormatReminderNotDue = "mcp.format-reminder-not-due"
)
diff --git a/internal/config/embed/text/mcp_io.go b/internal/config/embed/text/mcp_io.go
index 9c5cd3c16..cc52edbc0 100644
--- a/internal/config/embed/text/mcp_io.go
+++ b/internal/config/embed/text/mcp_io.go
@@ -8,6 +8,8 @@ package text
// DescKeys for MCP I/O messages.
const (
+ // DescKeyRelayPrefixFormat is the text key for relay prefix format messages.
DescKeyRelayPrefixFormat = "relay.prefix-format"
- DescKeyMCPPacketHeader = "mcp.packet-header"
+ // DescKeyMCPPacketHeader is the text key for mcp packet header messages.
+ DescKeyMCPPacketHeader = "mcp.packet-header"
)
diff --git a/internal/config/embed/text/mcp_journal.go b/internal/config/embed/text/mcp_journal.go
index 1189a8714..2e2c6a47e 100644
--- a/internal/config/embed/text/mcp_journal.go
+++ b/internal/config/embed/text/mcp_journal.go
@@ -8,8 +8,16 @@ package text
// DescKeys for MCP journal output.
const (
- DescKeyMCPJournalSourceItemFormat = "mcp.journal-source-item-format"
- DescKeyMCPJournalSourceProjectFormat = "mcp.journal-source-project-format"
+ // DescKeyMCPJournalSourceItemFormat is the text key for mcp journal source
+ // item format messages.
+ DescKeyMCPJournalSourceItemFormat = "mcp.journal-source-item-format"
+ // DescKeyMCPJournalSourceProjectFormat is the text key for mcp journal source
+ // project format messages.
+ DescKeyMCPJournalSourceProjectFormat = "mcp.journal-source-project-format"
+ // DescKeyMCPJournalSourceDurationFormat is the text key for mcp journal
+ // source duration format messages.
DescKeyMCPJournalSourceDurationFormat = "mcp.journal-source-duration-format"
+ // DescKeyMCPJournalSourceFirstMsgFormat is the text key for mcp journal
+ // source first msg format messages.
DescKeyMCPJournalSourceFirstMsgFormat = "mcp.journal-source-first-msg-format"
)
diff --git a/internal/config/embed/text/mcp_pending.go b/internal/config/embed/text/mcp_pending.go
index c25612910..fae6c0cfd 100644
--- a/internal/config/embed/text/mcp_pending.go
+++ b/internal/config/embed/text/mcp_pending.go
@@ -8,7 +8,11 @@ package text
// DescKeys for MCP pending items.
const (
+ // DescKeyMCPPendingUpdatesFormat is the text key for mcp pending updates
+ // format messages.
DescKeyMCPPendingUpdatesFormat = "mcp.pending-updates-format"
- DescKeyMCPReviewPending = "mcp.review-pending"
- DescKeyMCPNoPending = "mcp.no-pending"
+ // DescKeyMCPReviewPending is the text key for mcp review pending messages.
+ DescKeyMCPReviewPending = "mcp.review-pending"
+ // DescKeyMCPNoPending is the text key for mcp no pending messages.
+ DescKeyMCPNoPending = "mcp.no-pending"
)
diff --git a/internal/config/embed/text/mcp_prompt.go b/internal/config/embed/text/mcp_prompt.go
index aa2a020a2..6b51bf5c9 100644
--- a/internal/config/embed/text/mcp_prompt.go
+++ b/internal/config/embed/text/mcp_prompt.go
@@ -8,74 +8,150 @@ package text
// DescKeys for MCP prompt descriptions.
const (
+ // DescKeyMCPPromptSessionStartDesc is the text key for mcp prompt session
+ // start desc messages.
DescKeyMCPPromptSessionStartDesc = "mcp.prompt-session-start-desc"
- DescKeyMCPPromptAddDecisionDesc = "mcp.prompt-add-decision-desc"
- DescKeyMCPPromptAddLearningDesc = "mcp.prompt-add-learning-desc"
- DescKeyMCPPromptReflectDesc = "mcp.prompt-reflect-desc"
- DescKeyMCPPromptCheckpointDesc = "mcp.prompt-checkpoint-desc"
+ // DescKeyMCPPromptAddDecisionDesc is the text key for mcp prompt add decision
+ // desc messages.
+ DescKeyMCPPromptAddDecisionDesc = "mcp.prompt-add-decision-desc"
+ // DescKeyMCPPromptAddLearningDesc is the text key for mcp prompt add learning
+ // desc messages.
+ DescKeyMCPPromptAddLearningDesc = "mcp.prompt-add-learning-desc"
+ // DescKeyMCPPromptReflectDesc is the text key for mcp prompt reflect desc
+ // messages.
+ DescKeyMCPPromptReflectDesc = "mcp.prompt-reflect-desc"
+ // DescKeyMCPPromptCheckpointDesc is the text key for mcp prompt checkpoint
+ // desc messages.
+ DescKeyMCPPromptCheckpointDesc = "mcp.prompt-checkpoint-desc"
)
// DescKeys for MCP prompt arguments.
const (
- DescKeyMCPPromptArgDecisionTitle = "mcp.prompt-arg-decision-title"
- DescKeyMCPPromptArgDecisionCtx = "mcp.prompt-arg-decision-ctx"
- DescKeyMCPPromptArgDecisionRat = "mcp.prompt-arg-decision-rationale"
+ // DescKeyMCPPromptArgDecisionTitle is the text key for mcp prompt arg
+ // decision title messages.
+ DescKeyMCPPromptArgDecisionTitle = "mcp.prompt-arg-decision-title"
+ // DescKeyMCPPromptArgDecisionCtx is the text key for mcp prompt arg decision
+ // ctx messages.
+ DescKeyMCPPromptArgDecisionCtx = "mcp.prompt-arg-decision-ctx"
+ // DescKeyMCPPromptArgDecisionRat is the text key for mcp prompt arg decision
+ // rationale messages.
+ DescKeyMCPPromptArgDecisionRat = "mcp.prompt-arg-decision-rationale"
+ // DescKeyMCPPromptArgDecisionConseq is the text key for mcp prompt arg
+ // decision consequence messages.
DescKeyMCPPromptArgDecisionConseq = "mcp.prompt-arg-decision-consequence"
- DescKeyMCPPromptArgLearningTitle = "mcp.prompt-arg-learning-title"
- DescKeyMCPPromptArgLearningCtx = "mcp.prompt-arg-learning-ctx"
+ // DescKeyMCPPromptArgLearningTitle is the text key for mcp prompt arg
+ // learning title messages.
+ DescKeyMCPPromptArgLearningTitle = "mcp.prompt-arg-learning-title"
+ // DescKeyMCPPromptArgLearningCtx is the text key for mcp prompt arg learning
+ // ctx messages.
+ DescKeyMCPPromptArgLearningCtx = "mcp.prompt-arg-learning-ctx"
+ // DescKeyMCPPromptArgLearningLesson is the text key for mcp prompt arg
+ // learning lesson messages.
DescKeyMCPPromptArgLearningLesson = "mcp.prompt-arg-learning-lesson"
- DescKeyMCPPromptArgLearningApp = "mcp.prompt-arg-learning-app"
+ // DescKeyMCPPromptArgLearningApp is the text key for mcp prompt arg learning
+ // app messages.
+ DescKeyMCPPromptArgLearningApp = "mcp.prompt-arg-learning-app"
)
// DescKeys for MCP session-start prompt layout.
const (
- DescKeyMCPPromptSessionStartHeader = "mcp.prompt-session-start-header"
- DescKeyMCPPromptSessionStartFooter = "mcp.prompt-session-start-footer"
+ // DescKeyMCPPromptSessionStartHeader is the text key for mcp prompt session
+ // start header messages.
+ DescKeyMCPPromptSessionStartHeader = "mcp.prompt-session-start-header"
+ // DescKeyMCPPromptSessionStartFooter is the text key for mcp prompt session
+ // start footer messages.
+ DescKeyMCPPromptSessionStartFooter = "mcp.prompt-session-start-footer"
+ // DescKeyMCPPromptSessionStartResultD is the text key for mcp prompt session
+ // start result description messages.
DescKeyMCPPromptSessionStartResultD = "mcp.prompt-session-start-result-desc"
- DescKeyMCPPromptSectionFormat = "mcp.prompt-section-format"
+ // DescKeyMCPPromptSectionFormat is the text key for mcp prompt section format
+ // messages.
+ DescKeyMCPPromptSectionFormat = "mcp.prompt-section-format"
)
// DescKeys for MCP add-decision prompt.
const (
- DescKeyMCPPromptAddDecisionHeader = "mcp.prompt-add-decision-header"
+ // DescKeyMCPPromptAddDecisionHeader is the text key for mcp prompt add
+ // decision header messages.
+ DescKeyMCPPromptAddDecisionHeader = "mcp.prompt-add-decision-header"
+ // DescKeyMCPPromptAddDecisionFieldFmt is the text key for mcp prompt add
+ // decision field format messages.
DescKeyMCPPromptAddDecisionFieldFmt = "mcp.prompt-add-decision-field-format"
)
// DescKeys for MCP prompt field labels.
const (
- DescKeyMCPPromptLabelDecision = "mcp.prompt-label-decision"
- DescKeyMCPPromptLabelContext = "mcp.prompt-label-context"
- DescKeyMCPPromptLabelRationale = "mcp.prompt-label-rationale"
+ // DescKeyMCPPromptLabelDecision is the text key for mcp prompt label decision
+ // messages.
+ DescKeyMCPPromptLabelDecision = "mcp.prompt-label-decision"
+ // DescKeyMCPPromptLabelContext is the text key for mcp prompt label context
+ // messages.
+ DescKeyMCPPromptLabelContext = "mcp.prompt-label-context"
+ // DescKeyMCPPromptLabelRationale is the text key for mcp prompt label
+ // rationale messages.
+ DescKeyMCPPromptLabelRationale = "mcp.prompt-label-rationale"
+ // DescKeyMCPPromptLabelConsequence is the text key for mcp prompt label
+ // consequence messages.
DescKeyMCPPromptLabelConsequence = "mcp.prompt-label-consequence"
- DescKeyMCPPromptLabelLearning = "mcp.prompt-label-learning"
- DescKeyMCPPromptLabelLesson = "mcp.prompt-label-lesson"
+ // DescKeyMCPPromptLabelLearning is the text key for mcp prompt label learning
+ // messages.
+ DescKeyMCPPromptLabelLearning = "mcp.prompt-label-learning"
+ // DescKeyMCPPromptLabelLesson is the text key for mcp prompt label lesson
+ // messages.
+ DescKeyMCPPromptLabelLesson = "mcp.prompt-label-lesson"
+ // DescKeyMCPPromptLabelApplication is the text key for mcp prompt label
+ // application messages.
DescKeyMCPPromptLabelApplication = "mcp.prompt-label-application"
)
// DescKeys for MCP add-decision prompt result.
const (
- DescKeyMCPPromptAddDecisionFooter = "mcp.prompt-add-decision-footer"
+ // DescKeyMCPPromptAddDecisionFooter is the text key for mcp prompt add
+ // decision footer messages.
+ DescKeyMCPPromptAddDecisionFooter = "mcp.prompt-add-decision-footer"
+ // DescKeyMCPPromptAddDecisionResultD is the text key for mcp prompt add
+ // decision result description messages.
DescKeyMCPPromptAddDecisionResultD = "mcp.prompt-add-decision-result-desc"
)
// DescKeys for MCP add-learning prompt.
const (
- DescKeyMCPPromptAddLearningHeader = "mcp.prompt-add-learning-header"
+ // DescKeyMCPPromptAddLearningHeader is the text key for mcp prompt add
+ // learning header messages.
+ DescKeyMCPPromptAddLearningHeader = "mcp.prompt-add-learning-header"
+ // DescKeyMCPPromptAddLearningFieldFmt is the text key for mcp prompt add
+ // learning field format messages.
DescKeyMCPPromptAddLearningFieldFmt = "mcp.prompt-add-learning-field-format"
- DescKeyMCPPromptAddLearningFooter = "mcp.prompt-add-learning-footer"
- DescKeyMCPPromptAddLearningResultD = "mcp.prompt-add-learning-result-desc"
+ // DescKeyMCPPromptAddLearningFooter is the text key for mcp prompt add
+ // learning footer messages.
+ DescKeyMCPPromptAddLearningFooter = "mcp.prompt-add-learning-footer"
+ // DescKeyMCPPromptAddLearningResultD is the text key for mcp prompt add
+ // learning result description messages.
+ DescKeyMCPPromptAddLearningResultD = "mcp.prompt-add-learning-result-desc"
)
// DescKeys for MCP reflect prompt.
const (
- DescKeyMCPPromptReflectBody = "mcp.prompt-reflect-body"
+ // DescKeyMCPPromptReflectBody is the text key for mcp prompt reflect body
+ // messages.
+ DescKeyMCPPromptReflectBody = "mcp.prompt-reflect-body"
+ // DescKeyMCPPromptReflectResultD is the text key for mcp prompt reflect
+ // result description messages.
DescKeyMCPPromptReflectResultD = "mcp.prompt-reflect-result-desc"
)
// DescKeys for MCP checkpoint prompt.
const (
- DescKeyMCPPromptCheckpointHeader = "mcp.prompt-checkpoint-header"
+ // DescKeyMCPPromptCheckpointHeader is the text key for mcp prompt checkpoint
+ // header messages.
+ DescKeyMCPPromptCheckpointHeader = "mcp.prompt-checkpoint-header"
+ // DescKeyMCPPromptCheckpointStatsFormat is the text key for mcp prompt
+ // checkpoint stats format messages.
DescKeyMCPPromptCheckpointStatsFormat = "mcp.prompt-checkpoint-stats-format"
- DescKeyMCPPromptCheckpointSteps = "mcp.prompt-checkpoint-steps"
- DescKeyMCPPromptCheckpointResultD = "mcp.prompt-checkpoint-result-desc"
+ // DescKeyMCPPromptCheckpointSteps is the text key for mcp prompt checkpoint
+ // steps messages.
+ DescKeyMCPPromptCheckpointSteps = "mcp.prompt-checkpoint-steps"
+ // DescKeyMCPPromptCheckpointResultD is the text key for mcp prompt checkpoint
+ // result description messages.
+ DescKeyMCPPromptCheckpointResultD = "mcp.prompt-checkpoint-result-desc"
)
diff --git a/internal/config/embed/text/mcp_remind.go b/internal/config/embed/text/mcp_remind.go
index 5cf39940b..c717cf905 100644
--- a/internal/config/embed/text/mcp_remind.go
+++ b/internal/config/embed/text/mcp_remind.go
@@ -8,6 +8,8 @@ package text
// DescKeys for MCP reminder output.
const (
- DescKeyMCPNoReminders = "mcp.no-reminders"
+ // DescKeyMCPNoReminders is the text key for mcp no reminders messages.
+ DescKeyMCPNoReminders = "mcp.no-reminders"
+ // DescKeyMCPRemindersFormat is the text key for mcp reminders format messages.
DescKeyMCPRemindersFormat = "mcp.reminders-format"
)
diff --git a/internal/config/embed/text/mcp_res.go b/internal/config/embed/text/mcp_res.go
index 38a17f300..4309a2d7c 100644
--- a/internal/config/embed/text/mcp_res.go
+++ b/internal/config/embed/text/mcp_res.go
@@ -8,13 +8,22 @@ package text
// DescKeys for MCP resource output.
const (
+ // DescKeyMCPResConstitution is the text key for mcp res constitution messages.
DescKeyMCPResConstitution = "mcp.res-constitution"
- DescKeyMCPResTasks = "mcp.res-tasks"
- DescKeyMCPResConventions = "mcp.res-conventions"
+ // DescKeyMCPResTasks is the text key for mcp res tasks messages.
+ DescKeyMCPResTasks = "mcp.res-tasks"
+ // DescKeyMCPResConventions is the text key for mcp res conventions messages.
+ DescKeyMCPResConventions = "mcp.res-conventions"
+ // DescKeyMCPResArchitecture is the text key for mcp res architecture messages.
DescKeyMCPResArchitecture = "mcp.res-architecture"
- DescKeyMCPResDecisions = "mcp.res-decisions"
- DescKeyMCPResLearnings = "mcp.res-learnings"
- DescKeyMCPResGlossary = "mcp.res-glossary"
- DescKeyMCPResPlaybook = "mcp.res-playbook"
- DescKeyMCPResAgent = "mcp.res-agent"
+ // DescKeyMCPResDecisions is the text key for mcp res decisions messages.
+ DescKeyMCPResDecisions = "mcp.res-decisions"
+ // DescKeyMCPResLearnings is the text key for mcp res learnings messages.
+ DescKeyMCPResLearnings = "mcp.res-learnings"
+ // DescKeyMCPResGlossary is the text key for mcp res glossary messages.
+ DescKeyMCPResGlossary = "mcp.res-glossary"
+ // DescKeyMCPResPlaybook is the text key for mcp res playbook messages.
+ DescKeyMCPResPlaybook = "mcp.res-playbook"
+ // DescKeyMCPResAgent is the text key for mcp res agent messages.
+ DescKeyMCPResAgent = "mcp.res-agent"
)
diff --git a/internal/config/embed/text/mcp_session.go b/internal/config/embed/text/mcp_session.go
index f762e97b9..0d6860037 100644
--- a/internal/config/embed/text/mcp_session.go
+++ b/internal/config/embed/text/mcp_session.go
@@ -8,7 +8,12 @@ package text
// DescKeys for MCP session output.
const (
+ // DescKeyMCPSessionStartedCallerFormat is the text key for mcp session
+ // started caller format messages.
DescKeyMCPSessionStartedCallerFormat = "mcp.session-started-caller-format"
- DescKeyMCPSessionStartedFormat = "mcp.session-started-format"
- DescKeyMCPSessionEnding = "mcp.session-ending"
+ // DescKeyMCPSessionStartedFormat is the text key for mcp session started
+ // format messages.
+ DescKeyMCPSessionStartedFormat = "mcp.session-started-format"
+ // DescKeyMCPSessionEnding is the text key for mcp session ending messages.
+ DescKeyMCPSessionEnding = "mcp.session-ending"
)
diff --git a/internal/config/embed/text/mcp_status.go b/internal/config/embed/text/mcp_status.go
index 179254691..2272d9572 100644
--- a/internal/config/embed/text/mcp_status.go
+++ b/internal/config/embed/text/mcp_status.go
@@ -8,29 +8,46 @@ package text
// DescKeys for MCP status format strings.
const (
- DescKeyMCPAddedFormat = "mcp.added-format"
- DescKeyMCPCompletedFormat = "mcp.completed-format"
+ // DescKeyMCPAddedFormat is the text key for mcp added format messages.
+ DescKeyMCPAddedFormat = "mcp.added-format"
+ // DescKeyMCPCompletedFormat is the text key for mcp completed format messages.
+ DescKeyMCPCompletedFormat = "mcp.completed-format"
+ // DescKeyMCPStatusContextFormat is the text key for mcp status context format
+ // messages.
DescKeyMCPStatusContextFormat = "mcp.status-context-format"
- DescKeyMCPStatusFilesFormat = "mcp.status-files-format"
- DescKeyMCPStatusUsageFormat = "mcp.status-usage-format"
- DescKeyMCPStatusFileFormat = "mcp.status-file-format"
+ // DescKeyMCPStatusFilesFormat is the text key for mcp status files format
+ // messages.
+ DescKeyMCPStatusFilesFormat = "mcp.status-files-format"
+ // DescKeyMCPStatusUsageFormat is the text key for mcp status usage format
+ // messages.
+ DescKeyMCPStatusUsageFormat = "mcp.status-usage-format"
+ // DescKeyMCPStatusFileFormat is the text key for mcp status file format
+ // messages.
+ DescKeyMCPStatusFileFormat = "mcp.status-file-format"
)
// DescKeys for MCP status state labels.
const (
- DescKeyMCPStatusOK = "mcp.status-ok"
+ // DescKeyMCPStatusOK is the text key for mcp status ok messages.
+ DescKeyMCPStatusOK = "mcp.status-ok"
+ // DescKeyMCPStatusEmpty is the text key for mcp status empty messages.
DescKeyMCPStatusEmpty = "mcp.status-empty"
)
// DescKeys for MCP review and omission output.
const (
- DescKeyMCPAlsoNoted = "mcp.also-noted"
+ // DescKeyMCPAlsoNoted is the text key for mcp also noted messages.
+ DescKeyMCPAlsoNoted = "mcp.also-noted"
+ // DescKeyMCPOmittedFormat is the text key for mcp omitted format messages.
DescKeyMCPOmittedFormat = "mcp.omitted-format"
- DescKeyMCPReviewStatus = "mcp.review-status"
- DescKeyMCPCompactClean = "mcp.compact-clean"
+ // DescKeyMCPReviewStatus is the text key for mcp review status messages.
+ DescKeyMCPReviewStatus = "mcp.review-status"
+ // DescKeyMCPCompactClean is the text key for mcp compact clean messages.
+ DescKeyMCPCompactClean = "mcp.compact-clean"
)
// DescKeys for MCP confirmation prompts.
const (
+ // DescKeyConfirmProceed is the text key for confirm proceed messages.
DescKeyConfirmProceed = "confirm.proceed"
)
diff --git a/internal/config/embed/text/mcp_task.go b/internal/config/embed/text/mcp_task.go
index d678ebe66..e5a426476 100644
--- a/internal/config/embed/text/mcp_task.go
+++ b/internal/config/embed/text/mcp_task.go
@@ -8,9 +8,16 @@ package text
// DescKeys for MCP task output.
const (
- DescKeyMCPNoTasks = "mcp.no-tasks"
- DescKeyMCPNextTaskFormat = "mcp.next-task-format"
+ // DescKeyMCPNoTasks is the text key for mcp no tasks messages.
+ DescKeyMCPNoTasks = "mcp.no-tasks"
+ // DescKeyMCPNextTaskFormat is the text key for mcp next task format messages.
+ DescKeyMCPNextTaskFormat = "mcp.next-task-format"
+ // DescKeyMCPAllTasksComplete is the text key for mcp all tasks complete
+ // messages.
DescKeyMCPAllTasksComplete = "mcp.all-tasks-complete"
- DescKeyMCPCheckTaskFormat = "mcp.check-task-format"
- DescKeyMCPCheckTaskHint = "mcp.check-task-hint"
+ // DescKeyMCPCheckTaskFormat is the text key for mcp check task format
+ // messages.
+ DescKeyMCPCheckTaskFormat = "mcp.check-task-format"
+ // DescKeyMCPCheckTaskHint is the text key for mcp check task hint messages.
+ DescKeyMCPCheckTaskHint = "mcp.check-task-hint"
)
diff --git a/internal/config/embed/text/mcp_tool.go b/internal/config/embed/text/mcp_tool.go
index c98e9c511..c95db99ec 100644
--- a/internal/config/embed/text/mcp_tool.go
+++ b/internal/config/embed/text/mcp_tool.go
@@ -8,47 +8,107 @@ package text
// DescKeys for MCP tool output.
const (
- DescKeyMCPToolStatusDesc = "mcp.tool-status-desc"
- DescKeyMCPToolAddDesc = "mcp.tool-add-desc"
- DescKeyMCPToolCompleteDesc = "mcp.tool-complete-desc"
- DescKeyMCPToolDriftDesc = "mcp.tool-drift-desc"
+ // DescKeyMCPToolStatusDesc is the text key for mcp tool status desc messages.
+ DescKeyMCPToolStatusDesc = "mcp.tool-status-desc"
+ // DescKeyMCPToolAddDesc is the text key for mcp tool add desc messages.
+ DescKeyMCPToolAddDesc = "mcp.tool-add-desc"
+ // DescKeyMCPToolCompleteDesc is the text key for mcp tool complete desc
+ // messages.
+ DescKeyMCPToolCompleteDesc = "mcp.tool-complete-desc"
+ // DescKeyMCPToolDriftDesc is the text key for mcp tool drift desc messages.
+ DescKeyMCPToolDriftDesc = "mcp.tool-drift-desc"
+ // DescKeyMCPToolJournalSourceDesc is the text key for mcp tool journal source
+ // desc messages.
DescKeyMCPToolJournalSourceDesc = "mcp.tool-journal-source-desc"
- DescKeyMCPToolWatchUpdateDesc = "mcp.tool-watch-update-desc"
- DescKeyMCPToolCompactDesc = "mcp.tool-compact-desc"
- DescKeyMCPToolNextDesc = "mcp.tool-next-desc"
- DescKeyMCPToolCheckTaskDesc = "mcp.tool-check-task-desc"
- DescKeyMCPToolSessionDesc = "mcp.tool-session-desc"
- DescKeyMCPToolRemindDesc = "mcp.tool-remind-desc"
- DescKeyMCPToolPropType = "mcp.tool-prop-type"
- DescKeyMCPToolPropContent = "mcp.tool-prop-content"
- DescKeyMCPToolPropPriority = "mcp.tool-prop-priority"
- DescKeyMCPToolPropContext = "mcp.tool-prop-context"
- DescKeyMCPToolPropRationale = "mcp.tool-prop-rationale"
- DescKeyMCPToolPropConseq = "mcp.tool-prop-consequence"
- DescKeyMCPToolPropLesson = "mcp.tool-prop-lesson"
- DescKeyMCPToolPropApplication = "mcp.tool-prop-application"
- DescKeyMCPToolPropQuery = "mcp.tool-prop-query"
- DescKeyMCPToolPropLimit = "mcp.tool-prop-limit"
- DescKeyMCPToolPropSince = "mcp.tool-prop-since"
- DescKeyMCPToolPropEntryType = "mcp.tool-prop-entry-type"
- DescKeyMCPToolPropMainContent = "mcp.tool-prop-main-content"
- DescKeyMCPToolPropCtxBg = "mcp.tool-prop-ctx-background"
- DescKeyMCPToolPropArchive = "mcp.tool-prop-archive"
- DescKeyMCPToolPropRecentAct = "mcp.tool-prop-recent-action"
- DescKeyMCPToolPropEventType = "mcp.tool-prop-event-type"
- DescKeyMCPToolPropCaller = "mcp.tool-prop-caller"
- DescKeyMCPToolSteeringGetDesc = "mcp.tool-steering-get-desc"
- DescKeyMCPToolSearchDesc = "mcp.tool-search-desc"
- DescKeyMCPToolSessionStartDesc = "mcp.tool-session-start-desc"
- DescKeyMCPToolSessionEndDesc = "mcp.tool-session-end-desc"
- DescKeyMCPToolPropPrompt = "mcp.tool-prop-prompt"
- DescKeyMCPToolPropSearchQuery = "mcp.tool-prop-search-query"
- DescKeyMCPToolPropSummary = "mcp.tool-prop-summary"
+ // DescKeyMCPToolWatchUpdateDesc is the text key for mcp tool watch update
+ // desc messages.
+ DescKeyMCPToolWatchUpdateDesc = "mcp.tool-watch-update-desc"
+ // DescKeyMCPToolCompactDesc is the text key for mcp tool compact desc
+ // messages.
+ DescKeyMCPToolCompactDesc = "mcp.tool-compact-desc"
+ // DescKeyMCPToolNextDesc is the text key for mcp tool next desc messages.
+ DescKeyMCPToolNextDesc = "mcp.tool-next-desc"
+ // DescKeyMCPToolCheckTaskDesc is the text key for mcp tool check task desc
+ // messages.
+ DescKeyMCPToolCheckTaskDesc = "mcp.tool-check-task-desc"
+ // DescKeyMCPToolSessionDesc is the text key for mcp tool session desc
+ // messages.
+ DescKeyMCPToolSessionDesc = "mcp.tool-session-desc"
+ // DescKeyMCPToolRemindDesc is the text key for mcp tool remind desc messages.
+ DescKeyMCPToolRemindDesc = "mcp.tool-remind-desc"
+ // DescKeyMCPToolPropType is the text key for mcp tool prop type messages.
+ DescKeyMCPToolPropType = "mcp.tool-prop-type"
+ // DescKeyMCPToolPropContent is the text key for mcp tool prop content
+ // messages.
+ DescKeyMCPToolPropContent = "mcp.tool-prop-content"
+ // DescKeyMCPToolPropPriority is the text key for mcp tool prop priority
+ // messages.
+ DescKeyMCPToolPropPriority = "mcp.tool-prop-priority"
+ // DescKeyMCPToolPropContext is the text key for mcp tool prop context
+ // messages.
+ DescKeyMCPToolPropContext = "mcp.tool-prop-context"
+ // DescKeyMCPToolPropRationale is the text key for mcp tool prop rationale
+ // messages.
+ DescKeyMCPToolPropRationale = "mcp.tool-prop-rationale"
+ // DescKeyMCPToolPropConseq is the text key for mcp tool prop consequence messages.
+ DescKeyMCPToolPropConseq = "mcp.tool-prop-consequence"
+ // DescKeyMCPToolPropLesson is the text key for mcp tool prop lesson messages.
+ DescKeyMCPToolPropLesson = "mcp.tool-prop-lesson"
+ // DescKeyMCPToolPropApplication is the text key for mcp tool prop application
+ // messages.
+ DescKeyMCPToolPropApplication = "mcp.tool-prop-application"
+ // DescKeyMCPToolPropQuery is the text key for mcp tool prop query messages.
+ DescKeyMCPToolPropQuery = "mcp.tool-prop-query"
+ // DescKeyMCPToolPropLimit is the text key for mcp tool prop limit messages.
+ DescKeyMCPToolPropLimit = "mcp.tool-prop-limit"
+ // DescKeyMCPToolPropSince is the text key for mcp tool prop since messages.
+ DescKeyMCPToolPropSince = "mcp.tool-prop-since"
+ // DescKeyMCPToolPropEntryType is the text key for mcp tool prop entry type
+ // messages.
+ DescKeyMCPToolPropEntryType = "mcp.tool-prop-entry-type"
+ // DescKeyMCPToolPropMainContent is the text key for mcp tool prop main
+ // content messages.
+ DescKeyMCPToolPropMainContent = "mcp.tool-prop-main-content"
+ // DescKeyMCPToolPropCtxBg is the text key for mcp tool prop ctx background messages.
+ DescKeyMCPToolPropCtxBg = "mcp.tool-prop-ctx-background"
+ // DescKeyMCPToolPropArchive is the text key for mcp tool prop archive
+ // messages.
+ DescKeyMCPToolPropArchive = "mcp.tool-prop-archive"
+ // DescKeyMCPToolPropRecentAct is the text key for mcp tool prop recent
+ // action messages.
+ DescKeyMCPToolPropRecentAct = "mcp.tool-prop-recent-action"
+ // DescKeyMCPToolPropEventType is the text key for mcp tool prop event type
+ // messages.
+ DescKeyMCPToolPropEventType = "mcp.tool-prop-event-type"
+ // DescKeyMCPToolPropCaller is the text key for mcp tool prop caller messages.
+ DescKeyMCPToolPropCaller = "mcp.tool-prop-caller"
+ // DescKeyMCPToolSteeringGetDesc is the text key for mcp tool steering get
+ // desc messages.
+ DescKeyMCPToolSteeringGetDesc = "mcp.tool-steering-get-desc"
+ // DescKeyMCPToolSearchDesc is the text key for mcp tool search desc messages.
+ DescKeyMCPToolSearchDesc = "mcp.tool-search-desc"
+ // DescKeyMCPToolSessionStartDesc is the text key for mcp tool session start
+ // desc messages.
+ DescKeyMCPToolSessionStartDesc = "mcp.tool-session-start-desc"
+ // DescKeyMCPToolSessionEndDesc is the text key for mcp tool session end desc
+ // messages.
+ DescKeyMCPToolSessionEndDesc = "mcp.tool-session-end-desc"
+ // DescKeyMCPToolPropPrompt is the text key for mcp tool prop prompt messages.
+ DescKeyMCPToolPropPrompt = "mcp.tool-prop-prompt"
+ // DescKeyMCPToolPropSearchQuery is the text key for mcp tool prop search
+ // query messages.
+ DescKeyMCPToolPropSearchQuery = "mcp.tool-prop-search-query"
+ // DescKeyMCPToolPropSummary is the text key for mcp tool prop summary
+ // messages.
+ DescKeyMCPToolPropSummary = "mcp.tool-prop-summary"
)
// DescKeys for MCP handler steering/search output.
const (
+ // DescKeyMCPSteeringSection is the text key for mcp steering section messages.
DescKeyMCPSteeringSection = "mcp.steering-section"
- DescKeyMCPSearchHitLine = "mcp.search-hit-line"
- DescKeyMCPSearchNoMatch = "mcp.search-no-match"
+ // DescKeyMCPSearchHitLine is the text key for mcp search hit line messages.
+ DescKeyMCPSearchHitLine = "mcp.search-hit-line"
+ // DescKeyMCPSearchNoMatch is the text key for mcp search no match messages.
+ DescKeyMCPSearchNoMatch = "mcp.search-no-match"
)
diff --git a/internal/config/embed/text/mcp_validate.go b/internal/config/embed/text/mcp_validate.go
index 8b852f94f..5c2a36f25 100644
--- a/internal/config/embed/text/mcp_validate.go
+++ b/internal/config/embed/text/mcp_validate.go
@@ -8,7 +8,12 @@ package text
// DescKeys for MCP validation output.
const (
- DescKeyMCPInvalidSinceDate = "mcp.invalid-since-date"
- DescKeyMCPNoSessions = "mcp.no-sessions"
+ // DescKeyMCPInvalidSinceDate is the text key for mcp invalid since date
+ // messages.
+ DescKeyMCPInvalidSinceDate = "mcp.invalid-since-date"
+ // DescKeyMCPNoSessions is the text key for mcp no sessions messages.
+ DescKeyMCPNoSessions = "mcp.no-sessions"
+ // DescKeyMCPSessionsFoundFormat is the text key for mcp sessions found format
+ // messages.
DescKeyMCPSessionsFoundFormat = "mcp.sessions-found-format"
)
diff --git a/internal/config/embed/text/memory.go b/internal/config/embed/text/memory.go
index 179dab899..f6dde3763 100644
--- a/internal/config/embed/text/memory.go
+++ b/internal/config/embed/text/memory.go
@@ -8,40 +8,79 @@ package text
// DescKeys for memory diff and import.
const (
+ // DescKeyMemoryDiffOldFormat is the text key for memory diff old format
+ // messages.
DescKeyMemoryDiffOldFormat = "memory.diff-old-format"
+ // DescKeyMemoryDiffNewFormat is the text key for memory diff new format
+ // messages.
DescKeyMemoryDiffNewFormat = "memory.diff-new-format"
- DescKeyMemoryImportSource = "memory.import-source"
+ // DescKeyMemoryImportSource is the text key for memory import source messages.
+ DescKeyMemoryImportSource = "memory.import-source"
)
// DescKeys for memory publish sections.
const (
+ // DescKeyMemoryPublishTitle is the text key for memory publish title messages.
DescKeyMemoryPublishTitle = "memory.publish-title"
+ // DescKeyMemoryPublishTasks is the text key for memory publish tasks messages.
DescKeyMemoryPublishTasks = "memory.publish-tasks"
- DescKeyMemoryPublishDec = "memory.publish-decisions"
- DescKeyMemoryPublishConv = "memory.publish-conventions"
- DescKeyMemoryPublishLrn = "memory.publish-learnings"
+ // DescKeyMemoryPublishDec is the text key for memory publish decisions
+ // messages.
+ DescKeyMemoryPublishDec = "memory.publish-decisions"
+ // DescKeyMemoryPublishConv is the text key for memory publish conventions
+ // messages.
+ DescKeyMemoryPublishConv = "memory.publish-conventions"
+ // DescKeyMemoryPublishLrn is the text key for memory publish learnings
+ // messages.
+ DescKeyMemoryPublishLrn = "memory.publish-learnings"
)
// DescKeys for memory import review.
const (
+ // DescKeyMemoryImportReview is the text key for memory import review messages.
DescKeyMemoryImportReview = "memory.import-review"
)
// DescKeys for memory operations write output.
const (
- DescKeyWriteMemoryArchives = "write.memory-archives"
- DescKeyWriteMemoryBridgeHeader = "write.memory-bridge-header"
- DescKeyWriteMemoryDriftDetected = "write.memory-drift-detected"
- DescKeyWriteMemoryDriftNone = "write.memory-drift-none"
- DescKeyWriteMemoryLastSync = "write.memory-last-sync"
- DescKeyWriteMemoryLastSyncNever = "write.memory-last-sync-never"
- DescKeyWriteMemoryMirror = "write.memory-mirror"
- DescKeyWriteMemoryMirrorLines = "write.memory-mirror-lines"
- DescKeyWriteMemoryMirrorNotSynced = "write.memory-mirror-not-synced"
- DescKeyWriteMemoryNoChanges = "write.memory-no-changes"
- DescKeyWriteMemorySource = "write.memory-source"
- DescKeyWriteMemorySourceLines = "write.memory-source-lines"
- DescKeyWriteMemorySourceLinesDrift = "write.memory-source-lines-drift"
- DescKeyWriteMemorySourceNotActive = "write.memory-source-not-active"
+ // DescKeyWriteMemoryArchives is the text key for write memory archives
+ // messages.
+ DescKeyWriteMemoryArchives = "write.memory-archives"
+ // DescKeyWriteMemoryBridgeHeader is the text key for write memory bridge
+ // header messages.
+ DescKeyWriteMemoryBridgeHeader = "write.memory-bridge-header"
+ // DescKeyWriteMemoryDriftDetected is the text key for write memory drift
+ // detected messages.
+ DescKeyWriteMemoryDriftDetected = "write.memory-drift-detected"
+ // DescKeyWriteMemoryDriftNone is the text key for write memory drift none
+ // messages.
+ DescKeyWriteMemoryDriftNone = "write.memory-drift-none"
+ // DescKeyWriteMemoryLastSync is the text key for write memory last sync
+ // messages.
+ DescKeyWriteMemoryLastSync = "write.memory-last-sync"
+ // DescKeyWriteMemoryLastSyncNever is the text key for write memory last sync
+ // never messages.
+ DescKeyWriteMemoryLastSyncNever = "write.memory-last-sync-never"
+ // DescKeyWriteMemoryMirror is the text key for write memory mirror messages.
+ DescKeyWriteMemoryMirror = "write.memory-mirror"
+ // DescKeyWriteMemoryMirrorLines is the text key for write memory mirror lines
+ // messages.
+ DescKeyWriteMemoryMirrorLines = "write.memory-mirror-lines"
+ // DescKeyWriteMemoryMirrorNotSynced is the text key for write memory mirror
+ // not synced messages.
+ DescKeyWriteMemoryMirrorNotSynced = "write.memory-mirror-not-synced"
+ // DescKeyWriteMemoryNoChanges is the text key for write memory no changes
+ // messages.
+ DescKeyWriteMemoryNoChanges = "write.memory-no-changes"
+ // DescKeyWriteMemorySource is the text key for write memory source messages.
+ DescKeyWriteMemorySource = "write.memory-source"
+ // DescKeyWriteMemorySourceLines is the text key for write memory source lines
+ // messages.
+ DescKeyWriteMemorySourceLines = "write.memory-source-lines"
+ // DescKeyWriteMemorySourceLinesDrift is the text key for write memory source
+ // lines drift messages.
+ DescKeyWriteMemorySourceLinesDrift = "write.memory-source-lines-drift"
+ // DescKeyWriteMemorySourceNotActive is the text key for write memory source
+ // not active messages.
+ DescKeyWriteMemorySourceNotActive = "write.memory-source-not-active"
+ // DescKeyWriteMemorySourceNotActiveErr is the text key for write memory
+ // source not active err messages.
DescKeyWriteMemorySourceNotActiveErr = "write.memory-source-not-active-err"
)
diff --git a/internal/config/embed/text/message.go b/internal/config/embed/text/message.go
index cbc31fa7d..8356b8b64 100644
--- a/internal/config/embed/text/message.go
+++ b/internal/config/embed/text/message.go
@@ -8,34 +8,60 @@ package text
// DescKeys for message edit hints.
const (
+ // DescKeyMessageCtxSpecificWarning is the text key for message ctx specific
+ // warning messages.
DescKeyMessageCtxSpecificWarning = "message.ctx-specific-warning"
- DescKeyMessageEditHint = "message.edit-hint"
+ // DescKeyMessageEditHint is the text key for message edit hint messages.
+ DescKeyMessageEditHint = "message.edit-hint"
)
// DescKeys for message list table headers.
const (
+ // DescKeyMessageListHeaderCategory is the text key for message list header
+ // category messages.
DescKeyMessageListHeaderCategory = "message.list-header-category"
- DescKeyMessageListHeaderHook = "message.list-header-hook"
+ // DescKeyMessageListHeaderHook is the text key for message list header hook
+ // messages.
+ DescKeyMessageListHeaderHook = "message.list-header-hook"
+ // DescKeyMessageListHeaderOverride is the text key for message list header
+ // override messages.
DescKeyMessageListHeaderOverride = "message.list-header-override"
- DescKeyMessageListHeaderVariant = "message.list-header-variant"
+ // DescKeyMessageListHeaderVariant is the text key for message list header
+ // variant messages.
+ DescKeyMessageListHeaderVariant = "message.list-header-variant"
)
// DescKeys for message override operations.
const (
- DescKeyMessageNoOverride = "message.no-override"
+ // DescKeyMessageNoOverride is the text key for message no override messages.
+ DescKeyMessageNoOverride = "message.no-override"
+ // DescKeyMessageOverrideCreated is the text key for message override created
+ // messages.
DescKeyMessageOverrideCreated = "message.override-created"
- DescKeyMessageOverrideLabel = "message.override-label"
+ // DescKeyMessageOverrideLabel is the text key for message override label
+ // messages.
+ DescKeyMessageOverrideLabel = "message.override-label"
+ // DescKeyMessageOverrideRemoved is the text key for message override removed
+ // messages.
DescKeyMessageOverrideRemoved = "message.override-removed"
)
// DescKeys for message source labels.
const (
- DescKeyMessageSourceDefault = "message.source-default"
+ // DescKeyMessageSourceDefault is the text key for message source default
+ // messages.
+ DescKeyMessageSourceDefault = "message.source-default"
+ // DescKeyMessageSourceOverride is the text key for message source override
+ // messages.
DescKeyMessageSourceOverride = "message.source-override"
)
// DescKeys for message template variables.
const (
+ // DescKeyMessageTemplateVarsLabel is the text key for message template vars
+ // label messages.
DescKeyMessageTemplateVarsLabel = "message.template-vars-label"
- DescKeyMessageTemplateVarsNone = "message.template-vars-none"
+ // DescKeyMessageTemplateVarsNone is the text key for message template vars
+ // none messages.
+ DescKeyMessageTemplateVarsNone = "message.template-vars-none"
)
diff --git a/internal/config/embed/text/nudge.go b/internal/config/embed/text/nudge.go
index 04c391a0e..c305d41a8 100644
--- a/internal/config/embed/text/nudge.go
+++ b/internal/config/embed/text/nudge.go
@@ -8,12 +8,18 @@ package text
// DescKeys for specs nudge messages.
const (
- DescKeySpecsNudgeFallback = "specs-nudge.fallback"
+ // DescKeySpecsNudgeFallback is the text key for specs nudge fallback messages.
+ DescKeySpecsNudgeFallback = "specs-nudge.fallback"
+ // DescKeySpecsNudgeNudgeMessage is the text key for the specs-nudge nudge
+ // message.
DescKeySpecsNudgeNudgeMessage = "specs-nudge.nudge-message"
)
// DescKeys for QA reminder messages.
const (
- DescKeyQaReminderFallback = "qa-reminder.fallback"
+ // DescKeyQaReminderFallback is the text key for qa reminder fallback messages.
+ DescKeyQaReminderFallback = "qa-reminder.fallback"
+ // DescKeyQaReminderRelayMessage is the text key for the qa reminder relay
+ // message.
DescKeyQaReminderRelayMessage = "qa-reminder.relay-message"
)
diff --git a/internal/config/embed/text/obsidian.go b/internal/config/embed/text/obsidian.go
index c066877dc..af8b769ea 100644
--- a/internal/config/embed/text/obsidian.go
+++ b/internal/config/embed/text/obsidian.go
@@ -8,13 +8,23 @@ package text
// Obsidian vault headings and labels (headings.yaml).
const (
+ // DescKeyHeadingObsidianRelated is the text key for heading obsidian related
+ // messages.
DescKeyHeadingObsidianRelated = "heading.obsidian-related"
- DescKeyLabelObsidianSeeAlso = "label.obsidian-see-also"
+ // DescKeyLabelObsidianSeeAlso is the text key for label obsidian see also
+ // messages.
+ DescKeyLabelObsidianSeeAlso = "label.obsidian-see-also"
)
// DescKeys for Obsidian vault write output.
const (
- DescKeyWriteObsidianGenerated = "write.obsidian-generated"
+ // DescKeyWriteObsidianGenerated is the text key for write obsidian generated
+ // messages.
+ DescKeyWriteObsidianGenerated = "write.obsidian-generated"
+ // DescKeyWriteObsidianNextStepsHeading is the text key for write obsidian
+ // next steps heading messages.
DescKeyWriteObsidianNextStepsHeading = "write.obsidian-next-steps-heading"
- DescKeyWriteObsidianNextSteps = "write.obsidian-next-steps"
+ // DescKeyWriteObsidianNextSteps is the text key for write obsidian next steps
+ // messages.
+ DescKeyWriteObsidianNextSteps = "write.obsidian-next-steps"
)
diff --git a/internal/config/embed/text/pad.go b/internal/config/embed/text/pad.go
index d8f08e2f0..f6368cab9 100644
--- a/internal/config/embed/text/pad.go
+++ b/internal/config/embed/text/pad.go
@@ -8,65 +8,135 @@ package text
// DescKeys for scratchpad merge output.
const (
- DescKeyWritePadMergeAdded = "write.pad-merge-added"
- DescKeyWritePadMergeBinaryWarning = "write.pad-merge-binary-warning"
- DescKeyWritePadMergeBlobConflict = "write.pad-merge-blob-conflict"
- DescKeyWritePadMergeDone1Entry = "write.pad-merge-done-1-entry"
- DescKeyWritePadMergeDoneNEntries = "write.pad-merge-done-n-entries"
- DescKeyWritePadMergeDryRun1Entry = "write.pad-merge-dry-run-1-entry"
+ // DescKeyWritePadMergeAdded is the text key for write pad merge added
+ // messages.
+ DescKeyWritePadMergeAdded = "write.pad-merge-added"
+ // DescKeyWritePadMergeBinaryWarning is the text key for write pad merge
+ // binary warning messages.
+ DescKeyWritePadMergeBinaryWarning = "write.pad-merge-binary-warning"
+ // DescKeyWritePadMergeBlobConflict is the text key for write pad merge blob
+ // conflict messages.
+ DescKeyWritePadMergeBlobConflict = "write.pad-merge-blob-conflict"
+ // DescKeyWritePadMergeDone1Entry is the text key for write pad merge done
+ // 1 entry messages.
+ DescKeyWritePadMergeDone1Entry = "write.pad-merge-done-1-entry"
+ // DescKeyWritePadMergeDoneNEntries is the text key for write pad merge done n
+ // entries messages.
+ DescKeyWritePadMergeDoneNEntries = "write.pad-merge-done-n-entries"
+ // DescKeyWritePadMergeDryRun1Entry is the text key for write pad merge dry
+ // run 1 entry messages.
+ DescKeyWritePadMergeDryRun1Entry = "write.pad-merge-dry-run-1-entry"
+ // DescKeyWritePadMergeDryRunNEntries is the text key for write pad merge dry
+ // run n entries messages.
DescKeyWritePadMergeDryRunNEntries = "write.pad-merge-dry-run-n-entries"
- DescKeyWritePadMergeDupe = "write.pad-merge-dupe"
- DescKeyWritePadMergeNone = "write.pad-merge-none"
- DescKeyWritePadMergeNoneNew = "write.pad-merge-none-new"
- DescKeyWritePadMergeSkipped1 = "write.pad-merge-skipped-1"
- DescKeyWritePadMergeSkippedN = "write.pad-merge-skipped-n"
+ // DescKeyWritePadMergeDupe is the text key for write pad merge dupe messages.
+ DescKeyWritePadMergeDupe = "write.pad-merge-dupe"
+ // DescKeyWritePadMergeNone is the text key for write pad merge none messages.
+ DescKeyWritePadMergeNone = "write.pad-merge-none"
+ // DescKeyWritePadMergeNoneNew is the text key for write pad merge none new
+ // messages.
+ DescKeyWritePadMergeNoneNew = "write.pad-merge-none-new"
+ // DescKeyWritePadMergeSkipped1 is the text key for write pad merge skipped
+ // 1 messages.
+ DescKeyWritePadMergeSkipped1 = "write.pad-merge-skipped-1"
+ // DescKeyWritePadMergeSkippedN is the text key for write pad merge skipped n
+ // messages.
+ DescKeyWritePadMergeSkippedN = "write.pad-merge-skipped-n"
)
// DescKeys for scratchpad blob import output.
const (
- DescKeyWritePadImportBlobAdded = "write.pad-import-blob-added"
- DescKeyWritePadImportBlobNone = "write.pad-import-blob-none"
- DescKeyWritePadImportBlobSkipped = "write.pad-import-blob-skipped"
- DescKeyWritePadImportBlobSummary = "write.pad-import-blob-summary"
+ // DescKeyWritePadImportBlobAdded is the text key for write pad import blob
+ // added messages.
+ DescKeyWritePadImportBlobAdded = "write.pad-import-blob-added"
+ // DescKeyWritePadImportBlobNone is the text key for write pad import blob
+ // none messages.
+ DescKeyWritePadImportBlobNone = "write.pad-import-blob-none"
+ // DescKeyWritePadImportBlobSkipped is the text key for write pad import blob
+ // skipped messages.
+ DescKeyWritePadImportBlobSkipped = "write.pad-import-blob-skipped"
+ // DescKeyWritePadImportBlobSummary is the text key for write pad import blob
+ // summary messages.
+ DescKeyWritePadImportBlobSummary = "write.pad-import-blob-summary"
+ // DescKeyWritePadImportBlobTooLarge is the text key for write pad import blob
+ // too large messages.
DescKeyWritePadImportBlobTooLarge = "write.pad-import-blob-too-large"
+ // DescKeyWritePadImportCloseWarning is the text key for write pad import
+ // close warning messages.
DescKeyWritePadImportCloseWarning = "write.pad-import-close-warning"
- DescKeyWritePadImportDone = "write.pad-import-done"
- DescKeyWritePadImportNone = "write.pad-import-none"
+ // DescKeyWritePadImportDone is the text key for write pad import done
+ // messages.
+ DescKeyWritePadImportDone = "write.pad-import-done"
+ // DescKeyWritePadImportNone is the text key for write pad import none
+ // messages.
+ DescKeyWritePadImportNone = "write.pad-import-none"
)
// DescKeys for scratchpad entry mutation output.
const (
- DescKeyWritePadEntryAdded = "write.pad-entry-added"
- DescKeyWritePadEntryMoved = "write.pad-entry-moved"
+ // DescKeyWritePadEntryAdded is the text key for write pad entry added
+ // messages.
+ DescKeyWritePadEntryAdded = "write.pad-entry-added"
+ // DescKeyWritePadEntryMoved is the text key for write pad entry moved
+ // messages.
+ DescKeyWritePadEntryMoved = "write.pad-entry-moved"
+ // DescKeyWritePadEntryRemoved is the text key for write pad entry removed
+ // messages.
DescKeyWritePadEntryRemoved = "write.pad-entry-removed"
+ // DescKeyWritePadEntryUpdated is the text key for write pad entry updated
+ // messages.
DescKeyWritePadEntryUpdated = "write.pad-entry-updated"
)
// DescKeys for scratchpad export output.
const (
- DescKeyWritePadExportDone = "write.pad-export-done"
- DescKeyWritePadExportNone = "write.pad-export-none"
- DescKeyWritePadExportPlan = "write.pad-export-plan"
- DescKeyWritePadExportSummary = "write.pad-export-summary"
- DescKeyWritePadExportVerbDone = "write.pad-export-verb-done"
- DescKeyWritePadExportVerbDryRun = "write.pad-export-verb-dry-run"
+ // DescKeyWritePadExportDone is the text key for write pad export done
+ // messages.
+ DescKeyWritePadExportDone = "write.pad-export-done"
+ // DescKeyWritePadExportNone is the text key for write pad export none
+ // messages.
+ DescKeyWritePadExportNone = "write.pad-export-none"
+ // DescKeyWritePadExportPlan is the text key for write pad export plan
+ // messages.
+ DescKeyWritePadExportPlan = "write.pad-export-plan"
+ // DescKeyWritePadExportSummary is the text key for write pad export summary
+ // messages.
+ DescKeyWritePadExportSummary = "write.pad-export-summary"
+ // DescKeyWritePadExportVerbDone is the text key for write pad export verb
+ // done messages.
+ DescKeyWritePadExportVerbDone = "write.pad-export-verb-done"
+ // DescKeyWritePadExportVerbDryRun is the text key for write pad export verb
+ // dry run messages.
+ DescKeyWritePadExportVerbDryRun = "write.pad-export-verb-dry-run"
+ // DescKeyWritePadExportWriteFailed is the text key for write pad export write
+ // failed messages.
DescKeyWritePadExportWriteFailed = "write.pad-export-write-failed"
)
// DescKeys for scratchpad list and blob output.
const (
+ // DescKeyWritePadBlobWritten is the text key for write pad blob written
+ // messages.
DescKeyWritePadBlobWritten = "write.pad-blob-written"
- DescKeyWritePadEmpty = "write.pad-empty"
- DescKeyWritePadListItem = "write.pad-list-item"
+ // DescKeyWritePadEmpty is the text key for write pad empty messages.
+ DescKeyWritePadEmpty = "write.pad-empty"
+ // DescKeyWritePadListItem is the text key for write pad list item messages.
+ DescKeyWritePadListItem = "write.pad-list-item"
)
// DescKeys for scratchpad conflict resolution.
const (
- DescKeyWritePadResolveEntry = "write.pad-resolve-entry"
+ // DescKeyWritePadResolveEntry is the text key for write pad resolve entry
+ // messages.
+ DescKeyWritePadResolveEntry = "write.pad-resolve-entry"
+ // DescKeyWritePadResolveHeader is the text key for write pad resolve header
+ // messages.
DescKeyWritePadResolveHeader = "write.pad-resolve-header"
)
// DescKeys for scratchpad operations.
const (
+ // DescKeyWritePadKeyCreated is the text key for write pad key created
+ // messages.
DescKeyWritePadKeyCreated = "write.pad-key-created"
)
diff --git a/internal/config/embed/text/pause.go b/internal/config/embed/text/pause.go
index 9f85b3642..055360a47 100644
--- a/internal/config/embed/text/pause.go
+++ b/internal/config/embed/text/pause.go
@@ -8,9 +8,14 @@ package text
// DescKeys for pause/resume output.
const (
- DescKeyWritePaused = "write.paused"
+ // DescKeyWritePaused is the text key for write paused messages.
+ DescKeyWritePaused = "write.paused"
+ // DescKeyWritePausedMessage is the text key for the write paused message.
DescKeyWritePausedMessage = "write.paused-message"
- DescKeyWriteResumed = "write.resumed"
- DescKeyWriteSessionEvent = "write.session-event"
- DescKeyPauseConfirmed = "pause.confirmed"
+ // DescKeyWriteResumed is the text key for write resumed messages.
+ DescKeyWriteResumed = "write.resumed"
+ // DescKeyWriteSessionEvent is the text key for write session event messages.
+ DescKeyWriteSessionEvent = "write.session-event"
+ // DescKeyPauseConfirmed is the text key for pause confirmed messages.
+ DescKeyPauseConfirmed = "pause.confirmed"
)
diff --git a/internal/config/embed/text/philosophy.go b/internal/config/embed/text/philosophy.go
index a1c2e0521..e8039281c 100644
--- a/internal/config/embed/text/philosophy.go
+++ b/internal/config/embed/text/philosophy.go
@@ -8,15 +8,29 @@ package text
// DescKeys for philosophy display.
const (
+ // DescKeyWhyAdmonitionFormat is the text key for why admonition format
+ // messages.
DescKeyWhyAdmonitionFormat = "why.admonition-format"
- DescKeyWhyBanner = "why.banner"
+ // DescKeyWhyBanner is the text key for why banner messages.
+ DescKeyWhyBanner = "why.banner"
+ // DescKeyWhyBlockquotePrefix is the text key for why blockquote prefix
+ // messages.
DescKeyWhyBlockquotePrefix = "why.blockquote-prefix"
- DescKeyWhyBoldFormat = "why.bold-format"
- DescKeyWhyMenuItemFormat = "why.menu-item-format"
- DescKeyWhyMenuPrompt = "why.menu-prompt"
+ // DescKeyWhyBoldFormat is the text key for why bold format messages.
+ DescKeyWhyBoldFormat = "why.bold-format"
+ // DescKeyWhyMenuItemFormat is the text key for why menu item format messages.
+ DescKeyWhyMenuItemFormat = "why.menu-item-format"
+ // DescKeyWhyMenuPrompt is the text key for why menu prompt messages.
+ DescKeyWhyMenuPrompt = "why.menu-prompt"
// Why menu labels.
- DescKeyWriteWhyLabelManifesto = "write.why-label-manifesto"
- DescKeyWriteWhyLabelAbout = "write.why-label-about"
+ // DescKeyWriteWhyLabelManifesto is the text key for write why label manifesto
+ // messages.
+ DescKeyWriteWhyLabelManifesto = "write.why-label-manifesto"
+ // DescKeyWriteWhyLabelAbout is the text key for write why label about
+ // messages.
+ DescKeyWriteWhyLabelAbout = "write.why-label-about"
+ // DescKeyWriteWhyLabelInvariants is the text key for write why label
+ // invariants messages.
DescKeyWriteWhyLabelInvariants = "write.why-label-invariants"
)
diff --git a/internal/config/embed/text/post_commit.go b/internal/config/embed/text/post_commit.go
index 993867440..e20cae79c 100644
--- a/internal/config/embed/text/post_commit.go
+++ b/internal/config/embed/text/post_commit.go
@@ -8,16 +8,39 @@ package text
// DescKeys for post-commit hooks.
const (
- DescKeyPostCommitFallback = "post-commit.fallback"
- DescKeyPostCommitRelayMessage = "post-commit.relay-message"
- DescKeyPostCommitRelayPrefix = "post-commit.relay-prefix"
- DescKeyPostCommitMissingSpec = "post-commit.missing-spec"
- DescKeyPostCommitMissingSignoff = "post-commit.missing-signoff"
- DescKeyPostCommitMissingBody = "post-commit.missing-body"
- DescKeyPostCommitMissingTaskRef = "post-commit.missing-task-ref"
+ // DescKeyPostCommitFallback is the text key for post commit fallback messages.
+ DescKeyPostCommitFallback = "post-commit.fallback"
+ // DescKeyPostCommitRelayMessage is the text key for the post commit relay
+ // message.
+ DescKeyPostCommitRelayMessage = "post-commit.relay-message"
+ // DescKeyPostCommitRelayPrefix is the text key for post commit relay prefix
+ // messages.
+ DescKeyPostCommitRelayPrefix = "post-commit.relay-prefix"
+ // DescKeyPostCommitMissingSpec is the text key for post commit missing spec
+ // messages.
+ DescKeyPostCommitMissingSpec = "post-commit.missing-spec"
+ // DescKeyPostCommitMissingSignoff is the text key for post commit missing
+ // signoff messages.
+ DescKeyPostCommitMissingSignoff = "post-commit.missing-signoff"
+ // DescKeyPostCommitMissingBody is the text key for post commit missing body
+ // messages.
+ DescKeyPostCommitMissingBody = "post-commit.missing-body"
+ // DescKeyPostCommitMissingTaskRef is the text key for post commit missing
+ // task ref messages.
+ DescKeyPostCommitMissingTaskRef = "post-commit.missing-task-ref"
+ // DescKeyPostCommitMissingTaskUpdate is the text key for post commit missing
+ // task update messages.
DescKeyPostCommitMissingTaskUpdate = "post-commit.missing-task-update"
- DescKeyPostCommitSeverityInformal = "post-commit.severity-informal"
- DescKeyPostCommitSeveritySkipped = "post-commit.severity-bypassed"
- DescKeyPostCommitAuditTitle = "post-commit.audit-title"
- DescKeyPostCommitAuditContent = "post-commit.audit-content"
+ // DescKeyPostCommitSeverityInformal is the text key for post commit severity
+ // informal messages.
+ DescKeyPostCommitSeverityInformal = "post-commit.severity-informal"
+ // DescKeyPostCommitSeveritySkipped is the text key for post commit severity
+ // skipped messages.
+ DescKeyPostCommitSeveritySkipped = "post-commit.severity-bypassed"
+ // DescKeyPostCommitAuditTitle is the text key for post commit audit title
+ // messages.
+ DescKeyPostCommitAuditTitle = "post-commit.audit-title"
+ // DescKeyPostCommitAuditContent is the text key for post commit audit content
+ // messages.
+ DescKeyPostCommitAuditContent = "post-commit.audit-content"
)
diff --git a/internal/config/embed/text/prune.go b/internal/config/embed/text/prune.go
index dbeff33d7..d17938fd5 100644
--- a/internal/config/embed/text/prune.go
+++ b/internal/config/embed/text/prune.go
@@ -8,8 +8,13 @@ package text
// DescKeys for prune operations.
const (
- DescKeyPruneDryRunLine = "prune.dry-run-line"
+ // DescKeyPruneDryRunLine is the text key for prune dry run line messages.
+ DescKeyPruneDryRunLine = "prune.dry-run-line"
+ // DescKeyPruneDryRunSummary is the text key for prune dry run summary
+ // messages.
DescKeyPruneDryRunSummary = "prune.dry-run-summary"
- DescKeyPruneErrorLine = "prune.error-line"
- DescKeyPruneSummary = "prune.summary"
+ // DescKeyPruneErrorLine is the text key for prune error line messages.
+ DescKeyPruneErrorLine = "prune.error-line"
+ // DescKeyPruneSummary is the text key for prune summary messages.
+ DescKeyPruneSummary = "prune.summary"
)
diff --git a/internal/config/embed/text/publish.go b/internal/config/embed/text/publish.go
index 8c53f0a2b..67039d396 100644
--- a/internal/config/embed/text/publish.go
+++ b/internal/config/embed/text/publish.go
@@ -8,21 +8,40 @@ package text
// DescKeys for publish write output.
const (
- DescKeyWritePublishBlock = "write.publish-block"
- DescKeyWritePublishBudget = "write.publish-budget"
+ // DescKeyWritePublishBlock is the text key for write publish block messages.
+ DescKeyWritePublishBlock = "write.publish-block"
+ // DescKeyWritePublishBudget is the text key for write publish budget messages.
+ DescKeyWritePublishBudget = "write.publish-budget"
+ // DescKeyWritePublishConventions is the text key for write publish
+ // conventions messages.
DescKeyWritePublishConventions = "write.publish-conventions"
- DescKeyWritePublishDecisions = "write.publish-decisions"
- DescKeyWritePublishDone = "write.publish-done"
- DescKeyWritePublishDryRun = "write.publish-dry-run"
- DescKeyWritePublishHeader = "write.publish-header"
- DescKeyWritePublishLearnings = "write.publish-learnings"
+ // DescKeyWritePublishDecisions is the text key for write publish decisions
+ // messages.
+ DescKeyWritePublishDecisions = "write.publish-decisions"
+ // DescKeyWritePublishDone is the text key for write publish done messages.
+ DescKeyWritePublishDone = "write.publish-done"
+ // DescKeyWritePublishDryRun is the text key for write publish dry run
+ // messages.
+ DescKeyWritePublishDryRun = "write.publish-dry-run"
+ // DescKeyWritePublishHeader is the text key for write publish header messages.
+ DescKeyWritePublishHeader = "write.publish-header"
+ // DescKeyWritePublishLearnings is the text key for write publish learnings
+ // messages.
+ DescKeyWritePublishLearnings = "write.publish-learnings"
+ // DescKeyWritePublishSourceFiles is the text key for write publish source
+ // files messages.
DescKeyWritePublishSourceFiles = "write.publish-source-files"
- DescKeyWritePublishTasks = "write.publish-tasks"
- DescKeyWritePublishTotal = "write.publish-total"
+ // DescKeyWritePublishTasks is the text key for write publish tasks messages.
+ DescKeyWritePublishTasks = "write.publish-tasks"
+ // DescKeyWritePublishTotal is the text key for write publish total messages.
+ DescKeyWritePublishTotal = "write.publish-total"
)
// DescKeys for unpublish write output.
const (
- DescKeyWriteUnpublishDone = "write.unpublish-done"
+ // DescKeyWriteUnpublishDone is the text key for write unpublish done messages.
+ DescKeyWriteUnpublishDone = "write.unpublish-done"
+ // DescKeyWriteUnpublishNotFound is the text key for write unpublish not found
+ // messages.
DescKeyWriteUnpublishNotFound = "write.unpublish-not-found"
)
diff --git a/internal/config/embed/text/reminder.go b/internal/config/embed/text/reminder.go
index 05f8a385e..7d28242ae 100644
--- a/internal/config/embed/text/reminder.go
+++ b/internal/config/embed/text/reminder.go
@@ -8,11 +8,22 @@ package text
// DescKeys for reminder display write output.
const (
- DescKeyWriteReminderAdded = "write.reminder-added"
- DescKeyWriteReminderAfterSuffix = "write.reminder-after-suffix"
- DescKeyWriteReminderDismissed = "write.reminder-dismissed"
+ // DescKeyWriteReminderAdded is the text key for write reminder added messages.
+ DescKeyWriteReminderAdded = "write.reminder-added"
+ // DescKeyWriteReminderAfterSuffix is the text key for write reminder after
+ // suffix messages.
+ DescKeyWriteReminderAfterSuffix = "write.reminder-after-suffix"
+ // DescKeyWriteReminderDismissed is the text key for write reminder dismissed
+ // messages.
+ DescKeyWriteReminderDismissed = "write.reminder-dismissed"
+ // DescKeyWriteReminderDismissedAll is the text key for write reminder
+ // dismissed all messages.
DescKeyWriteReminderDismissedAll = "write.reminder-dismissed-all"
- DescKeyWriteReminderItem = "write.reminder-item"
- DescKeyWriteReminderNone = "write.reminder-none"
- DescKeyWriteReminderNotDue = "write.reminder-not-due"
+ // DescKeyWriteReminderItem is the text key for write reminder item messages.
+ DescKeyWriteReminderItem = "write.reminder-item"
+ // DescKeyWriteReminderNone is the text key for write reminder none messages.
+ DescKeyWriteReminderNone = "write.reminder-none"
+ // DescKeyWriteReminderNotDue is the text key for write reminder not due
+ // messages.
+ DescKeyWriteReminderNotDue = "write.reminder-not-due"
)
diff --git a/internal/config/embed/text/resource.go b/internal/config/embed/text/resource.go
index 54a4120c5..913667a63 100644
--- a/internal/config/embed/text/resource.go
+++ b/internal/config/embed/text/resource.go
@@ -8,24 +8,52 @@ package text
// DescKeys for resource display.
const (
- DescKeyResourcesAlertDisk = "resources.alert-disk"
- DescKeyResourcesAlertLoad = "resources.alert-load"
- DescKeyResourcesAlertMemory = "resources.alert-memory"
- DescKeyResourcesAlertSwap = "resources.alert-swap"
- DescKeyResourcesAlertDanger = "resources.alert-danger"
+ // DescKeyResourcesAlertDisk is the text key for resources alert disk messages.
+ DescKeyResourcesAlertDisk = "resources.alert-disk"
+ // DescKeyResourcesAlertLoad is the text key for resources alert load messages.
+ DescKeyResourcesAlertLoad = "resources.alert-load"
+ // DescKeyResourcesAlertMemory is the text key for resources alert memory
+ // messages.
+ DescKeyResourcesAlertMemory = "resources.alert-memory"
+ // DescKeyResourcesAlertSwap is the text key for resources alert swap messages.
+ DescKeyResourcesAlertSwap = "resources.alert-swap"
+ // DescKeyResourcesAlertDanger is the text key for resources alert danger
+ // messages.
+ DescKeyResourcesAlertDanger = "resources.alert-danger"
+ // DescKeyResourcesAlertWarning is the text key for resources alert warning
+ // messages.
DescKeyResourcesAlertWarning = "resources.alert-warning"
- DescKeyResourcesAlerts = "resources.alerts"
- DescKeyResourcesAllClear = "resources.all-clear"
- DescKeyResourcesHeader = "resources.header"
- DescKeyResourcesSeparator = "resources.separator"
- DescKeyResourcesLabelDisk = "resources.label-disk"
- DescKeyResourcesLabelLoad = "resources.label-load"
- DescKeyResourcesLabelMemory = "resources.label-memory"
- DescKeyResourcesLabelSwap = "resources.label-swap"
- DescKeyResourcesLoadFormat = "resources.load-format"
- DescKeyResourcesValueFormat = "resources.value-format"
+ // DescKeyResourcesAlerts is the text key for resources alerts messages.
+ DescKeyResourcesAlerts = "resources.alerts"
+ // DescKeyResourcesAllClear is the text key for resources all clear messages.
+ DescKeyResourcesAllClear = "resources.all-clear"
+ // DescKeyResourcesHeader is the text key for resources header messages.
+ DescKeyResourcesHeader = "resources.header"
+ // DescKeyResourcesSeparator is the text key for resources separator messages.
+ DescKeyResourcesSeparator = "resources.separator"
+ // DescKeyResourcesLabelDisk is the text key for resources label disk messages.
+ DescKeyResourcesLabelDisk = "resources.label-disk"
+ // DescKeyResourcesLabelLoad is the text key for resources label load messages.
+ DescKeyResourcesLabelLoad = "resources.label-load"
+ // DescKeyResourcesLabelMemory is the text key for resources label memory
+ // messages.
+ DescKeyResourcesLabelMemory = "resources.label-memory"
+ // DescKeyResourcesLabelSwap is the text key for resources label swap messages.
+ DescKeyResourcesLabelSwap = "resources.label-swap"
+ // DescKeyResourcesLoadFormat is the text key for resources load format
+ // messages.
+ DescKeyResourcesLoadFormat = "resources.load-format"
+ // DescKeyResourcesValueFormat is the text key for resources value format
+ // messages.
+ DescKeyResourcesValueFormat = "resources.value-format"
+ // DescKeyResourcesStatusDanger is the text key for resources status danger
+ // messages.
DescKeyResourcesStatusDanger = "resources.status-danger"
- DescKeyResourcesStatusOk = "resources.status-ok"
- DescKeyResourcesStatusWarn = "resources.status-warn"
- DescKeyResourcesRowFormat = "resources.row-format"
+ // DescKeyResourcesStatusOk is the text key for resources status ok messages.
+ DescKeyResourcesStatusOk = "resources.status-ok"
+ // DescKeyResourcesStatusWarn is the text key for resources status warn
+ // messages.
+ DescKeyResourcesStatusWarn = "resources.status-warn"
+ // DescKeyResourcesRowFormat is the text key for resources row format messages.
+ DescKeyResourcesRowFormat = "resources.row-format"
)
diff --git a/internal/config/embed/text/restore.go b/internal/config/embed/text/restore.go
index 06aecaf82..7f7c397fa 100644
--- a/internal/config/embed/text/restore.go
+++ b/internal/config/embed/text/restore.go
@@ -8,14 +8,31 @@ package text
// DescKeys for restore operations write output.
const (
- DescKeyWriteRestoreAdded = "write.restore-added"
- DescKeyWriteRestoreDenyDroppedHeader = "write.restore-deny-dropped-header"
+ // DescKeyWriteRestoreAdded is the text key for write restore added messages.
+ DescKeyWriteRestoreAdded = "write.restore-added"
+ // DescKeyWriteRestoreDenyDroppedHeader is the text key for write restore deny
+ // dropped header messages.
+ DescKeyWriteRestoreDenyDroppedHeader = "write.restore-deny-dropped-header"
+ // DescKeyWriteRestoreDenyRestoredHeader is the text key for write restore
+ // deny restored header messages.
DescKeyWriteRestoreDenyRestoredHeader = "write.restore-deny-restored-header"
- DescKeyWriteRestoreDone = "write.restore-done"
- DescKeyWriteRestoreDroppedHeader = "write.restore-dropped-header"
- DescKeyWriteRestoreMatch = "write.restore-match"
- DescKeyWriteRestoreNoLocal = "write.restore-no-local"
- DescKeyWriteRestorePermMatch = "write.restore-perm-match"
- DescKeyWriteRestoreRemoved = "write.restore-removed"
- DescKeyWriteRestoreRestoredHeader = "write.restore-restored-header"
+ // DescKeyWriteRestoreDone is the text key for write restore done messages.
+ DescKeyWriteRestoreDone = "write.restore-done"
+ // DescKeyWriteRestoreDroppedHeader is the text key for write restore dropped
+ // header messages.
+ DescKeyWriteRestoreDroppedHeader = "write.restore-dropped-header"
+ // DescKeyWriteRestoreMatch is the text key for write restore match messages.
+ DescKeyWriteRestoreMatch = "write.restore-match"
+ // DescKeyWriteRestoreNoLocal is the text key for write restore no local
+ // messages.
+ DescKeyWriteRestoreNoLocal = "write.restore-no-local"
+ // DescKeyWriteRestorePermMatch is the text key for write restore perm match
+ // messages.
+ DescKeyWriteRestorePermMatch = "write.restore-perm-match"
+ // DescKeyWriteRestoreRemoved is the text key for write restore removed
+ // messages.
+ DescKeyWriteRestoreRemoved = "write.restore-removed"
+ // DescKeyWriteRestoreRestoredHeader is the text key for write restore
+ // restored header messages.
+ DescKeyWriteRestoreRestoredHeader = "write.restore-restored-header"
)
diff --git a/internal/config/embed/text/setup.go b/internal/config/embed/text/setup.go
index bc21b5589..c9e742192 100644
--- a/internal/config/embed/text/setup.go
+++ b/internal/config/embed/text/setup.go
@@ -8,13 +8,29 @@ package text
// DescKeys for setup wizard write output.
const (
- DescKeyWriteSetupDone = "write.setup-done"
- DescKeyWriteSetupPrompt = "write.setup-prompt"
- DescKeyWriteSetupDeployComplete = "write.setup-deploy-complete"
- DescKeyWriteSetupDeployMCP = "write.setup-deploy-mcp"
- DescKeyWriteSetupDeploySteering = "write.setup-deploy-steering"
- DescKeyWriteSetupDeployExists = "write.setup-deploy-exists"
- DescKeyWriteSetupDeployCreated = "write.setup-deploy-created"
- DescKeyWriteSetupDeploySynced = "write.setup-deploy-synced"
+ // DescKeyWriteSetupDone is the text key for write setup done messages.
+ DescKeyWriteSetupDone = "write.setup-done"
+ // DescKeyWriteSetupPrompt is the text key for write setup prompt messages.
+ DescKeyWriteSetupPrompt = "write.setup-prompt"
+ // DescKeyWriteSetupDeployComplete is the text key for write setup deploy
+ // complete messages.
+ DescKeyWriteSetupDeployComplete = "write.setup-deploy-complete"
+ // DescKeyWriteSetupDeployMCP is the text key for write setup deploy MCP
+ // messages.
+ DescKeyWriteSetupDeployMCP = "write.setup-deploy-mcp"
+ // DescKeyWriteSetupDeploySteering is the text key for write setup deploy
+ // steering messages.
+ DescKeyWriteSetupDeploySteering = "write.setup-deploy-steering"
+ // DescKeyWriteSetupDeployExists is the text key for write setup deploy exists
+ // messages.
+ DescKeyWriteSetupDeployExists = "write.setup-deploy-exists"
+ // DescKeyWriteSetupDeployCreated is the text key for write setup deploy
+ // created messages.
+ DescKeyWriteSetupDeployCreated = "write.setup-deploy-created"
+ // DescKeyWriteSetupDeploySynced is the text key for write setup deploy synced
+ // messages.
+ DescKeyWriteSetupDeploySynced = "write.setup-deploy-synced"
+ // DescKeyWriteSetupDeploySkipSteer is the text key for write setup deploy
+ // skip steer messages.
DescKeyWriteSetupDeploySkipSteer = "write.setup-deploy-skip-steer"
)
diff --git a/internal/config/embed/text/site.go b/internal/config/embed/text/site.go
index bef4bc5ea..a9f684b95 100644
--- a/internal/config/embed/text/site.go
+++ b/internal/config/embed/text/site.go
@@ -8,20 +8,38 @@ package text
// DescKeys for site feed generation.
const (
+ // DescKeySiteFeedGenerated is the text key for site feed generated messages.
DescKeySiteFeedGenerated = "site.feed-generated"
- DescKeySiteFeedSkipped = "site.feed-skipped"
- DescKeySiteFeedWarnings = "site.feed-warnings"
- DescKeySiteFeedItem = "site.feed-item"
+ // DescKeySiteFeedSkipped is the text key for site feed skipped messages.
+ DescKeySiteFeedSkipped = "site.feed-skipped"
+ // DescKeySiteFeedWarnings is the text key for site feed warnings messages.
+ DescKeySiteFeedWarnings = "site.feed-warnings"
+ // DescKeySiteFeedItem is the text key for site feed item messages.
+ DescKeySiteFeedItem = "site.feed-item"
)
// DescKeys for site generation skip reasons.
const (
- DescKeySiteSkipCannotRead = "site.skip-cannot-read"
+ // DescKeySiteSkipCannotRead is the text key for site skip cannot read
+ // messages.
+ DescKeySiteSkipCannotRead = "site.skip-cannot-read"
+ // DescKeySiteSkipNoFrontmatter is the text key for site skip no frontmatter
+ // messages.
DescKeySiteSkipNoFrontmatter = "site.skip-no-frontmatter"
- DescKeySiteSkipMalformed = "site.skip-malformed"
- DescKeySiteSkipParseError = "site.skip-parse-error"
- DescKeySiteSkipNotFinalized = "site.skip-not-finalized"
- DescKeySiteSkipMissingTitle = "site.skip-missing-title"
- DescKeySiteSkipMissingDate = "site.skip-missing-date"
- DescKeySiteWarnNoSummary = "site.warn-no-summary"
+ // DescKeySiteSkipMalformed is the text key for site skip malformed messages.
+ DescKeySiteSkipMalformed = "site.skip-malformed"
+ // DescKeySiteSkipParseError is the text key for site skip parse error
+ // messages.
+ DescKeySiteSkipParseError = "site.skip-parse-error"
+ // DescKeySiteSkipNotFinalized is the text key for site skip not finalized
+ // messages.
+ DescKeySiteSkipNotFinalized = "site.skip-not-finalized"
+ // DescKeySiteSkipMissingTitle is the text key for site skip missing title
+ // messages.
+ DescKeySiteSkipMissingTitle = "site.skip-missing-title"
+ // DescKeySiteSkipMissingDate is the text key for site skip missing date
+ // messages.
+ DescKeySiteSkipMissingDate = "site.skip-missing-date"
+ // DescKeySiteWarnNoSummary is the text key for site warn no summary messages.
+ DescKeySiteWarnNoSummary = "site.warn-no-summary"
)
diff --git a/internal/config/embed/text/skill.go b/internal/config/embed/text/skill.go
index 103338cbe..c5110aedb 100644
--- a/internal/config/embed/text/skill.go
+++ b/internal/config/embed/text/skill.go
@@ -8,11 +8,20 @@ package text
// DescKeys for skill display write output.
const (
- DescKeyWriteSkillLine = "write.skill-line"
- DescKeyWriteSkillsHeader = "write.skills-header"
+ // DescKeyWriteSkillLine is the text key for write skill line messages.
+ DescKeyWriteSkillLine = "write.skill-line"
+ // DescKeyWriteSkillsHeader is the text key for write skills header messages.
+ DescKeyWriteSkillsHeader = "write.skills-header"
+ // DescKeyWriteSkillInstalled is the text key for write skill installed
+ // messages.
DescKeyWriteSkillInstalled = "write.skill-installed"
+ // DescKeyWriteSkillEntryDesc is the text key for write skill entry desc
+ // messages.
DescKeyWriteSkillEntryDesc = "write.skill-entry-desc"
- DescKeyWriteSkillEntry = "write.skill-entry"
- DescKeyWriteSkillCount = "write.skill-count"
- DescKeyWriteSkillRemoved = "write.skill-removed"
+ // DescKeyWriteSkillEntry is the text key for write skill entry messages.
+ DescKeyWriteSkillEntry = "write.skill-entry"
+ // DescKeyWriteSkillCount is the text key for write skill count messages.
+ DescKeyWriteSkillCount = "write.skill-count"
+ // DescKeyWriteSkillRemoved is the text key for write skill removed messages.
+ DescKeyWriteSkillRemoved = "write.skill-removed"
)
diff --git a/internal/config/embed/text/stat.go b/internal/config/embed/text/stat.go
index ef24be82b..a93a75e25 100644
--- a/internal/config/embed/text/stat.go
+++ b/internal/config/embed/text/stat.go
@@ -8,7 +8,10 @@ package text
// DescKeys for statistics display.
const (
- DescKeyStatsEmpty = "stats.empty"
+ // DescKeyStatsEmpty is the text key for stats empty messages.
+ DescKeyStatsEmpty = "stats.empty"
+ // DescKeyStatsHeaderFormat is the text key for stats header format messages.
DescKeyStatsHeaderFormat = "stats.header-format"
- DescKeyStatsLineFormat = "stats.line-format"
+ // DescKeyStatsLineFormat is the text key for stats line format messages.
+ DescKeyStatsLineFormat = "stats.line-format"
)
diff --git a/internal/config/embed/text/status.go b/internal/config/embed/text/status.go
index 6a7c77f06..a71aa1490 100644
--- a/internal/config/embed/text/status.go
+++ b/internal/config/embed/text/status.go
@@ -8,13 +8,29 @@ package text
// DescKeys for status write output.
const (
- DescKeyWriteStatusEmpty = "write.status-empty"
+ // DescKeyWriteStatusEmpty is the text key for write status empty messages.
+ DescKeyWriteStatusEmpty = "write.status-empty"
+ // DescKeyWriteStatusActivityHeader is the text key for write status activity
+ // header messages.
DescKeyWriteStatusActivityHeader = "write.status-activity-header"
- DescKeyWriteStatusActivityItem = "write.status-activity-item"
- DescKeyWriteStatusDrift = "write.status-drift"
- DescKeyWriteStatusFileCompact = "write.status-file-compact"
- DescKeyWriteStatusFileVerbose = "write.status-file-verbose"
- DescKeyWriteStatusHeaderBlock = "write.status-header-block"
- DescKeyWriteStatusNoDrift = "write.status-no-drift"
- DescKeyWriteStatusPreviewLine = "write.status-preview-line"
+ // DescKeyWriteStatusActivityItem is the text key for write status activity
+ // item messages.
+ DescKeyWriteStatusActivityItem = "write.status-activity-item"
+ // DescKeyWriteStatusDrift is the text key for write status drift messages.
+ DescKeyWriteStatusDrift = "write.status-drift"
+ // DescKeyWriteStatusFileCompact is the text key for write status file compact
+ // messages.
+ DescKeyWriteStatusFileCompact = "write.status-file-compact"
+ // DescKeyWriteStatusFileVerbose is the text key for write status file verbose
+ // messages.
+ DescKeyWriteStatusFileVerbose = "write.status-file-verbose"
+ // DescKeyWriteStatusHeaderBlock is the text key for write status header block
+ // messages.
+ DescKeyWriteStatusHeaderBlock = "write.status-header-block"
+ // DescKeyWriteStatusNoDrift is the text key for write status no drift
+ // messages.
+ DescKeyWriteStatusNoDrift = "write.status-no-drift"
+ // DescKeyWriteStatusPreviewLine is the text key for write status preview line
+ // messages.
+ DescKeyWriteStatusPreviewLine = "write.status-preview-line"
)
diff --git a/internal/config/embed/text/steering.go b/internal/config/embed/text/steering.go
index 9e583d48a..2febc1a58 100644
--- a/internal/config/embed/text/steering.go
+++ b/internal/config/embed/text/steering.go
@@ -8,16 +8,40 @@ package text
// DescKeys for steering write output.
const (
- DescKeyWriteSteeringCreated = "write.steering-created"
- DescKeyWriteSteeringSkipped = "write.steering-skipped"
- DescKeyWriteSteeringInitSummary = "write.steering-init-summary"
- DescKeyWriteSteeringFileEntry = "write.steering-file-entry"
- DescKeyWriteSteeringFileCount = "write.steering-file-count"
- DescKeyWriteSteeringPreviewHead = "write.steering-preview-head"
+ // DescKeyWriteSteeringCreated is the text key for write steering created
+ // messages.
+ DescKeyWriteSteeringCreated = "write.steering-created"
+ // DescKeyWriteSteeringSkipped is the text key for write steering skipped
+ // messages.
+ DescKeyWriteSteeringSkipped = "write.steering-skipped"
+ // DescKeyWriteSteeringInitSummary is the text key for write steering init
+ // summary messages.
+ DescKeyWriteSteeringInitSummary = "write.steering-init-summary"
+ // DescKeyWriteSteeringFileEntry is the text key for write steering file entry
+ // messages.
+ DescKeyWriteSteeringFileEntry = "write.steering-file-entry"
+ // DescKeyWriteSteeringFileCount is the text key for write steering file count
+ // messages.
+ DescKeyWriteSteeringFileCount = "write.steering-file-count"
+ // DescKeyWriteSteeringPreviewHead is the text key for write steering preview
+ // head messages.
+ DescKeyWriteSteeringPreviewHead = "write.steering-preview-head"
+ // DescKeyWriteSteeringPreviewEntry is the text key for write steering preview
+ // entry messages.
DescKeyWriteSteeringPreviewEntry = "write.steering-preview-entry"
+ // DescKeyWriteSteeringPreviewCount is the text key for write steering preview
+ // count messages.
DescKeyWriteSteeringPreviewCount = "write.steering-preview-count"
- DescKeyWriteSteeringSyncWritten = "write.steering-sync-written"
- DescKeyWriteSteeringSyncSkipped = "write.steering-sync-skipped"
- DescKeyWriteSteeringSyncError = "write.steering-sync-error"
- DescKeyWriteSteeringSyncSummary = "write.steering-sync-summary"
+ // DescKeyWriteSteeringSyncWritten is the text key for write steering sync
+ // written messages.
+ DescKeyWriteSteeringSyncWritten = "write.steering-sync-written"
+ // DescKeyWriteSteeringSyncSkipped is the text key for write steering sync
+ // skipped messages.
+ DescKeyWriteSteeringSyncSkipped = "write.steering-sync-skipped"
+ // DescKeyWriteSteeringSyncError is the text key for write steering sync error
+ // messages.
+ DescKeyWriteSteeringSyncError = "write.steering-sync-error"
+ // DescKeyWriteSteeringSyncSummary is the text key for write steering sync
+ // summary messages.
+ DescKeyWriteSteeringSyncSummary = "write.steering-sync-summary"
)
diff --git a/internal/config/embed/text/summary.go b/internal/config/embed/text/summary.go
index 1f1558205..de7a09543 100644
--- a/internal/config/embed/text/summary.go
+++ b/internal/config/embed/text/summary.go
@@ -8,13 +8,22 @@ package text
// DescKeys for summary display.
const (
- DescKeySummaryActive = "summary.active"
- DescKeySummaryCompleted = "summary.completed"
- DescKeySummaryDecision = "summary.decision"
- DescKeySummaryDecisions = "summary.decisions"
- DescKeySummaryEmpty = "summary.empty"
+ // DescKeySummaryActive is the text key for summary active messages.
+ DescKeySummaryActive = "summary.active"
+ // DescKeySummaryCompleted is the text key for summary completed messages.
+ DescKeySummaryCompleted = "summary.completed"
+ // DescKeySummaryDecision is the text key for summary decision messages.
+ DescKeySummaryDecision = "summary.decision"
+ // DescKeySummaryDecisions is the text key for summary decisions messages.
+ DescKeySummaryDecisions = "summary.decisions"
+ // DescKeySummaryEmpty is the text key for summary empty messages.
+ DescKeySummaryEmpty = "summary.empty"
+ // DescKeySummaryInvariants is the text key for summary invariants messages.
DescKeySummaryInvariants = "summary.invariants"
- DescKeySummaryLoaded = "summary.loaded"
- DescKeySummaryTerm = "summary.term"
- DescKeySummaryTerms = "summary.terms"
+ // DescKeySummaryLoaded is the text key for summary loaded messages.
+ DescKeySummaryLoaded = "summary.loaded"
+ // DescKeySummaryTerm is the text key for summary term messages.
+ DescKeySummaryTerm = "summary.term"
+ // DescKeySummaryTerms is the text key for summary terms messages.
+ DescKeySummaryTerms = "summary.terms"
)
diff --git a/internal/config/embed/text/sync.go b/internal/config/embed/text/sync.go
index dd40511aa..f6dd68961 100644
--- a/internal/config/embed/text/sync.go
+++ b/internal/config/embed/text/sync.go
@@ -8,33 +8,61 @@ package text
// DescKeys for sync operations write output.
const (
- DescKeyWriteSynced = "write.synced"
- DescKeyWriteSyncAction = "write.sync-action"
- DescKeyWriteSyncDryRun = "write.sync-dry-run"
+ // DescKeyWriteSynced is the text key for write synced messages.
+ DescKeyWriteSynced = "write.synced"
+ // DescKeyWriteSyncAction is the text key for write sync action messages.
+ DescKeyWriteSyncAction = "write.sync-action"
+ // DescKeyWriteSyncDryRun is the text key for write sync dry run messages.
+ DescKeyWriteSyncDryRun = "write.sync-dry-run"
+ // DescKeyWriteSyncDryRunSummary is the text key for write sync dry run
+ // summary messages.
DescKeyWriteSyncDryRunSummary = "write.sync-dry-run-summary"
- DescKeyWriteSyncHeader = "write.sync-header"
- DescKeyWriteSyncInSync = "write.sync-in-sync"
- DescKeyWriteSyncSeparator = "write.sync-separator"
- DescKeyWriteSyncSuggestion = "write.sync-suggestion"
- DescKeyWriteSyncSummary = "write.sync-summary"
+ // DescKeyWriteSyncHeader is the text key for write sync header messages.
+ DescKeyWriteSyncHeader = "write.sync-header"
+ // DescKeyWriteSyncInSync is the text key for write sync in sync messages.
+ DescKeyWriteSyncInSync = "write.sync-in-sync"
+ // DescKeyWriteSyncSeparator is the text key for write sync separator messages.
+ DescKeyWriteSyncSeparator = "write.sync-separator"
+ // DescKeyWriteSyncSuggestion is the text key for write sync suggestion
+ // messages.
+ DescKeyWriteSyncSuggestion = "write.sync-suggestion"
+ // DescKeyWriteSyncSummary is the text key for write sync summary messages.
+ DescKeyWriteSyncSummary = "write.sync-summary"
)
// DescKeys for sync topic names.
const (
- DescKeySyncTopicEslint = "sync.topic.eslint"
- DescKeySyncTopicPrettier = "sync.topic.prettier"
- DescKeySyncTopicTSConfig = "sync.topic.tsconfig"
+ // DescKeySyncTopicEslint is the text key for sync topic eslint messages.
+ DescKeySyncTopicEslint = "sync.topic.eslint"
+ // DescKeySyncTopicPrettier is the text key for sync topic prettier messages.
+ DescKeySyncTopicPrettier = "sync.topic.prettier"
+ // DescKeySyncTopicTSConfig is the text key for sync topic tsconfig messages.
+ DescKeySyncTopicTSConfig = "sync.topic.tsconfig"
+ // DescKeySyncTopicEditorConfig is the text key for sync topic editorconfig
+ // messages.
DescKeySyncTopicEditorConfig = "sync.topic.editorconfig"
- DescKeySyncTopicMakefile = "sync.topic.makefile"
- DescKeySyncTopicDockerfile = "sync.topic.dockerfile"
+ // DescKeySyncTopicMakefile is the text key for sync topic makefile messages.
+ DescKeySyncTopicMakefile = "sync.topic.makefile"
+ // DescKeySyncTopicDockerfile is the text key for sync topic dockerfile
+ // messages.
+ DescKeySyncTopicDockerfile = "sync.topic.dockerfile"
)
// DescKeys for sync rule descriptions.
const (
- DescKeySyncDepsDescription = "sync.deps.description"
- DescKeySyncDepsSuggestion = "sync.deps.suggestion"
+ // DescKeySyncDepsDescription is the text key for sync deps description
+ // messages.
+ DescKeySyncDepsDescription = "sync.deps.description"
+ // DescKeySyncDepsSuggestion is the text key for sync deps suggestion messages.
+ DescKeySyncDepsSuggestion = "sync.deps.suggestion"
+ // DescKeySyncConfigDescription is the text key for sync config description
+ // messages.
DescKeySyncConfigDescription = "sync.config.description"
- DescKeySyncConfigSuggestion = "sync.config.suggestion"
- DescKeySyncDirDescription = "sync.dir.description"
- DescKeySyncDirSuggestion = "sync.dir.suggestion"
+ // DescKeySyncConfigSuggestion is the text key for sync config suggestion
+ // messages.
+ DescKeySyncConfigSuggestion = "sync.config.suggestion"
+ // DescKeySyncDirDescription is the text key for sync dir description messages.
+ DescKeySyncDirDescription = "sync.dir.description"
+ // DescKeySyncDirSuggestion is the text key for sync dir suggestion messages.
+ DescKeySyncDirSuggestion = "sync.dir.suggestion"
)
diff --git a/internal/config/embed/text/task.go b/internal/config/embed/text/task.go
index 04d42a6b2..e71622ec0 100644
--- a/internal/config/embed/text/task.go
+++ b/internal/config/embed/text/task.go
@@ -8,30 +8,54 @@ package text
// DescKeys for task archive output.
const (
- DescKeyTaskArchiveDryRunBlock = "task-archive.dry-run-block"
- DescKeyTaskArchiveNoCompleted = "task-archive.no-completed"
- DescKeyTaskArchivePendingRemain = "task-archive.pending-remain"
+ // DescKeyTaskArchiveDryRunBlock is the text key for task archive dry run
+ // block messages.
+ DescKeyTaskArchiveDryRunBlock = "task-archive.dry-run-block"
+ // DescKeyTaskArchiveNoCompleted is the text key for task archive no completed
+ // messages.
+ DescKeyTaskArchiveNoCompleted = "task-archive.no-completed"
+ // DescKeyTaskArchivePendingRemain is the text key for task archive pending
+ // remain messages.
+ DescKeyTaskArchivePendingRemain = "task-archive.pending-remain"
+ // DescKeyTaskArchiveSkipIncomplete is the text key for task archive skip
+ // incomplete messages.
DescKeyTaskArchiveSkipIncomplete = "task-archive.skip-incomplete"
- DescKeyTaskArchiveSkipping = "task-archive.skipping"
- DescKeyTaskArchiveSuccess = "task-archive.success"
+ // DescKeyTaskArchiveSkipping is the text key for task archive skipping
+ // messages.
+ DescKeyTaskArchiveSkipping = "task-archive.skipping"
+ // DescKeyTaskArchiveSuccess is the text key for task archive success messages.
+ DescKeyTaskArchiveSuccess = "task-archive.success"
+ // DescKeyTaskArchiveSuccessWithAge is the text key for task archive success
+ // with age messages.
DescKeyTaskArchiveSuccessWithAge = "task-archive.success-with-age"
)
// DescKeys for task snapshot output.
const (
- DescKeyTaskSnapshotHeaderFormat = "task-snapshot.header-format"
+ // DescKeyTaskSnapshotHeaderFormat is the text key for task snapshot header
+ // format messages.
+ DescKeyTaskSnapshotHeaderFormat = "task-snapshot.header-format"
+ // DescKeyTaskSnapshotCreatedFormat is the text key for task snapshot created
+ // format messages.
DescKeyTaskSnapshotCreatedFormat = "task-snapshot.created-format"
- DescKeyTaskSnapshotSaved = "task-snapshot.saved"
+ // DescKeyTaskSnapshotSaved is the text key for task snapshot saved messages.
+ DescKeyTaskSnapshotSaved = "task-snapshot.saved"
)
// DescKeys for task completion check nudge.
const (
- DescKeyCheckTaskCompletionFallback = "check-task-completion.fallback"
+ // DescKeyCheckTaskCompletionFallback is the text key for check task
+ // completion fallback messages.
+ DescKeyCheckTaskCompletionFallback = "check-task-completion.fallback"
+ // DescKeyCheckTaskCompletionNudgeMessage is the text key for check task
+ // completion nudge message messages.
DescKeyCheckTaskCompletionNudgeMessage = "check-task-completion.nudge-message"
)
// DescKeys for task management write output.
const (
+ // DescKeyWriteCompletedTask is the text key for write completed task messages.
DescKeyWriteCompletedTask = "write.completed-task"
- DescKeyWriteMovingTask = "write.moving-task"
+ // DescKeyWriteMovingTask is the text key for write moving task messages.
+ DescKeyWriteMovingTask = "write.moving-task"
)
diff --git a/internal/config/embed/text/test.go b/internal/config/embed/text/test.go
index 81f1e6ab2..a14b42723 100644
--- a/internal/config/embed/text/test.go
+++ b/internal/config/embed/text/test.go
@@ -8,8 +8,13 @@ package text
// DescKeys for test write output.
const (
- DescKeyWriteTestFiltered = "write.test-filtered"
+ // DescKeyWriteTestFiltered is the text key for write test filtered messages.
+ DescKeyWriteTestFiltered = "write.test-filtered"
+ // DescKeyWriteTestNoWebhook is the text key for write test no webhook
+ // messages.
DescKeyWriteTestNoWebhook = "write.test-no-webhook"
- DescKeyWriteTestResult = "write.test-result"
- DescKeyWriteTestWorking = "write.test-working"
+ // DescKeyWriteTestResult is the text key for write test result messages.
+ DescKeyWriteTestResult = "write.test-result"
+ // DescKeyWriteTestWorking is the text key for write test working messages.
+ DescKeyWriteTestWorking = "write.test-working"
)
diff --git a/internal/config/embed/text/text.go b/internal/config/embed/text/text.go
index ac7c48556..45ccc8dcd 100644
--- a/internal/config/embed/text/text.go
+++ b/internal/config/embed/text/text.go
@@ -8,5 +8,6 @@ package text
// DescKeys for text processing.
const (
+ // DescKeyStopwords is the text key for stopwords messages.
DescKeyStopwords = "stopwords"
)
diff --git a/internal/config/embed/text/time.go b/internal/config/embed/text/time.go
index c8d9f1bc0..410639b56 100644
--- a/internal/config/embed/text/time.go
+++ b/internal/config/embed/text/time.go
@@ -8,25 +8,44 @@ package text
// DescKeys for time formatting write output.
const (
- DescKeyWriteTimeDayAgo = "write.time-day-ago"
- DescKeyWriteTimeDaysAgo = "write.time-days-ago"
- DescKeyWriteTimeHourAgo = "write.time-hour-ago"
- DescKeyWriteTimeHoursAgo = "write.time-hours-ago"
- DescKeyWriteTimeJustNow = "write.time-just-now"
- DescKeyWriteTimeMinuteAgo = "write.time-minute-ago"
+ // DescKeyWriteTimeDayAgo is the text key for write time day ago messages.
+ DescKeyWriteTimeDayAgo = "write.time-day-ago"
+ // DescKeyWriteTimeDaysAgo is the text key for write time days ago messages.
+ DescKeyWriteTimeDaysAgo = "write.time-days-ago"
+ // DescKeyWriteTimeHourAgo is the text key for write time hour ago messages.
+ DescKeyWriteTimeHourAgo = "write.time-hour-ago"
+ // DescKeyWriteTimeHoursAgo is the text key for write time hours ago messages.
+ DescKeyWriteTimeHoursAgo = "write.time-hours-ago"
+ // DescKeyWriteTimeJustNow is the text key for write time just now messages.
+ DescKeyWriteTimeJustNow = "write.time-just-now"
+ // DescKeyWriteTimeMinuteAgo is the text key for write time minute ago
+ // messages.
+ DescKeyWriteTimeMinuteAgo = "write.time-minute-ago"
+ // DescKeyWriteTimeMinutesAgo is the text key for write time minutes ago
+ // messages.
DescKeyWriteTimeMinutesAgo = "write.time-minutes-ago"
)
// DescKeys for time formatting.
const (
- DescKeyTimeAgo = "time.ago"
- DescKeyTimeCommitCount = "time.commit-count"
+ // DescKeyTimeAgo is the text key for time ago messages.
+ DescKeyTimeAgo = "time.ago"
+ // DescKeyTimeCommitCount is the text key for time commit count messages.
+ DescKeyTimeCommitCount = "time.commit-count"
+ // DescKeyTimeCommitsCount is the text key for time commits count messages.
DescKeyTimeCommitsCount = "time.commits-count"
- DescKeyTimeDayCount = "time.day-count"
- DescKeyTimeDaysCount = "time.days-count"
- DescKeyTimeHourCount = "time.hour-count"
- DescKeyTimeHoursCount = "time.hours-count"
- DescKeyTimeJustNow = "time.just-now"
- DescKeyTimeMinuteCount = "time.minute-count"
+ // DescKeyTimeDayCount is the text key for time day count messages.
+ DescKeyTimeDayCount = "time.day-count"
+ // DescKeyTimeDaysCount is the text key for time days count messages.
+ DescKeyTimeDaysCount = "time.days-count"
+ // DescKeyTimeHourCount is the text key for time hour count messages.
+ DescKeyTimeHourCount = "time.hour-count"
+ // DescKeyTimeHoursCount is the text key for time hours count messages.
+ DescKeyTimeHoursCount = "time.hours-count"
+ // DescKeyTimeJustNow is the text key for time just now messages.
+ DescKeyTimeJustNow = "time.just-now"
+ // DescKeyTimeMinuteCount is the text key for time minute count messages.
+ DescKeyTimeMinuteCount = "time.minute-count"
+ // DescKeyTimeMinutesCount is the text key for time minutes count messages.
DescKeyTimeMinutesCount = "time.minutes-count"
)
diff --git a/internal/config/embed/text/trace.go b/internal/config/embed/text/trace.go
index 745bc97b9..6e753cc8c 100644
--- a/internal/config/embed/text/trace.go
+++ b/internal/config/embed/text/trace.go
@@ -8,21 +8,53 @@ package text
// DescKeys for trace write output.
const (
- DescKeyWriteTraceDetailDate = "write.trace-detail-date"
- DescKeyWriteTraceDetailStatus = "write.trace-detail-status"
- DescKeyWriteTraceCommitHeader = "write.trace-commit-header"
- DescKeyWriteTraceCommitMessage = "write.trace-commit-message"
- DescKeyWriteTraceCommitDate = "write.trace-commit-date"
- DescKeyWriteTraceCommitContext = "write.trace-commit-context"
+ // DescKeyWriteTraceDetailDate is the text key for write trace detail date
+ // messages.
+ DescKeyWriteTraceDetailDate = "write.trace-detail-date"
+ // DescKeyWriteTraceDetailStatus is the text key for write trace detail status
+ // messages.
+ DescKeyWriteTraceDetailStatus = "write.trace-detail-status"
+ // DescKeyWriteTraceCommitHeader is the text key for write trace commit header
+ // messages.
+ DescKeyWriteTraceCommitHeader = "write.trace-commit-header"
+ // DescKeyWriteTraceCommitMessage is the text key for write trace commit
+ // message messages.
+ DescKeyWriteTraceCommitMessage = "write.trace-commit-message"
+ // DescKeyWriteTraceCommitDate is the text key for write trace commit date
+ // messages.
+ DescKeyWriteTraceCommitDate = "write.trace-commit-date"
+ // DescKeyWriteTraceCommitContext is the text key for write trace commit
+ // context messages.
+ DescKeyWriteTraceCommitContext = "write.trace-commit-context"
+ // DescKeyWriteTraceCommitNoContext is the text key for write trace commit no
+ // context messages.
DescKeyWriteTraceCommitNoContext = "write.trace-commit-no-context"
- DescKeyWriteTraceFileEntry = "write.trace-file-entry"
- DescKeyWriteTraceHooksEnabled = "write.trace-hooks-enabled"
- DescKeyWriteTraceHooksDisabled = "write.trace-hooks-disabled"
- DescKeyWriteTraceLastEntry = "write.trace-last-entry"
- DescKeyWriteTraceNoRefs = "write.trace-no-refs"
- DescKeyWriteTraceRefsPrefix = "write.trace-refs-prefix"
- DescKeyWriteTraceResolvedFull = "write.trace-resolved-full"
- DescKeyWriteTraceResolvedTitle = "write.trace-resolved-title"
- DescKeyWriteTraceResolvedRaw = "write.trace-resolved-raw"
- DescKeyWriteTraceTagged = "write.trace-tagged"
+ // DescKeyWriteTraceFileEntry is the text key for write trace file entry
+ // messages.
+ DescKeyWriteTraceFileEntry = "write.trace-file-entry"
+ // DescKeyWriteTraceHooksEnabled is the text key for write trace hooks enabled
+ // messages.
+ DescKeyWriteTraceHooksEnabled = "write.trace-hooks-enabled"
+ // DescKeyWriteTraceHooksDisabled is the text key for write trace hooks
+ // disabled messages.
+ DescKeyWriteTraceHooksDisabled = "write.trace-hooks-disabled"
+ // DescKeyWriteTraceLastEntry is the text key for write trace last entry
+ // messages.
+ DescKeyWriteTraceLastEntry = "write.trace-last-entry"
+ // DescKeyWriteTraceNoRefs is the text key for write trace no refs messages.
+ DescKeyWriteTraceNoRefs = "write.trace-no-refs"
+ // DescKeyWriteTraceRefsPrefix is the text key for write trace refs prefix
+ // messages.
+ DescKeyWriteTraceRefsPrefix = "write.trace-refs-prefix"
+ // DescKeyWriteTraceResolvedFull is the text key for write trace resolved full
+ // messages.
+ DescKeyWriteTraceResolvedFull = "write.trace-resolved-full"
+ // DescKeyWriteTraceResolvedTitle is the text key for write trace resolved
+ // title messages.
+ DescKeyWriteTraceResolvedTitle = "write.trace-resolved-title"
+ // DescKeyWriteTraceResolvedRaw is the text key for write trace resolved raw
+ // messages.
+ DescKeyWriteTraceResolvedRaw = "write.trace-resolved-raw"
+ // DescKeyWriteTraceTagged is the text key for write trace tagged messages.
+ DescKeyWriteTraceTagged = "write.trace-tagged"
)
diff --git a/internal/config/embed/text/trigger.go b/internal/config/embed/text/trigger.go
index efe7025a8..a5fd49862 100644
--- a/internal/config/embed/text/trigger.go
+++ b/internal/config/embed/text/trigger.go
@@ -8,22 +8,45 @@ package text
// DescKeys for trigger (hook runner) output.
const (
- DescKeyTriggerWarn = "trigger.warn"
+ // DescKeyTriggerWarn is the text key for trigger warn messages.
+ DescKeyTriggerWarn = "trigger.warn"
+ // DescKeyTriggerErrorItem is the text key for trigger error item messages.
DescKeyTriggerErrorItem = "trigger.error-item"
- DescKeyTriggerSkipWarn = "trigger.skip-warn"
+ // DescKeyTriggerSkipWarn is the text key for trigger skip warn messages.
+ DescKeyTriggerSkipWarn = "trigger.skip-warn"
)
// DescKeys for write/trigger display output.
const (
- DescKeyWriteTriggerCreated = "write.trigger-created"
- DescKeyWriteTriggerDisabled = "write.trigger-disabled"
- DescKeyWriteTriggerEnabled = "write.trigger-enabled"
- DescKeyWriteTriggerTypeHdr = "write.trigger-type-hdr"
- DescKeyWriteTriggerEntry = "write.trigger-entry"
- DescKeyWriteTriggerCount = "write.trigger-count"
- DescKeyWriteTriggerTestHdr = "write.trigger-test-hdr"
+ // DescKeyWriteTriggerCreated is the text key for write trigger created
+ // messages.
+ DescKeyWriteTriggerCreated = "write.trigger-created"
+ // DescKeyWriteTriggerDisabled is the text key for write trigger disabled
+ // messages.
+ DescKeyWriteTriggerDisabled = "write.trigger-disabled"
+ // DescKeyWriteTriggerEnabled is the text key for write trigger enabled
+ // messages.
+ DescKeyWriteTriggerEnabled = "write.trigger-enabled"
+ // DescKeyWriteTriggerTypeHdr is the text key for write trigger type hdr
+ // messages.
+ DescKeyWriteTriggerTypeHdr = "write.trigger-type-hdr"
+ // DescKeyWriteTriggerEntry is the text key for write trigger entry messages.
+ DescKeyWriteTriggerEntry = "write.trigger-entry"
+ // DescKeyWriteTriggerCount is the text key for write trigger count messages.
+ DescKeyWriteTriggerCount = "write.trigger-count"
+ // DescKeyWriteTriggerTestHdr is the text key for write trigger test hdr
+ // messages.
+ DescKeyWriteTriggerTestHdr = "write.trigger-test-hdr"
+ // DescKeyWriteTriggerTestInput is the text key for write trigger test input
+ // messages.
DescKeyWriteTriggerTestInput = "write.trigger-test-input"
+ // DescKeyWriteTriggerCancelled is the text key for write trigger cancelled
+ // messages.
DescKeyWriteTriggerCancelled = "write.trigger-cancelled"
- DescKeyWriteTriggerContext = "write.trigger-context"
- DescKeyWriteTriggerErrLine = "write.trigger-err-line"
+ // DescKeyWriteTriggerContext is the text key for write trigger context
+ // messages.
+ DescKeyWriteTriggerContext = "write.trigger-context"
+ // DescKeyWriteTriggerErrLine is the text key for write trigger err line
+ // messages.
+ DescKeyWriteTriggerErrLine = "write.trigger-err-line"
)
diff --git a/internal/config/embed/text/vscode.go b/internal/config/embed/text/vscode.go
index ee6e7369d..2b50ac3d5 100644
--- a/internal/config/embed/text/vscode.go
+++ b/internal/config/embed/text/vscode.go
@@ -14,11 +14,11 @@ const (
 DescKeyWriteVscodeExistsSkipped = "write.vscode-exists-skipped"
- // DescKeyWriteVscodeRecommendationExists reports the extension
- // recommendation already exists.
+ // DescKeyWriteVscodeRecommendationExists is the text key for write vscode
+ // recommendation exists messages.
 DescKeyWriteVscodeRecommendationExists = "write.vscode-recommendation-exists"
- // DescKeyWriteVscodeAddManually reports the file exists but lacks
- // the ctx recommendation.
+ // DescKeyWriteVscodeAddManually is the text key for write vscode add manually
+ // messages.
 DescKeyWriteVscodeAddManually = "write.vscode-add-manually"
- // DescKeyWriteVscodeWarnNonFatal reports a non-fatal error during
- // artifact creation.
+ // DescKeyWriteVscodeWarnNonFatal is the text key for write vscode warn non
+ // fatal messages.
 DescKeyWriteVscodeWarnNonFatal = "write.vscode-warn-non-fatal"
 )
)
diff --git a/internal/config/embed/text/watch.go b/internal/config/embed/text/watch.go
index 69d00da45..564ef9f2a 100644
--- a/internal/config/embed/text/watch.go
+++ b/internal/config/embed/text/watch.go
@@ -8,15 +8,24 @@ package text
// DescKeys for watch apply and preview output.
const (
- DescKeyWatchApplyFailed = "watch.apply-failed"
- DescKeyWatchApplySuccess = "watch.apply-success"
+ // DescKeyWatchApplyFailed is the text key for watch apply failed messages.
+ DescKeyWatchApplyFailed = "watch.apply-failed"
+ // DescKeyWatchApplySuccess is the text key for watch apply success messages.
+ DescKeyWatchApplySuccess = "watch.apply-success"
+ // DescKeyWatchDryRunPreview is the text key for watch dry run preview
+ // messages.
DescKeyWatchDryRunPreview = "watch.dry-run-preview"
- DescKeyWatchWatching = "watch.watching"
+ // DescKeyWatchWatching is the text key for watch watching messages.
+ DescKeyWatchWatching = "watch.watching"
)
// DescKeys for watch lifecycle messages.
const (
+ // DescKeyWatchCloseLogError is the text key for watch close log error
+ // messages.
DescKeyWatchCloseLogError = "watch.close-log-error"
- DescKeyWatchDryRun = "watch.dry-run-banner"
- DescKeyWatchStopHint = "watch.stop-hint"
+ // DescKeyWatchDryRun is the text key for watch dry run messages.
+ DescKeyWatchDryRun = "watch.dry-run-banner"
+ // DescKeyWatchStopHint is the text key for watch stop hint messages.
+ DescKeyWatchStopHint = "watch.stop-hint"
)
diff --git a/internal/config/embed/text/write.go b/internal/config/embed/text/write.go
index 3521e94db..843c61af0 100644
--- a/internal/config/embed/text/write.go
+++ b/internal/config/embed/text/write.go
@@ -8,19 +8,32 @@ package text
// DescKeys for common write confirmations.
const (
- DescKeyWriteAddedTo = "write.added-to"
- DescKeyWriteArchived = "write.archived"
+ // DescKeyWriteAddedTo is the text key for write added to messages.
+ DescKeyWriteAddedTo = "write.added-to"
+ // DescKeyWriteArchived is the text key for write archived messages.
+ DescKeyWriteArchived = "write.archived"
+ // DescKeyWriteSpecNudgeTip is the text key for write spec nudge tip messages.
DescKeyWriteSpecNudgeTip = "write.spec-nudge-tip"
)
// DescKeys for general write formatting.
const (
- DescKeyWriteDryRun = "write.dry-run"
+ // DescKeyWriteDryRun is the text key for write dry run messages.
+ DescKeyWriteDryRun = "write.dry-run"
+ // DescKeyWriteExistsWritingAsAlternative is the text key for write exists
+ // writing as alternative messages.
DescKeyWriteExistsWritingAsAlternative = "write.exists-writing-as-alternative"
- DescKeyWriteLines = "write.lines"
- DescKeyWriteLogLineFormat = "write.log-line-format"
- DescKeyWriteLinesPrevious = "write.lines-previous"
- DescKeyWriteMirror = "write.mirror"
- DescKeyWriteNewContent = "write.new-content"
- DescKeyWriteSource = "write.source"
+ // DescKeyWriteLines is the text key for write lines messages.
+ DescKeyWriteLines = "write.lines"
+ // DescKeyWriteLogLineFormat is the text key for write log line format
+ // messages.
+ DescKeyWriteLogLineFormat = "write.log-line-format"
+ // DescKeyWriteLinesPrevious is the text key for write lines previous messages.
+ DescKeyWriteLinesPrevious = "write.lines-previous"
+ // DescKeyWriteMirror is the text key for write mirror messages.
+ DescKeyWriteMirror = "write.mirror"
+ // DescKeyWriteNewContent is the text key for write new content messages.
+ DescKeyWriteNewContent = "write.new-content"
+ // DescKeyWriteSource is the text key for write source messages.
+ DescKeyWriteSource = "write.source"
)
diff --git a/internal/config/embed/text/zensical.go b/internal/config/embed/text/zensical.go
index 1ba0ed834..1f6f9a2a7 100644
--- a/internal/config/embed/text/zensical.go
+++ b/internal/config/embed/text/zensical.go
@@ -8,5 +8,7 @@ package text
// DescKeys for zensical site output.
const (
+ // DescKeyErrSiteZensicalNotFound is the text key for err site zensical not
+ // found messages.
DescKeyErrSiteZensicalNotFound = "err.site.zensical-not-found"
)
From 31cd6023a4b52ede70b49c85d5534587174ad86f Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 01:39:29 -0700
Subject: [PATCH 05/13] fix: migrate 156 string constants to config/, tighten
TestNoMagicStrings
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Remove blanket isConstDef exemption from TestNoMagicStrings — const
definitions outside config/ are now caught as magic strings. All 156
violations resolved:
- Typed string enums (IssueType, TriggerType, InclusionMode,
ResourceStatus, ResourceKind, CheckName, StatusType) moved to
config/{drift,trigger,steering,sysinfo}/ with types.go convention.
- Path/label constants moved to config/{io,setup,skill,steering,
trigger,hook,rc,bootstrap,dep}/.
- Templates (bash script scaffold, steering foundation files) moved
to assets/tpl/{tpl_trigger,tpl_steering}.go.
- User-facing messages moved to YAML + desc.Text().
- Duplicate paramSummary replaced with field.Summary.
- Dead LabelAllTools in config/trigger deleted.
- Mixed visibility fix: hooks() moved to separate file.
- config/README.md updated with config/ vs entity/ type guidance.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
.context/ARCHITECTURE.md | 7 +-
.context/DECISIONS.md | 293 ++++----------
.context/LEARNINGS.md | 380 ++++--------------
.context/TASKS.md | 64 +--
.../decisions-consolidated-2026-04-03.md | 229 +++++++++++
.../learnings-consolidated-2026-04-03.md | 295 ++++++++++++++
.context/archive/tasks-2026-04-03.md | 64 +++
internal/assets/commands/text/ui.yaml | 54 +++
internal/assets/hooks/messages/doc.go | 4 +-
internal/assets/hooks/messages/hooks.go | 24 ++
internal/assets/hooks/messages/registry.go | 24 --
.../assets/hooks/messages/registry_test.go | 16 +-
internal/assets/tpl/tpl_steering.go | 55 +++
internal/assets/tpl/tpl_trigger.go | 32 ++
internal/audit/dead_exports_test.go | 2 +-
internal/audit/magic_strings_test.go | 12 +-
internal/bootstrap/cmd.go | 3 +-
internal/cli/dep/core/python/python.go | 9 +-
internal/cli/dep/core/rust/rust.go | 7 +-
internal/cli/doctor/core/check/check.go | 9 +-
internal/cli/drift/core/fix/fix.go | 11 +-
internal/cli/drift/core/out/out.go | 19 +-
internal/cli/drift/core/sanitize/sanitize.go | 18 +-
internal/cli/setup/core/cline/cline.go | 17 +-
internal/cli/setup/core/cursor/cursor.go | 21 +-
internal/cli/setup/core/cursor/deploy.go | 7 +-
internal/cli/setup/core/kiro/deploy.go | 7 +-
internal/cli/setup/core/kiro/kiro.go | 23 +-
internal/cli/steering/cmd/add/cmd.go | 3 +-
internal/cli/steering/cmd/initcmd/cmd.go | 3 +-
internal/cli/steering/cmd/list/cmd.go | 7 +-
internal/cli/steering/cmd/preview/cmd.go | 7 +-
.../cli/system/cmd/message/cmd/edit/run.go | 3 +-
internal/cli/trigger/cmd/add/cmd.go | 29 +-
internal/cli/trigger/cmd/list/cmd.go | 15 +-
internal/cli/trigger/cmd/test/cmd.go | 23 +-
internal/compat/compat_test.go | 3 +-
internal/config/README.md | 23 ++
internal/config/bootstrap/bootstrap.go | 4 +
internal/config/dep/dep.go | 8 +
internal/config/dir/dir.go | 6 +
internal/config/drift/doc.go | 13 +
internal/config/drift/types.go | 112 ++++++
internal/config/embed/text/mcp_tool.go | 23 ++
internal/config/embed/text/setup.go | 31 ++
internal/config/embed/text/skill.go | 3 +
internal/config/embed/text/steering.go | 6 +
internal/config/embed/text/trigger.go | 8 +
internal/config/hook/category.go | 17 +
internal/config/hook/hook.go | 1 +
internal/config/io/doc.go | 13 +
internal/config/io/prefix.go | 37 ++
internal/config/mcp/schema/schema.go | 3 +
internal/config/rc/doc.go | 13 +
internal/config/setup/doc.go | 14 +
internal/config/setup/setup.go | 54 +++
internal/config/skill/doc.go | 14 +
internal/config/skill/skill.go | 11 +
internal/config/steering/doc.go | 14 +
internal/config/steering/steering.go | 28 ++
internal/config/steering/types.go | 21 +
internal/config/sysinfo/resource.go | 33 ++
internal/config/token/delim.go | 5 +
internal/config/token/whitespace.go | 6 +
internal/config/trigger/doc.go | 14 +
internal/config/trigger/trigger.go | 25 ++
internal/config/trigger/types.go | 26 ++
internal/drift/check.go | 35 +-
internal/drift/check_ext.go | 39 +-
internal/drift/check_ext_test.go | 35 +-
internal/drift/detector.go | 19 +-
internal/drift/detector_test.go | 43 +-
internal/drift/types.go | 100 +----
internal/entity/trigger.go | 19 +-
internal/io/prefix.go | 24 --
internal/io/validate.go | 3 +-
internal/mcp/handler/session_hooks.go | 37 +-
internal/mcp/handler/steering.go | 14 +-
internal/mcp/proto/schema.go | 3 -
.../mcp/server/route/initialize/dispatch.go | 3 +-
internal/mcp/server/server_test.go | 7 +-
internal/rc/default.go | 9 +-
internal/skill/install.go | 7 +-
internal/skill/install_test.go | 14 +-
internal/skill/load.go | 13 +-
internal/skill/load_test.go | 4 +-
internal/skill/manifest.go | 10 +-
internal/steering/filter_test.go | 29 +-
internal/steering/format.go | 45 ++-
internal/steering/frontmatter.go | 10 +-
internal/steering/match.go | 8 +-
internal/steering/parse.go | 14 +-
internal/steering/parse_prop_test.go | 5 +-
internal/steering/parse_test.go | 17 +-
internal/steering/sync.go | 22 -
internal/steering/types.go | 60 +--
internal/sysinfo/threshold.go | 19 +-
internal/sysinfo/types.go | 30 +-
internal/trigger/discover.go | 4 +-
internal/trigger/discover_test.go | 26 +-
internal/trigger/runner_test.go | 18 +-
internal/trigger/security.go | 7 +-
internal/trigger/types.go | 37 +-
internal/write/resource/format.go | 9 +-
internal/write/setup/hook.go | 66 +--
internal/write/skill/skill.go | 5 +-
internal/write/steering/steering.go | 12 +-
internal/write/trigger/trigger.go | 16 +-
108 files changed, 1936 insertions(+), 1346 deletions(-)
create mode 100644 .context/archive/decisions-consolidated-2026-04-03.md
create mode 100644 .context/archive/learnings-consolidated-2026-04-03.md
create mode 100644 .context/archive/tasks-2026-04-03.md
create mode 100644 internal/assets/hooks/messages/hooks.go
create mode 100644 internal/assets/tpl/tpl_steering.go
create mode 100644 internal/assets/tpl/tpl_trigger.go
create mode 100644 internal/config/drift/doc.go
create mode 100644 internal/config/drift/types.go
create mode 100644 internal/config/hook/category.go
create mode 100644 internal/config/io/doc.go
create mode 100644 internal/config/io/prefix.go
create mode 100644 internal/config/rc/doc.go
create mode 100644 internal/config/setup/doc.go
create mode 100644 internal/config/setup/setup.go
create mode 100644 internal/config/skill/doc.go
create mode 100644 internal/config/skill/skill.go
create mode 100644 internal/config/steering/doc.go
create mode 100644 internal/config/steering/steering.go
create mode 100644 internal/config/steering/types.go
create mode 100644 internal/config/sysinfo/resource.go
create mode 100644 internal/config/trigger/doc.go
create mode 100644 internal/config/trigger/trigger.go
create mode 100644 internal/config/trigger/types.go
delete mode 100644 internal/io/prefix.go
diff --git a/.context/ARCHITECTURE.md b/.context/ARCHITECTURE.md
index 26bc2b5ca..367576c14 100644
--- a/.context/ARCHITECTURE.md
+++ b/.context/ARCHITECTURE.md
@@ -284,9 +284,10 @@ Six defense layers (innermost to outermost):
### Layered Package Taxonomy
Every CLI package follows `cmd/root + core/` taxonomy (Decision
-2026-03-06). `cmd/root/cmd.go` defines the Cobra command;
-`cmd/root/run.go` implements the handler. Shared logic lives in
-`core/`. Grouping commands use `internal/cli/parent.Cmd()` factory.
+2026-03-06). Each feature's `cmd/root/cmd.go` defines the Cobra
+command; `cmd/root/run.go` implements the handler. Shared logic
+lives in `core/`. Grouping commands use `internal/cli/parent.Cmd()`
+factory.
### Config Explosion
diff --git a/.context/DECISIONS.md b/.context/DECISIONS.md
index 59cc4667b..aa1d39f0d 100644
--- a/.context/DECISIONS.md
+++ b/.context/DECISIONS.md
@@ -3,8 +3,12 @@
| Date | Decision |
|----|--------|
+| 2026-04-03 | Output functions belong in write/ (consolidated) |
+| 2026-04-03 | YAML text externalization pipeline (consolidated) |
+| 2026-04-03 | Package taxonomy and code placement (consolidated) |
+| 2026-04-03 | Eager init over lazy loading (consolidated) |
+| 2026-04-03 | Pure logic separation of concerns (consolidated) |
| 2026-04-03 | config/ explosion is correct — fix is documentation, not restructuring |
-| 2026-04-03 | YAML text externalization justification is agent legibility, not i18n |
| 2026-04-01 | IRC to Discord as primary community channel |
| 2026-04-01 | AST audit tests live in internal/audit/, one file per check |
| 2026-04-01 | Split assets/hooks/ into assets/integrations/ + assets/hooks/messages/ |
@@ -20,37 +24,23 @@
| 2026-03-25 | Prompt templates removed — skills are the single agent instruction mechanism |
| 2026-03-24 | Write-once baseline with explicit end-consolidation for consolidation lifecycle |
| 2026-03-23 | Pre/pre HTML tags promoted to shared constants in config/marker |
-| 2026-03-23 | Pure-data param structs in entity — replace function pointers with text keys |
-| 2026-03-22 | No runtime pluralization — use singular/plural text key pairs |
| 2026-03-22 | Output functions belong in write/, never in core/ or cmd/ |
-| 2026-03-21 | Output functions belong in write/, logic and types in core/ |
| 2026-03-20 | Shared formatting utilities belong in internal/format |
| 2026-03-20 | Go-YAML linkage check added to lint-drift as check 5 |
-| 2026-03-18 | Eager Init() for static embedded data instead of per-accessor sync.Once |
| 2026-03-18 | Singular command names for all CLI entities |
| 2026-03-17 | Pre-compute-then-print for write package output blocks |
-| 2026-03-16 | Server methods only handle dispatch and I/O, not struct construction |
-| 2026-03-16 | Explicit Init over package-level init() for resource lookup |
| 2026-03-16 | Resource name constants in config/mcp/resource, mapping in server/resource |
| 2026-03-16 | Rename --consequences flag to --consequence for singular consistency |
-| 2026-03-15 | Pure-logic CompactContext with no I/O — callers own file writes and reporting |
-| 2026-03-15 | TextDescKey exhaustive test verifies all 879 constants resolve to non-empty YAML values |
-| 2026-03-15 | Split text.yaml into 6 domain files loaded via loadYAMLDir |
| 2026-03-14 | Error package taxonomy: 22 domain files replace monolithic errors.go |
| 2026-03-14 | Session prefixes are parser vocabulary, not i18n text |
| 2026-03-14 | System path deny-list as safety net, not security boundary |
| 2026-03-14 | Config-driven freshness check with per-file review URLs |
| 2026-03-13 | Delete ctx-context-monitor skill — hook output is self-sufficient |
| 2026-03-13 | build target depends on sync-why to prevent embedded doc drift |
-| 2026-03-13 | Templates and user-facing text live in assets, structural constants stay in config |
| 2026-03-12 | Recommend companion RAGs as peer MCP servers not bridge through ctx |
-| 2026-03-12 | Split commands.yaml into 4 domain files |
| 2026-03-12 | Rename ctx-map skill to ctx-architecture |
| 2026-03-07 | Use composite directory path constants for multi-segment paths |
| 2026-03-06 | Drop fatih/color dependency — Unicode symbols are sufficient for terminal output, color was redundant |
-| 2026-03-06 | Externalize all command descriptions to embedded YAML for i18n readiness — commands.yaml holds Short/Long for 105 commands plus flag descriptions, loaded via assets.CommandDesc() and assets.FlagDesc() |
-| 2026-03-06 | cmd/root + core taxonomy for all CLI packages — single-command packages use cmd/root/{cmd.go,run.go}, multi-subcommand packages use cmd//{cmd.go,run.go}, shared helpers in core/ |
-| 2026-03-06 | Shared entry types and API live in internal/entry, not in CLI packages — domain types that multiple packages consume (mcp, watch, memory) belong in a domain package, not a CLI subpackage |
| 2026-03-06 | PR #27 (MCP server) meets v0.1 spec requirements — merge-ready pending 3 compliance fixes |
| 2026-03-06 | Skills stay CLI-based; MCP Prompts are the protocol equivalent |
| 2026-03-06 | Peer MCP model for external tool integration |
@@ -124,31 +114,87 @@ For significant decisions:
-->
-## [2026-04-03-133244] config/ explosion is correct — fix is documentation, not restructuring
+## [2026-04-03-180000] Output functions belong in write/ (consolidated)
**Status**: Accepted
-**Context**: Architecture analysis flagged 60+ config sub-packages as a bottleneck. Evaluation showed the alternative (8-10 domain packages) trades granular imports for fat dependency units. Current structure gives zero internal dependencies, surgical dependency tracking, and minimal recompile scope.
+**Consolidated from**: 2 entries (2026-03-21 to 2026-03-22)
-**Decision**: config/ explosion is correct — fix is documentation, not restructuring
+**Decision**: Output functions belong in write/, logic and types in core/, orchestration in cmd/
-**Rationale**: Go's compilation unit is the package. Granular packages mean precise dependency tracking. The developer experience cost (IDE noise, package discovery) is real but solvable with a README decision tree, not restructuring. Restructuring would be massive mechanical churn for cosmetic benefit.
+**Rationale**: The write/ taxonomy is flat by domain — each CLI feature gets its own write/ package. core/ owns domain logic and types. cmd/ owns Cobra orchestration. Functions that call cmd.Print/Println/Printf belong in write/. core/ never imports cobra for output purposes.
-**Consequence**: config/README.md written with organizational guide and decision tree. No restructuring planned. embed/text/ file count will shrink naturally when tpl/ migrates to text/template.
+**Consequence**: All new CLI output must go through a write/ package. No cmd.Print* calls in internal/cli/ outside of internal/write/.
+
+---
+
+## [2026-04-03-180000] YAML text externalization pipeline (consolidated)
+
+**Status**: Accepted
+
+**Consolidated from**: 5 entries (2026-03-06 to 2026-04-03)
+
+**Decision**: All user-facing text externalized to embedded YAML domain files, justified by agent legibility and drift prevention — not i18n
+
+**Rationale**: The real justification is agent legibility (named DescKey constants as traversable graphs) and drift prevention (TestDescKeyYAMLLinkage catches orphans mechanically). i18n is a free downstream consequence. The exhaustive test verifies all constants resolve to non-empty YAML values — new keys are automatically covered.
+
+**Consequence**: commands.yaml split into 4 domain files (commands, flags, text, examples) loaded via dedicated loaders. text.yaml split into 6 domain files loaded via loadYAMLDir. The 3-file ceremony (DescKey + YAML + write/err function) is the cost of agent-legible, drift-proof output.
+
+---
+
+## [2026-04-03-180000] Package taxonomy and code placement (consolidated)
+
+**Status**: Accepted
+
+**Consolidated from**: 3 entries (2026-03-06 to 2026-03-13)
+
+**Decision**: Three-zone taxonomy: cmd/ for Cobra wiring (cmd.go + run.go), core/ for logic and types, assets/ for templates and user-facing text. config/ for structural constants only.
+
+**Rationale**: Taxonomical symmetry makes navigation instant and agent-friendly. Domain types that multiple packages consume belong in domain packages (internal/entry), not CLI subpackages. Templates and user-facing text live in assets/ for i18n readiness; structural constants (paths, limits, regexes) stay in config/.
+
+**Consequence**: Every CLI package has the same predictable shape. Shared entry types live in internal/entry. Template files (tpl_*.go) moved from config/ to assets/. 474 files changed in initial restructuring.
+
+---
+
+## [2026-04-03-180000] Eager init over lazy loading (consolidated)
+
+**Status**: Accepted
+
+**Consolidated from**: 2 entries (2026-03-16 to 2026-03-18)
+
+**Decision**: Explicit Init() called eagerly at startup for static embedded data and resource lookups, instead of per-accessor sync.Once or package-level init()
+
+**Rationale**: Static embedded data is required at startup — sync.Once per accessor is cargo cult. Package-level init() hides startup dependencies and makes ordering unclear. Explicit Init() called from main.go / NewServer makes the dependency visible and testable.
+
+**Consequence**: Maps unexported, accessors are plain lookups. Tests call Init() in TestMain. res.Init() called from NewServer before ToList(). No package-level side effects, zero sync.Once in the lookup pipeline.
---
-## [2026-04-03-133236] YAML text externalization justification is agent legibility, not i18n
+## [2026-04-03-180000] Pure logic separation of concerns (consolidated)
**Status**: Accepted
-**Context**: Principal analysis initially framed 879-key YAML externalization as a bet on i18n. Blog post review (v0.8.0) revealed the real justification: agent legibility (named DescKey constants as traversable graphs), drift prevention (TestDescKeyYAMLLinkage catches orphans mechanically), and completing the archaeology of finding all 879 scattered strings.
+**Consolidated from**: 3 entries (2026-03-15 to 2026-03-23)
-**Decision**: YAML text externalization justification is agent legibility, not i18n
+**Decision**: Pure-logic functions return data structs; callers own I/O, file writes, and reporting. Function pointers in param structs replaced with text keys.
-**Rationale**: The v0.8.0 blog makes it explicit: finding strings is the hard part, translation is mechanical. The externalization already pays for itself through agent legibility and mechanical verification. i18n is a free downstream consequence, not the justification.
+**Rationale**: Pure logic with no I/O lets both MCP (JSON-RPC) and CLI (cobra) callers control output independently. Methods that don't access receiver state hide their true dependencies — make them free functions. If all callers of a callback vary only by a string key, the callback is data in disguise.
-**Consequence**: Future architecture analysis should frame externalization as already-justified investment. The 3-file ceremony (DescKey + YAML + write/err function) is the cost of agent-legible, drift-proof output — not speculative i18n prep.
+**Consequence**: CompactContext returns CompactResult; callers iterate FileUpdates. Server response helpers in server/out, prompt builders in server/prompt. All cross-cutting param structs in entity are function-pointer-free.
+
+---
+
+## [2026-04-03-133244] config/ explosion is correct — fix is documentation, not restructuring
+
+**Status**: Accepted
+
+**Context**: Architecture analysis flagged 60+ config sub-packages as a bottleneck. Evaluation showed the alternative (8-10 domain packages) trades granular imports for fat dependency units. Current structure gives zero internal dependencies, surgical dependency tracking, and minimal recompile scope.
+
+**Decision**: config/ explosion is correct — fix is documentation, not restructuring
+
+**Rationale**: Go's compilation unit is the package. Granular packages mean precise dependency tracking. The developer experience cost (IDE noise, package discovery) is real but solvable with a README decision tree, not restructuring. Restructuring would be massive mechanical churn for cosmetic benefit.
+
+**Consequence**: config/README.md written with organizational guide and decision tree. No restructuring planned. embed/text/ file count will shrink naturally when tpl/ migrates to text/template.
---
@@ -362,34 +408,6 @@ For significant decisions:
---
-## [2026-03-23-003346] Pure-data param structs in entity — replace function pointers with text keys
-
-**Status**: Accepted
-
-**Context**: MergeParams had UpdateFn callback, DeployParams had ListErr/ReadErr function pointers — both smuggled side effects into data structs
-
-**Decision**: Pure-data param structs in entity — replace function pointers with text keys
-
-**Rationale**: Text keys are pure data, keep entity dependency-free, and the consuming function can do the dispatch itself
-
-**Consequence**: All cross-cutting param structs in entity must be function-pointer-free; I/O functions passed as direct parameters
-
----
-
-## [2026-03-22-084509] No runtime pluralization — use singular/plural text key pairs
-
-**Status**: Accepted
-
-**Context**: Hardcoded English plural rules (+ s, y → ies) were scattered across format.Pluralize, padPluralize, and inline code — all i18n dead-ends
-
-**Decision**: No runtime pluralization — use singular/plural text key pairs
-
-**Rationale**: Different languages have vastly different plural rules. Complete sentence templates with embedded counts (time.minute-count '1 minute', time.minutes-count '%d minutes') let each locale define its own plural forms.
-
-**Consequence**: format.Pluralize and format.PluralWord are deleted. All plural output uses paired text keys with the count embedded in the template.
-
----
-
## [2026-03-22-084316] Output functions belong in write/, never in core/ or cmd/
**Status**: Accepted
@@ -404,20 +422,6 @@ For significant decisions:
---
-## [2026-03-21-084020] Output functions belong in write/, logic and types in core/
-
-**Status**: Accepted
-
-**Context**: PrintFeedReport was initially placed in cli/site/core/ but it calls cmd.Println — that's output formatting, not business logic
-
-**Decision**: Output functions belong in write/, logic and types in core/
-
-**Rationale**: The project taxonomy separates concerns: core/ owns domain logic, types, and helpers; write/ owns CLI output formatting that takes *cobra.Command for Println. Mixing them blurs the boundary and makes testing harder.
-
-**Consequence**: All functions that call cmd.Print/Println/Printf belong in the write/ package tree. core/ never imports cobra for output purposes.
-
----
-
## [2026-03-20-232506] Shared formatting utilities belong in internal/format
**Status**: Accepted
@@ -446,20 +450,6 @@ For significant decisions:
---
-## [2026-03-18-193631] Eager Init() for static embedded data instead of per-accessor sync.Once
-
-**Status**: Accepted
-
-**Context**: 4 sync.Once guards + 4 exported maps + 4 Load functions + a wrapper package for YAML that never mutates.
-
-**Decision**: Eager Init() for static embedded data instead of per-accessor sync.Once
-
-**Rationale**: Data is static and required at startup. sync.Once per accessor is cargo cult. One Init() in main.go is sufficient. Tests call Init() in TestMain.
-
-**Consequence**: Maps unexported, accessors are plain lookups, permissions and stopwords also loaded eagerly. Zero sync.Once remains in the lookup pipeline.
-
----
-
## [2026-03-18-193623] Singular command names for all CLI entities
**Status**: Accepted
@@ -488,34 +478,6 @@ For significant decisions:
---
-## [2026-03-16-122033] Server methods only handle dispatch and I/O, not struct construction
-
-**Status**: Accepted
-
-**Context**: MCP server had ok/error/writeError as methods plus prompt builders that didn't use Server state — they just constructed response structs
-
-**Decision**: Server methods only handle dispatch and I/O, not struct construction
-
-**Rationale**: Methods that don't access receiver state hide their true dependencies and inflate the Server interface. Free functions make the dependency graph explicit and are independently testable.
-
-**Consequence**: New response helpers go in server/out, prompt builders in server/prompt. Server methods are limited to dispatch (handlePromptsGet) and I/O (writeJSON, emitNotification). Same principle applies to future tool/resource builders.
-
----
-
-## [2026-03-16-104143] Explicit Init over package-level init() for resource lookup
-
-**Status**: Accepted
-
-**Context**: server/resource package used init() to silently build the URI lookup map
-
-**Decision**: Explicit Init over package-level init() for resource lookup
-
-**Rationale**: Implicit init hides startup dependencies, makes ordering unclear, and is harder to test. Explicit Init called from NewServer makes the dependency visible.
-
-**Consequence**: res.Init() called explicitly from NewServer before ToList(); no package-level side effects
-
----
-
## [2026-03-16-104142] Resource name constants in config/mcp/resource, mapping in server/resource
**Status**: Accepted
@@ -544,48 +506,6 @@ For significant decisions:
---
-## [2026-03-15-230640] Pure-logic CompactContext with no I/O — callers own file writes and reporting
-
-**Status**: Accepted
-
-**Context**: MCP server and CLI compact command both implemented task compaction independently, with the MCP handler using a local WriteContextFile wrapper
-
-**Decision**: Pure-logic CompactContext with no I/O — callers own file writes and reporting
-
-**Rationale**: Separating pure logic from I/O lets both MCP (JSON-RPC responses) and CLI (cobra cmd.Println) callers control output and file writes. Eliminates duplication and the unnecessary mcp/server/fs package
-
-**Consequence**: tidy.CompactContext returns a CompactResult struct; callers iterate FileUpdates and write them. Archive logic stays in callers since MCP and CLI have different archive policies
-
----
-
-## [2026-03-15-101336] TextDescKey exhaustive test verifies all 879 constants resolve to non-empty YAML values
-
-**Status**: Accepted
-
-**Context**: PR #42 merged with ~80 new MCP text keys but no test coverage for key-to-YAML mapping
-
-**Decision**: TextDescKey exhaustive test verifies all 879 constants resolve to non-empty YAML values
-
-**Rationale**: A single table-driven test parsing embed.go source catches typos and missing YAML entries at test time — no manual key list to maintain
-
-**Consequence**: New TextDescKey constants are automatically covered; orphaned keys fail CI
-
----
-
-## [2026-03-15-040638] Split text.yaml into 6 domain files loaded via loadYAMLDir
-
-**Status**: Accepted
-
-**Context**: text.yaml grew to 1812 lines covering write, errors, mcp, doctor, hooks, and ui domains
-
-**Decision**: Split text.yaml into 6 domain files loaded via loadYAMLDir
-
-**Rationale**: Matches existing split pattern (commands.yaml, flags.yaml, examples.yaml); loadYAMLDir merges all files in commands/text/ transparently so TextDesc() API stays unchanged
-
-**Consequence**: New domain files must go into commands/text/; loadYAMLDir reads all .yaml in the directory at init time
-
----
-
## [2026-03-14-180905] Error package taxonomy: 22 domain files replace monolithic errors.go
**Status**: Accepted
@@ -670,20 +590,6 @@ For significant decisions:
---
-## [2026-03-13-151954] Templates and user-facing text live in assets, structural constants stay in config
-
-**Status**: Accepted
-
-**Context**: Ongoing refactoring session moving Tpl* constants out of config/
-
-**Decision**: Templates and user-facing text live in assets, structural constants stay in config
-
-**Rationale**: config/ is for structural constants (paths, limits, regexes); assets/ is for templates, labels, and text that would need i18n. Clean separation of concerns
-
-**Consequence**: All tpl_entry.go, tpl_journal.go, tpl_loop.go, tpl_recall.go moved to assets/
-
----
-
## [2026-03-12-133007] Recommend companion RAGs as peer MCP servers not bridge through ctx
**Status**: Accepted
@@ -698,20 +604,6 @@ For significant decisions:
---
-## [2026-03-12-133007] Split commands.yaml into 4 domain files
-
-**Status**: Accepted
-
-**Context**: Single 2373-line YAML mixed commands, flags, text, and examples with inconsistent quoting
-
-**Decision**: Split commands.yaml into 4 domain files
-
-**Rationale**: Context is for humans — localization files should be human-readable block scalars. Separate files eliminate the underscore prefix namespace hack
-
-**Consequence**: 4 files (commands.yaml, flags.yaml, text.yaml, examples.yaml) with dedicated loaders in embed.go
-
----
-
## [2026-03-12-133007] Rename ctx-map skill to ctx-architecture
**Status**: Accepted
@@ -754,48 +646,6 @@ For significant decisions:
---
-## [2026-03-06-200257] Externalize all command descriptions to embedded YAML for i18n readiness — commands.yaml holds Short/Long for 105 commands plus flag descriptions, loaded via assets.CommandDesc() and assets.FlagDesc()
-
-**Status**: Accepted
-
-**Context**: Command descriptions were inline strings scattered across 105 cobra.Command definitions
-
-**Decision**: Externalize all command descriptions to embedded YAML for i18n readiness — commands.yaml holds Short/Long for 105 commands plus flag descriptions, loaded via assets.CommandDesc() and assets.FlagDesc()
-
-**Rationale**: Centralizing user-facing text in a single translatable file prepares for i18n without runtime cost (embedded at compile time)
-
-**Consequence**: System's 30 hidden hook subcommands excluded (not user-facing); flag descriptions use _flags.scope.name convention
-
----
-
-## [2026-03-06-200247] cmd/root + core taxonomy for all CLI packages — single-command packages use cmd/root/{cmd.go,run.go}, multi-subcommand packages use cmd//{cmd.go,run.go}, shared helpers in core/
-
-**Status**: Accepted
-
-**Context**: 35 CLI packages had inconsistent flat structures mixing Cmd(), run logic, helpers, and types in the same directory
-
-**Decision**: cmd/root + core taxonomy for all CLI packages — single-command packages use cmd/root/{cmd.go,run.go}, multi-subcommand packages use cmd//{cmd.go,run.go}, shared helpers in core/
-
-**Rationale**: Taxonomical symmetry: every package has the same predictable shape, making navigation instant and agent-friendly
-
-**Consequence**: cmd/ contains only cmd.go + run.go; helpers go to core/; 474 files changed in initial restructuring
-
----
-
-## [2026-03-06-200227] Shared entry types and API live in internal/entry, not in CLI packages — domain types that multiple packages consume (mcp, watch, memory) belong in a domain package, not a CLI subpackage
-
-**Status**: Accepted
-
-**Context**: External consumers were importing cli/add for EntryParams/ValidateEntry/WriteEntry, creating a leaky abstraction
-
-**Decision**: Shared entry types and API live in internal/entry, not in CLI packages — domain types that multiple packages consume (mcp, watch, memory) belong in a domain package, not a CLI subpackage
-
-**Rationale**: Domain types in CLI packages force consumers to depend on CLI internals; internal/entry provides a clean boundary
-
-**Consequence**: entry aliases Params from add/core to avoid import cycle (entry imports add/core for insert logic); future work may move insert logic to entry to eliminate the cycle
-
----
-
## [2026-03-06-141507] PR #27 (MCP server) meets v0.1 spec requirements — merge-ready pending 3 compliance fixes
**Status**: Accepted
@@ -1161,5 +1011,6 @@ See: `specs/injection-oversize-nudge.md`.
---
+
*Module-specific, already-shipped, and historical decisions:
[decisions-reference.md](decisions-reference.md)*
diff --git a/.context/LEARNINGS.md b/.context/LEARNINGS.md
index 19e6fc2a1..7fc00b22b 100644
--- a/.context/LEARNINGS.md
+++ b/.context/LEARNINGS.md
@@ -17,6 +17,13 @@ DO NOT UPDATE FOR:
| Date | Learning |
|----|--------|
+| 2026-04-03 | Subagent scope creep and cleanup (consolidated) |
+| 2026-04-03 | Bulk rename and replace_all hazards (consolidated) |
+| 2026-04-03 | Import cycles and package splits (consolidated) |
+| 2026-04-03 | Lint suppression and gosec patterns (consolidated) |
+| 2026-04-03 | Skill lifecycle and promotion (consolidated) |
+| 2026-04-03 | Cross-cutting change ripple (consolidated) |
+| 2026-04-03 | Dead code detection (consolidated) |
| 2026-04-03 | desc.Text() is the single highest-connectivity symbol in the codebase |
| 2026-04-01 | Raw I/O migration unlocks downstream checks for free |
| 2026-04-01 | go/packages respects build tags — darwin-only violations invisible on Linux |
@@ -29,73 +36,47 @@ DO NOT UPDATE FOR:
| 2026-03-31 | JSON Schema default fields cause linter errors with some validators |
| 2026-03-30 | Architecture diagrams drift silently during feature additions |
| 2026-03-30 | Python-generated doc.go files need gofmt — formatter strips bare // padding lines |
-| 2026-03-30 | internal/cli/recall/ was dead code — never registered in bootstrap |
| 2026-03-30 | lint-docstrings.sh greedy sed hid all return-type violations |
| 2026-03-25 | Machine-generated CLAUDE.md content consumes per-turn budget without proportional value |
-| 2026-03-25 | Dead files accumulate when nothing consumes them |
| 2026-03-25 | Template improvements don't propagate to existing projects |
| 2026-03-24 | lint-drift false positives from conflating constant namespaces |
| 2026-03-24 | git describe --tags follows ancestry, not global tag list |
| 2026-03-23 | Typography detection script needs exclusion lists for intentional uses |
-| 2026-03-23 | Subagents rename functions and restructure code beyond their scope |
| 2026-03-23 | Splitting core/ into subpackages reveals hidden structure |
| 2026-03-23 | Higher-order callbacks in param structs are a code smell |
-| 2026-03-22 | Types in god-object files create circular dependencies |
-| 2026-03-20 | replace_all on short tokens like function names will also replace their definitions |
| 2026-03-20 | Commit messages containing script paths trigger PreToolUse hooks |
-| 2026-03-19 | Rename constants to avoid gosec G101 false positives |
-| 2026-03-18 | Tests in package X cannot import X/sub packages that import X back |
-| 2026-03-18 | Bulk sed on imports displaces aliased imports |
| 2026-03-18 | Lazy sync.Once per-accessor is a code smell for static embedded data |
| 2026-03-17 | Write package output census: 69 trivial/simple, 38 consolidation candidates, 18 complex |
| 2026-03-16 | Docstring tasks require reading CONVENTIONS.md Documentation section first |
| 2026-03-16 | Convention enforcement needs mechanical verification, not behavioral repetition |
| 2026-03-16 | One-liner method wrappers hide dependencies without adding value |
| 2026-03-16 | Agents reliably introduce gofmt issues during bulk renames |
-| 2026-03-15 | replace_all on short tokens like core. mangles aliased imports |
-| 2026-03-15 | Delete legacy code instead of maintaining it — MigrateKeyFile had 5 callers and test coverage but zero users |
| 2026-03-15 | Contributor PRs need post-merge follow-up commits for convention alignment |
| 2026-03-15 | Grep for callers must cover entire working tree before deleting functions |
| 2026-03-14 | Stderr error messages are user-facing text that belongs in assets |
-| 2026-03-14 | Subagents rename packages and modify unrelated files without being asked |
| 2026-03-14 | Hardcoded _alt suffixes create implicit language favoritism |
-| 2026-03-14 | Subagents reorganize file structure without being asked |
-| 2026-03-14 | Internal skill rename requires updates across 6+ layers |
-| 2026-03-13 | Skills without a trigger mechanism are dead code |
-| 2026-03-13 | Variable shadowing causes cascading test failures after package splits |
| 2026-03-13 | sync-why mechanism existed but was not wired to build |
-| 2026-03-13 | Linter reverts import-only edits when references still use old package |
| 2026-03-12 | Project-root files vs context files are distinct categories |
| 2026-03-12 | Constants belong in their domain package not in god objects |
| 2026-03-07 | Always search for existing constants before adding new ones |
| 2026-03-07 | SafeReadFile requires split base+filename paths |
-| 2026-03-06 | Spawned agents reliably create new files but consistently fail to delete old ones — always audit for stale files, duplicate function definitions, and orphaned imports after agent-driven refactoring |
-| 2026-03-06 | Import cycle avoidance: when package A imports package B for logic, B must own shared types — A aliases them. entry imports add/core for insert logic, so add/core owns EntryParams and entry aliases it as entry.Params |
| 2026-03-06 | Stale directory inodes cause invisible files over SSH |
| 2026-03-06 | Stats sort uses string comparison on RFC3339 timestamps with mixed timezones |
| 2026-03-06 | Claude Code supports PreCompact and SessionStart hooks that ctx does not use |
-| 2026-03-06 | nolint:goconst for trivial values normalizes magic strings |
| 2026-03-06 | Package-local err.go files invite broken windows from future agents |
| 2026-03-05 | State directory accumulates silently without auto-prune |
| 2026-03-05 | Global tombstones suppress hooks across all sessions |
| 2026-03-05 | Claude Code has two separate memory systems behind feature flags |
| 2026-03-05 | Blog post editorial feedback is higher-leverage than drafting |
| 2026-03-04 | CONSTITUTION hook compliance is non-negotiable — don't work around it |
-| 2026-03-04 | nolint:errcheck in tests normalizes unchecked errors for agents |
-| 2026-03-04 | golangci-lint v2 ignores inline nolint directives for some linters |
| 2026-03-02 | Hook message registry test enforces exhaustive coverage of embedded templates |
| 2026-03-02 | Existing Projects is ambiguous framing for migration notes |
| 2026-03-02 | Claude Code JSONL model ID does not distinguish 200k from 1M context |
| 2026-03-01 | Gosec G306 flags test file WriteFile with 0644 permissions |
| 2026-03-01 | Converting PersistentPreRun to PersistentPreRunE changes exit behavior |
-| 2026-03-01 | Key path changes ripple across 15+ doc files and 2 skills |
| 2026-03-01 | Test HOME isolation is required for user-level path functions |
-| 2026-03-01 | Skill enhancement is a documentation-heavy operation across 10+ files |
| 2026-03-01 | Task descriptions can be stale in reverse — implementation done but task not marked complete |
-| 2026-03-01 | Elevating private skills requires synchronized updates across 6 layers |
| 2026-03-01 | Model-to-window mapping requires ordered prefix matching |
-| 2026-03-01 | Removing embedded asset directories requires synchronized cleanup across 5+ layers |
-| 2026-03-01 | Absorbing shell scripts into Go commands creates a discoverability gap |
| 2026-03-01 | TASKS.md template checkbox syntax inside HTML comments is parsed by RegExTaskMultiline |
| 2026-03-01 | Hook logs had no rotation; event log already did |
| 2026-02-28 | ctx pad import, ctx pad export, and ctx system resources make three hack scripts redundant |
@@ -120,12 +101,86 @@ DO NOT UPDATE FOR:
| 2026-02-22 | Linting and static analysis (consolidated) |
| 2026-02-22 | Permission and settings drift (consolidated) |
| 2026-02-22 | Gitignore and filesystem hygiene (consolidated) |
-| 2026-02-19 | Feature can be code-complete but invisible to users |
| 2026-01-28 | IDE is already the UI |
---
+## [2026-04-03-180000] Subagent scope creep and cleanup (consolidated)
+
+**Consolidated from**: 4 entries (2026-03-06 to 2026-03-23)
+
+- Subagents reliably rename functions, restructure files, change import aliases, and modify function signatures beyond their stated scope — even narrowly scoped tasks like fixing em-dashes in comments
+- Subagents create new files during refactors but consistently fail to delete the originals — always audit for stale files, duplicate definitions, and orphaned imports afterward
+- After any agent-driven refactor: run `git diff --stat` and `git diff --name-only HEAD`, revert anything outside the intended scope, and check for stale package declarations before building
+
+---
+
+## [2026-04-03-180000] Bulk rename and replace_all hazards (consolidated)
+
+**Consolidated from**: 3 entries (2026-03-15 to 2026-03-20)
+
+- `replace_all` on short tokens (e.g. `core.`, function names) matches inside longer identifiers and function definitions — `remindcore.` becomes `remindtidy.`, `func HumanAgo` becomes `func format.DurationAgo` (invalid Go)
+- `sed` insert-before-first-match does not understand Go import aliases — the alias attaches to whatever line sed inserts, not the original target
+- For function renames: delete the old definition separately rather than using replace_all. For bulk import additions: check for aliased imports first and handle them separately, or use goimports
+
+---
+
+## [2026-04-03-180000] Import cycles and package splits (consolidated)
+
+**Consolidated from**: 5 entries (2026-03-06 to 2026-03-22)
+
+- Types in god-object files (e.g. hook/types.go with 15+ types from 8 domains) create circular dependencies — move types to their owning domain package
+- Tests in parent package X cannot import X/sub packages that import X back — move tests to the sub-package they exercise
+- Variable shadowing causes cascading failures after splits: `dir`, `file`, `entry` are common Go variable names that collide with new sub-package names — run `go test ./...` before committing splits
+- When moving constants between packages, change imports and all references in a single atomic write so the linter never sees an inconsistent state
+- Import cycle rule: the package providing implementation logic must own the shared types; the facade package aliases them (e.g. `entry.Params` aliases `add/core.EntryParams`)
+
+---
+
+## [2026-04-03-180000] Lint suppression and gosec patterns (consolidated)
+
+**Consolidated from**: 4 entries (2026-03-04 to 2026-03-19)
+
+- Rename constants to avoid gosec G101 false positives (Tokens→Usage, Passed→OK) instead of adding nolint/nosec/path exclusions — exclusions break on file reorganization
+- `nolint:goconst` for trivial values normalizes magic strings — use config constants instead of suppressing the linter
+- `nolint:errcheck` in tests teaches agents to spread the pattern to production code — use `t.Fatal(err)` for setup, `defer func() { _ = f.Close() }()` for cleanup
+- golangci-lint v2 ignores inline nolint directives for some linters — use config-level `exclusions.rules` for gosec patterns, fix the code instead of suppressing errcheck
+
+---
+
+## [2026-04-03-180000] Skill lifecycle and promotion (consolidated)
+
+**Consolidated from**: 4 entries (2026-03-01 to 2026-03-14)
+
+- Internal skill renames and promotions require synchronized updates across 6+ layers: SKILL.md frontmatter, internal cross-references, external docs, embed_test.go expected list, recipe/reference docs, and plugin cache rebuild + session restart
+- Skill behavior changes ripple through hook messages, fallback strings in Go code, doc descriptions, and Makefile hints — grep for the skill name across the entire repo
+- Skills without a trigger mechanism (no user invocation, no hook loading) are dead code — audit skills for reachability
+- After promoting skills: grep -r for the old name across the whole tree, run plugin-reload.sh, restart session to verify autocomplete, and clean stale Skill() entries from settings.local.json
+
+---
+
+## [2026-04-03-180000] Cross-cutting change ripple (consolidated)
+
+**Consolidated from**: 4 entries (2026-02-19 to 2026-03-01)
+
+- Path changes (e.g. key file location) ripple across 15+ doc files and 2 skills — grep broadly (not just code) and budget for 15+ file touches
+- Removing embedded asset directories requires synchronized cleanup across 5+ layers: embed directive, accessor functions, callers, tests, config constants, build targets, documentation — work outward from the embed
+- Absorbing shell scripts into Go commands creates a discoverability gap — update contributing.md, common-workflows.md, and CLI index as part of the absorption checklist
+- A feature without docs is invisible to users: always check feature page, cli-reference.md, relevant recipes, and zensical.toml nav after implementing a new CLI subcommand
+
+---
+
+## [2026-04-03-180000] Dead code detection (consolidated)
+
+**Consolidated from**: 3 entries (2026-03-15 to 2026-03-30)
+
+- Dead packages can build and test green while being completely unreachable — detection requires checking bootstrap registration, not just build success (e.g. internal/cli/recall/ existed with tests but was never wired into the command tree)
+- Files created by `ctx init` that no agent, hook, or skill ever reads are dead on arrival — verify there is at least one consumer before adding to init scaffolding
+- When touching legacy compat code, first ask whether the legacy path has real users — if not, delete it entirely rather than improving it (MigrateKeyFile had 5 callers and test coverage but zero users)
+
+---
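The bootstrap-registration check above reduces to a set difference: every known package must appear in the registered set, or it is unreachable. A toy sketch of that compliance test's core logic (package names are illustrative, not the real command tree):

```go
package main

import "fmt"

// unregistered returns every package that builds but is never wired into
// the registry — the "green but unreachable" case described above.
func unregistered(all []string, registered map[string]bool) []string {
	var dead []string
	for _, p := range all {
		if !registered[p] {
			dead = append(dead, p)
		}
	}
	return dead
}

func main() {
	all := []string{"add", "journal", "recall"}
	registered := map[string]bool{"add": true, "journal": true}
	fmt.Println(unregistered(all, registered)) // recall is dead code
}
```

In a real compliance test, `all` would come from walking the `cli/` tree and `registered` from the bootstrap file, so new packages fail CI until wired in.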
+
## [2026-04-03-133244] desc.Text() is the single highest-connectivity symbol in the codebase
**Context**: GitNexus enrichment during architecture analysis revealed desc.Text() (internal/assets/read/desc/desc.go:75) has 30+ direct callers spanning every architectural layer (MCP handler, format, index, tidy, trace, memory, sysinfo, io) and participates in 53 execution flows.
@@ -246,16 +301,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-30-003720] internal/cli/recall/ was dead code — never registered in bootstrap
-
-**Context**: The entire recall CLI package existed with tests but was never wired into the command tree. Journal consumed it but nobody deleted the ghost
-
-**Lesson**: Dead package detection requires checking bootstrap registration, not just build success. A package can build and test green while being completely unreachable
-
-**Application**: Add a compliance test that verifies all cli/ packages are registered in bootstrap
-
----
-
## [2026-03-30-003707] lint-docstrings.sh greedy sed hid all return-type violations
**Context**: sed 's/.*) //' consumed return type parens, leaving { — functions with return types were invisible to the script for months
@@ -276,16 +321,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-25-173339] Dead files accumulate when nothing consumes them
-
-**Context**: IMPLEMENTATION_PLAN.md and PROMPT.md were created by ctx init but no agent, hook, or skill ever read them
-
-**Lesson**: Before adding a file to init scaffolding, verify there is at least one consumer. Periodically audit what init creates vs what the system reads.
-
-**Application**: The prompt deprecation spec documents the reasoning as a papertrail for future removals.
-
----
-
## [2026-03-25-173338] Template improvements don't propagate to existing projects
**Context**: 5 of 8 context files in the ctx project itself had stale/missing comment headers — templates evolved but non-destructive init never re-synced them
@@ -326,16 +361,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-23-165610] Subagents rename functions and restructure code beyond their scope
-
-**Context**: Agents tasked with fixing em-dashes in comments also renamed exported functions, changed import aliases, and modified function signatures
-
-**Lesson**: Always diff-audit agent output for structural changes before accepting edits, even when the task is narrowly scoped
-
-**Application**: After any agent batch edit, run git diff --stat and scan for non-comment changes before staging
-
----
-
## [2026-03-23-003544] Splitting core/ into subpackages reveals hidden structure
**Context**: init core/ was a flat bag of domain objects — splitting into backup/, claude/, entry/, merge/, plan/, plugin/, project/, prompt/, tpl/, validate/ exposed duplicated logic, misplaced types, and function-pointer smuggling that were invisible in the flat layout
@@ -356,26 +381,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-22-220846] Types in god-object files create circular dependencies
-
-**Context**: hook/types.go had 15+ types from 8 domains; session importing hook for SessionTokenInfo created a cycle
-
-**Lesson**: Moving types to their owning domain package breaks import cycles
-
-**Application**: When a type is only used by one domain package, move it there. Check with grep before assuming a type is cross-cutting.
-
----
-
-## [2026-03-20-232518] replace_all on short tokens like function names will also replace their definitions
-
-**Context**: Using replace_all to rename HumanAgo to format.DurationAgo also changed the func declaration line to func format.DurationAgo which is invalid Go
-
-**Lesson**: replace_all matches everywhere in the file including function definitions, not just call sites
-
-**Application**: When renaming functions, delete the old definition separately rather than using replace_all. Or use a more specific match pattern that excludes func declarations.
-
----
-
## [2026-03-20-160112] Commit messages containing script paths trigger PreToolUse hooks
**Context**: Git commit message body contained a path to a shell script under the hack directory which matched a hook pattern that blocks direct script invocation
@@ -386,36 +391,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-19-194942] Rename constants to avoid gosec G101 false positives
-
-**Context**: Constants named ColTokens, DriftPassed, StatusTokensFormat triggered gosec G101 credential detection. Suppressing with nolint or broadening .golangci.yml exclusion paths is fragile — paths change when files split.
-
-**Lesson**: Rename the constant to avoid the trigger word instead of suppressing the lint. Tokens→Usage, Passed→OK convey the same semantics without false positives. This is cleaner than nolint annotations or path-based exclusions that break on file reorganization.
-
-**Application**: When gosec flags a constant name, ask what the value semantically represents and rename to that. Do not add nolint, nosec, or path exclusions.
-
----
-
-## [2026-03-18-193616] Tests in package X cannot import X/sub packages that import X back
-
-**Context**: embed_test.go in package assets kept importing read/ sub-packages that import assets.FS, creating cycles. Recurred 4 times this session.
-
-**Lesson**: Tests that exercise domain accessor packages must live in those packages, not in the parent. The parent test file can only use the shared resource (FS) directly.
-
-**Application**: When splitting functions from a parent package into sub-packages, move the corresponding tests too. Do not leave them in the parent.
-
----
-
-## [2026-03-18-193602] Bulk sed on imports displaces aliased imports
-
-**Context**: Used sed to add desc import to 278 files. Files with aliased imports like ctxCfg config/ctx got the alias stolen by the new import line inserted before it.
-
-**Lesson**: sed insert-before-first-match does not understand Go import aliases. The alias attaches to whatever import line sed inserts, not the original target.
-
-**Application**: When bulk-adding imports, check for aliased imports first and handle them separately. Or use goimports if available.
-
----
-
## [2026-03-18-133457] Lazy sync.Once per-accessor is a code smell for static embedded data
**Context**: assets package had 4 sync.Once guards, 4 exported maps, 4 Load*() functions, and a wrapper desc package — all to lazily load YAML from embed.FS that never mutates. Every accessor call went through sync.Once + global map + wrapper indirection.
@@ -476,26 +451,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-15-230643] replace_all on short tokens like core. mangles aliased imports
-
-**Context**: Used Edit tool replace_all to change core. to tidy. across handle_tool.go
-
-**Lesson**: Short replacement tokens match inside longer identifiers — remindcore. became remindtidy. silently
-
-**Application**: When doing bulk renames, prefer targeted replacements or grep for collateral damage immediately after. Avoid replace_all on tokens shorter than the full identifier
-
----
-
-## [2026-03-15-101346] Delete legacy code instead of maintaining it — MigrateKeyFile had 5 callers and test coverage but zero users
-
-**Context**: Started by adding constants for legacy key names, then realized nobody uses legacy keys
-
-**Lesson**: When touching legacy compat code, first ask whether the legacy path has real users. If not, delete it entirely rather than improving it
-
-**Application**: Apply to any backward-compat shim: check actual usage before investing in maintenance
-
----
-
## [2026-03-15-101342] Contributor PRs need post-merge follow-up commits for convention alignment
**Context**: PR #42 (MCP v0.2) addressed bulk of review feedback but left ~12 inline strings, no embed_test coverage, and substring matching in containsOverlap
@@ -526,16 +481,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-14-180902] Subagents rename packages and modify unrelated files without being asked
-
-**Context**: ET agent renamed internal/eventlog/ to internal/log/ and modified 5+ caller files outside the internal/err/ scope
-
-**Lesson**: Always diff agent output against HEAD to catch scope creep before building; revert unrelated changes immediately
-
-**Application**: After any agent-driven refactor, run git diff --name-only HEAD and revert anything outside the intended scope before testing
-
----
-
## [2026-03-14-131202] Hardcoded _alt suffixes create implicit language favoritism
**Context**: Session parser had session_prefix_alt hardcoding Turkish as a special case alongside English default
@@ -546,46 +491,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-14-110750] Subagents reorganize file structure without being asked
-
-**Context**: Asked subagent to replace os.ReadFile callsites with validation wrappers. It also moved functions to new files, renamed them (ReadUserFile to SafeReadUserFile), and created a new internal/io package.
-
-**Lesson**: Subagents optimize for clean results and will restructure beyond stated scope — moved, renamed, and split files without being asked.
-
-**Application**: After subagent-driven refactors, always verify file organization matches intent. Audit for moved, renamed, and split files, not just the requested callsite changes.
-
----
-
-## [2026-03-14-093757] Internal skill rename requires updates across 6+ layers
-
-**Context**: Renamed ctx-alignment-audit to _ctx-alignment-audit. The allow list test in embed_test.go failed because it iterates all bundled skills and expects each in the allow list.
-
-**Lesson**: The allow list test needed a strings.HasPrefix(_) skip for internal skills. This was not obvious until tests ran.
-
-**Application**: When converting public to internal skills, audit: allow.txt, embed_test.go allow list test, reference/skills.md, all recipe docs referencing the skill, contributing.md dev-only skills table, and permissions docs.
-
----
-
-## [2026-03-13-223110] Skills without a trigger mechanism are dead code
-
-**Context**: ctx-context-monitor was a skill documenting how to respond to hook output, but no hook or agent ever loaded it. The hook output already contained sufficient instructions.
-
-**Lesson**: A skill only enters the agent context when explicitly invoked via /skill-name. If the description says not user-invocable and no mechanism loads it automatically, it is unreachable.
-
-**Application**: Audit skills for reachability. If nothing triggers the skill, either add a trigger or delete it.
-
----
-
-## [2026-03-13-223108] Variable shadowing causes cascading test failures after package splits
-
-**Context**: Large refactoring moved constants from monolithic config package to sub-packages (dir, entry, file). Test files had local variables named dir, entry, file that shadowed the new package imports.
-
-**Lesson**: When splitting a package, audit test files for local variable names that collide with new package names. dir, file, entry are especially common Go variable names.
-
-**Application**: Before committing a package split, run go test ./... and check for undefined errors caused by variable shadowing.
-
----
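
The shadowing failure mode above is easy to reproduce. A minimal sketch (paths and names are hypothetical; `path/filepath` stands in for a short-named sub-package like `dir` or `file`): a local variable named after an imported package breaks every later use of that package in the same scope, so pick non-colliding local names when a split introduces short package names.

```go
package main

import (
	"fmt"
	"path/filepath" // stands in for a short-named sub-package such as dir or file
)

// notePath compiles because the local is named dirPath, not filepath.
// Naming it `filepath` would shadow the import and turn the
// filepath.Join call below into a compile error after a package split.
func notePath(dirPath string) string {
	return filepath.Join(dirPath, "notes.md")
}

func main() {
	fmt.Println(notePath("/tmp/work"))
}
```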
-
## [2026-03-13-151952] sync-why mechanism existed but was not wired to build
**Context**: assets/why/ had drifted from docs/ — the sync targets existed in the Makefile but build did not depend on sync-why
@@ -596,16 +501,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-13-151951] Linter reverts import-only edits when references still use old package
-
-**Context**: Moving tpl_entry.go from config/entry to assets — linter kept reverting the import change
-
-**Lesson**: When moving constants between packages, change imports and all references in a single atomic write (use Write not incremental Edit), so the linter never sees an inconsistent state
-
-**Application**: For future package migrations, use full file rewrites when a linter is active
-
----
-
## [2026-03-12-133008] Project-root files vs context files are distinct categories
**Context**: Tried moving ImplementationPlan constant to config/ctx assuming it was a context file. (Note: IMPLEMENTATION_PLAN.md was removed in 2026-03-25 as a dead file — no agent consumer.)
@@ -646,26 +541,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-06-200319] Spawned agents reliably create new files but consistently fail to delete old ones — always audit for stale files, duplicate function definitions, and orphaned imports after agent-driven refactoring
-
-**Context**: Multiple agent batches across cmd/ restructuring, color removal, and flag externalization left stale files, duplicate run.go, and unupdated parent imports
-
-**Lesson**: Agent cleanup is a known gap — budget 5-10 minutes for post-agent audit per batch
-
-**Application**: After every agent batch: grep for stale package declarations, check parent imports point to cmd/root not cmd/, verify old files are deleted
-
----
-
-## [2026-03-06-200237] Import cycle avoidance: when package A imports package B for logic, B must own shared types — A aliases them. entry imports add/core for insert logic, so add/core owns EntryParams and entry aliases it as entry.Params
-
-**Context**: Extracting entry.Params as a standalone struct in internal/entry created a cycle because entry/write.go imports add/core for AppendEntry
-
-**Lesson**: The package that provides implementation logic must own the types; the facade package aliases them
-
-**Application**: When extracting shared types from implementation packages, check the import direction first — the type lives where the logic lives
-
----
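
The ownership rule above reads abstractly; a single-file sketch (the type and function are hypothetical stand-ins for add/core's EntryParams and AppendEntry) shows the alias direction:

```go
package main

import "fmt"

// Stand-in for add/core, which owns the insert logic and therefore the type.
type EntryParams struct{ Title string }

// AppendEntry is the logic that fixes the import direction.
func AppendEntry(p EntryParams) string { return "appended: " + p.Title }

// What the facade package (entry) does instead of redefining the struct:
// alias it, so entry -> add/core stays the only import edge and no cycle forms.
type Params = EntryParams

func main() {
	fmt.Println(AppendEntry(Params{Title: "demo"})) // prints "appended: demo"
}
```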
-
## [2026-03-06-141506] Stale directory inodes cause invisible files over SSH
**Context**: Files created by Claude Code hooks were visible inside the VM but not from the SSH terminal
@@ -696,16 +571,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-06-050126] nolint:goconst for trivial values normalizes magic strings
-
-**Context**: Found 5 callsites suppressing goconst with nolint:goconst trivial user input check for y/yes comparisons
-
-**Lesson**: Suppressing a linter with a trivial justification sets precedent for other agents to do the same. The fix (two constants) costs less than the accumulated tech debt.
-
-**Application**: Use config.ConfirmShort/config.ConfirmLong instead of suppressing goconst. Prefer constants over nolint directives.
-
----
-
## [2026-03-06-050125] Package-local err.go files invite broken windows from future agents
**Context**: Found err.go files in 5 CLI packages with heavily duplicated error constructors (errFileWrite, errMkdir, errZensicalNotFound repeated across packages)
@@ -766,26 +631,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-04-040211] nolint:errcheck in tests normalizes unchecked errors for agents
-
-**Context**: User flagged that suppressing errcheck in tests teaches the agent to spread the pattern to production code
-
-**Lesson**: Broken-window theory applies to lint suppressions. Agents learn from test code patterns. Use _ = f.Close() in a closure or check errors with t.Fatal — never suppress with nolint.
-
-**Application**: Handle all errors in test code the same as production: t.Fatal(err) for setup, defer func() { _ = f.Close() }() for best-effort cleanup.
-
----
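
A minimal sketch of the pattern both errcheck entries prescribe (file names are illustrative): setup errors abort immediately, cleanup errors are discarded explicitly in a closure, and no nolint directive appears anywhere.

```go
package main

import (
	"fmt"
	"os"
)

// demo mirrors the rule above: in a real test, the early returns
// would be t.Fatal(err); the deferred closures make the error
// discard explicit instead of hiding it behind nolint:errcheck.
func demo() error {
	f, err := os.CreateTemp("", "cfg-*.json")
	if err != nil {
		return err // setup error: fatal, never discarded
	}
	defer func() { _ = os.Remove(f.Name()) }() // best-effort cleanup
	defer func() { _ = f.Close() }()           // explicit discard, no nolint
	_, err = f.WriteString("{}")
	return err
}

func main() {
	if err := demo(); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ok")
}
```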
-
-## [2026-03-04-040209] golangci-lint v2 ignores inline nolint directives for some linters
-
-**Context**: nolint:errcheck and nolint:gosec comments were present but golangci-lint v2 still reported violations
-
-**Lesson**: In golangci-lint v2, use config-level exclusions.rules for gosec patterns (G204, G301, G306) rather than relying on inline nolint directives. For errcheck, fix the code instead of suppressing.
-
-**Application**: When adding new lint suppressions, prefer config-level rules for gosec false positives on safe paths/args; never suppress errcheck — handle the error.
-
----
-
## [2026-03-02-165039] Hook message registry test enforces exhaustive coverage of embedded templates
**Context**: Adding billing.txt to embedded assets without a registry entry caused TestRegistryCoversAllEmbeddedFiles to fail immediately
@@ -836,16 +681,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-01-194147] Key path changes ripple across 15+ doc files and 2 skills
-
-**Context**: Updating docs for the .context/.ctx.key → ~/.local/ctx/keys/ → ~/.ctx/.ctx.key migrations
-
-**Lesson**: Key path changes have a long documentation tail — recipes, references, getting-started, operations, CLI docs, and skills all carry path references. The worktree behavior flip (from a documented limitation to "works automatically") was the highest-value change per line edited. Simplifying from per-project slugs to a single global key eliminated more code and docs than the original migration added.
-
-**Application**: When moving a file path that appears in user-facing docs, grep broadly (not just code) and budget for 15+ file touches
-
----
-
## [2026-03-01-161459] Test HOME isolation is required for user-level path functions
**Context**: After adding ~/.ctx/.ctx.key as global key location, test suites wrote real files to the developer home directory
@@ -856,16 +691,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-01-144544] Skill enhancement is a documentation-heavy operation across 10+ files
-
-**Context**: Enhancing /ctx-journal-enrich-all to handle export-if-needed touched the skill, hook messages, fallback strings, 5 doc files, 2 Makefiles, and TASKS.md
-
-**Lesson**: Skill behavior changes ripple through hook messages, fallback strings in Go code, doc descriptions, and Makefile hints — all must stay synchronized
-
-**Application**: When modifying a skill's scope, grep for its name across the entire repo and update every description, not just the skill file itself
-
----
-
## [2026-03-01-133014] Task descriptions can be stale in reverse — implementation done but task not marked complete
**Context**: ctx recall sync task said 'command is not registered in Cobra' but the code was fully wired and all tests passed. The task description was stale.
@@ -876,16 +701,6 @@ DO NOT UPDATE FOR:
---
-## [2026-03-01-125807] Elevating private skills requires synchronized updates across 6 layers
-
-**Context**: Promoted 6 _ctx-* skills to bundled ctx-* plugin skills
-
-**Lesson**: Moving a skill from .claude/skills/ to internal/assets/claude/skills/ touches: (1) SKILL.md frontmatter name field, (2) internal cross-references between skills (slash command paths), (3) external cross-references in other skills and docs, (4) embed_test.go expected skill list, (5) recipe and reference docs that mention the old name, (6) plugin cache rebuild (`hack/plugin-reload.sh`) + session restart — Claude Code snapshots skills from `~/.claude/plugins/cache/` at startup, so new skills are invisible until the cache is refreshed. Also clean stale underscore-prefixed `Skill(_ctx-*)` entries from `.claude/settings.local.json`.
-
-**Application**: When promoting future skills, use grep -r /_ctx-{name} across the whole tree before declaring done. After code changes, run plugin-reload.sh and restart the session to verify the skill appears in autocomplete.
-
----
-
## [2026-03-01-124921] Model-to-window mapping requires ordered prefix matching
**Context**: Implementing modelContextWindow() for the three-tier context window fallback. Claude model IDs use nested prefixes (claude-sonnet-4-5 vs claude-sonnet-4-20250514).
@@ -896,26 +711,6 @@ DO NOT UPDATE FOR:
---
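
The ordered-matching requirement above can be sketched in a few lines (the window values are hypothetical; the real numbers live in modelContextWindow()): a slice checked in order, most specific prefix first, because a map would resolve the nested prefixes nondeterministically.

```go
package main

import (
	"fmt"
	"strings"
)

// Ordered most-specific-first: "claude-sonnet-4-5" must be tested
// before "claude-sonnet-4", or the longer model IDs match the wrong tier.
var windows = []struct {
	prefix string
	tokens int
}{
	{"claude-sonnet-4-5", 200000}, // hypothetical values
	{"claude-sonnet-4", 180000},
	{"claude-", 100000},
}

func modelContextWindow(model string) int {
	for _, w := range windows {
		if strings.HasPrefix(model, w.prefix) {
			return w.tokens
		}
	}
	return 0 // unknown model: caller applies its own fallback
}

func main() {
	// "claude-sonnet-4-20250514" does not start with "claude-sonnet-4-5",
	// so it correctly lands on the claude-sonnet-4 tier.
	fmt.Println(modelContextWindow("claude-sonnet-4-20250514"))
}
```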
-## [2026-03-01-112538] Removing embedded asset directories requires synchronized cleanup across 5+ layers
-
-**Context**: Deleting .context/tools/ deployment touched embed directive, asset functions, init logic, tests, config constants, Makefile targets, and docs — missing any one layer leaves dead code or build failures.
-
-**Lesson**: Embedded asset removal is a cross-cutting concern: embed directive → accessor functions → callers → tests → config constants → build targets → documentation. Work outward from the embed.
-
-**Application**: When removing an embedded asset category, use the grep-first approach (search for all references to the accessor functions and constants) before deleting anything.
-
----
-
-## [2026-03-01-102232] Absorbing shell scripts into Go commands creates a discoverability gap
-
-**Context**: Deleted make backup/backup-global/backup-all and make rc-dev/rc-base/rc-status targets when absorbing into ctx system backup and ctx config switch. The Makefile served as self-documenting discovery (make help).
-
-**Lesson**: When eliminating Makefile targets, the CLI reference page alone is not sufficient — contributor-facing docs (contributing.md) and command catalogs (common-workflows.md) must gain explicit entries to compensate.
-
-**Application**: For future hack/ absorptions (e.g. pad-import-ideas.sh, context-watch.sh), audit contributing.md, common-workflows.md CLI-Only table, and the CLI index page as part of the absorption checklist.
-
----
-
## [2026-03-01-095709] TASKS.md template checkbox syntax inside HTML comments is parsed by RegExTaskMultiline
**Context**: Template had example checkboxes (- [x], - [ ]) in HTML comments that the line-based regex matched as real tasks, causing TestArchiveCommand_NoCompletedTasks to fail
@@ -1180,16 +975,6 @@ DO NOT UPDATE FOR:
---
-## [2026-02-19-215200] Feature can be code-complete but invisible to users
-
-**Context**: ctx pad merge was fully implemented with 19 passing tests and binary support, but had zero coverage in user-facing docs (scratchpad.md, cli-reference.md, scratchpad-sync recipe). Only discoverable via --help.
-
-**Lesson**: Implementation completeness != user-facing completeness. A feature without docs is invisible to users who don't explore CLI help.
-
-**Application**: After implementing a new CLI subcommand, always check: feature page, cli-reference.md, relevant recipes, and zensical.toml nav (if new page).
-
----
-
## [2026-01-28-051426] IDE is already the UI
**Context**: Considering whether to build custom UI for .context/ files
@@ -1202,5 +987,6 @@ git integration, extensions - all free.
---
+
*Module-specific, niche, and historical learnings:
[learnings-reference.md](learnings-reference.md)*
diff --git a/.context/TASKS.md b/.context/TASKS.md
index e54809aa0..77df2400d 100644
--- a/.context/TASKS.md
+++ b/.context/TASKS.md
@@ -32,6 +32,8 @@ TASK STATUS LABELS:
GITNEXUS.md should be updated accordingly. The make target should remind
the user about it.
+- [ ] SMB mount path support: add `CTX_BACKUP_SMB_MOUNT_PATH` env var so `ctx backup` can use fstab/systemd automounts instead of requiring GVFS. Spec: specs/smb-mount-path-support.md #priority:medium #added:2026-04-04-010000
+
### Architecture Docs
- [ ] Publish architecture docs to docs/: copy ARCHITECTURE.md, DETAILED_DESIGN domain files, and CHEAT-SHEETS.md to docs/reference/. Sanitize intervention points into docs/contributing/. Exclude DANGER-ZONES.md and ARCHITECTURE-PRINCIPAL.md (internal only). Spec: specs/publish-architecture-docs.md #priority:medium #added:2026-04-03-150000
@@ -40,7 +42,6 @@ TASK STATUS LABELS:
### Code Cleanup Findings
-- [x] Add TestTypeFileConvention audit check: type definitions must live in types.go, not mixed into function files. Scan all non-test .go files for ast.TypeSpec declarations; flag any that appear in files not named types.go. Migrate violations. #priority:medium #added:2026-04-03-030033 #done:2026-04-03
- [ ] Extend flagbind helpers (IntFlag, DurationFlag, DurationFlagP, StringP, BoolP) and migrate ~50 call sites to unblock TestNoFlagBindOutsideFlagbind #added:2026-04-01-233250
@@ -52,21 +53,12 @@ TASK STATUS LABELS:
and types from cmd/ directories to core/. See grandfathered map in
compliance_test.go for the full list. #priority:medium #added:2026-03-31-005115
-- [x] Collect all exec.Commands under internal/exec. See
- Phase EXEC below for breakdown. — done, exec/{git,dep,gio,zensical}
- exist, no exec.Command calls remain outside internal/exec
- #done:2026-03-31
- [ ] PD.4.5: Update AGENT_PLAYBOOK.md — add generic "check available skills"
instruction #priority:medium #added:2026-03-25-203340
**PD.5 — Validate:**
-- [x] PD.5.2: Run `ctx init` on a clean directory — verify no
- `.context/prompts/` created. loop.md and skills checks are stale:
- loop.md was never a ctx init artifact (ctx loop generates on demand),
- skills deploy via plugin install, not ctx init.
- #priority:high #added:2026-03-25-203340 #done:2026-03-31
### Phase -3: DevEx
@@ -99,15 +91,6 @@ TASK STATUS LABELS:
.ctxrc or settings.json. Related: consolidation nudge hook
spec. #added:2026-03-23-223500
-- [x] Bug: check-version hook missing throttle touch on plugin
- version read error (run.go:70). When claude.PluginVersion()
- fails, the hook returns without touching the daily throttle
- marker, causing repeated checks on days when plugin.json is
- missing or corrupted. Fix: add
- internalIo.TouchFile(markerFile) before the early return.
- See docs/recipes/hook-sequence-diagrams.md check-version
- diagram which documents the expected behavior.
- #added:2026-03-23-162802 #done:2026-03-31
- [ ] Design UserPromptSubmit hook that runs go build and
surfaces compilation errors before the agent acts on stale
@@ -312,9 +295,6 @@ P0.4.10 task.
one file; types need to go to types.go per convention etc etc)
* Human: split err package into sub packages.
-- [x] Add Use* constants for all system subcommands — all 30 system
- subcommands already use cmd.UseSystem* constants
- #added:2026-03-21-092550 #done:2026-03-31
- [ ] Refactor site/cmd/feed: extract helpers and types to core/, make Run
public #added:2026-03-21-074859
@@ -460,15 +440,6 @@ Many call sites use `_ =` or `_, _ =` to discard errors without
any feedback. Some are legitimate (best-effort cleanup), most are
lazy escapes that hide failures.
-- [x] EH.0: Create central warning sink — `internal/log/warn/warn.go` with
- `var sink io.Writer = os.Stderr` and `func Warn(format string,
- args ...any)`.
- All stderr warnings (`fmt.Fprintf(os.Stderr, ...)`) route through this
- function. The `fmt.Fprintf` return error is handled once, centrally.
- The sink is swappable (tests use `io.Discard`, future: syslog, file).
- EH.2–EH.4 should use `log.Warn()` instead of raw `fmt.Fprintf`.
- DoD: `grep -rn 'fmt.Fprintf(os.Stderr' internal/` returns zero hits
- #priority:high #added:2026-03-15
- [ ] EH.1: Catalogue all silent error discards — recursive walk of `internal/`
for patterns: `_ = `, `_, _ = `, `//nolint:errcheck`, bare `return` after
@@ -603,34 +574,11 @@ Taxonomy (from prefix analysis):
state.go to types.go — file and type no longer exist, refactored away
#priority:low #added:2026-03-07-220825
-- [x] Cleanup internal/cli/system/core/wrapup.go: line 18 constant should go to
- config; make WrappedUpExpiry configurable via ctxrc — already done,
- wrap.ExpiryHours and wrap.Marker exist in config/wrap
- #priority:low #added:2026-03-07-220825 #done:2026-03-31
-- [x] Cleanup internal/cli/system/core/version.go: line 81 newline should come
- from config — already done, uses token.NewlineLF
- #priority:low #added:2026-03-07-220819 #done:2026-03-31
-- [x] Add taxonomy to internal/cli/system/core/ — currently an unstructured bag
- of files; group by domain (backup, hooks, session, knowledge, etc.)
- — already done, 20 domain subdirectories exist
- #priority:medium #added:2026-03-07-220819 #done:2026-03-31
-- [x] Cleanup internal/cli/system/core/version_drift.go: line 53 string
- formatting should use assets — file moved to core/drift/, now uses
- desc.Text and assets throughout
- #priority:medium #added:2026-03-07-220819 #done:2026-03-31
-- [x] Cleanup internal/cli/system/core/state.go: magic permissions (0o750),
- magic strings ('Context: ' prefix, etc.) — file moved to core/state/,
- magic values extracted to config
- #priority:medium #added:2026-03-07-220819 #done:2026-03-31
-- [x] Cleanup internal/cli/system/core/smb.go: errors should come from
- internal/err; lines 101, 116, 111 need assets text — file moved to
- core/archive/, errors routed through err package
- #priority:medium #added:2026-03-07-220819 #done:2026-03-31
- [ ] Make AutoPruneStaleDays configurable via ctxrc. Currently hardcoded to 7
days in config.AutoPruneStaleDays; add a ctxrc key (e.g., auto_prune_days) and
@@ -721,19 +669,11 @@ Taxonomy (from prefix analysis):
tables) #added:2026-03-06-190225
-- [x] Remove FlagNoColor and fatih/color dependency — replaced with plain
- output, dependency removed from go.mod
- #added:2026-03-06-182831 #done:2026-03-31
- [ ] Validate .ctxrc against ctxrc.schema.json at load time — schema is
embedded but never enforced, doctor does field-level checks without using
it #added:2026-03-06-174851
-- [x] Fix 3 CI compliance issues from PR #27 after merge: missing copyright
- header on internal/mcp/server_test.go, missing doc.go for internal/cli/mcp/,
- literal newlines in internal/mcp/resources.go and
- tools.go — all fixed, files moved to mcp/server/ with copyright
- #added:2026-03-06-141508 #done:2026-03-31
- [ ] Add PostToolUse session event capture. Append lightweight event records
(tool name, files touched, timestamp) to .context/state/session-events.jsonl
diff --git a/.context/archive/decisions-consolidated-2026-04-03.md b/.context/archive/decisions-consolidated-2026-04-03.md
new file mode 100644
index 000000000..0f89a5087
--- /dev/null
+++ b/.context/archive/decisions-consolidated-2026-04-03.md
@@ -0,0 +1,229 @@
+# Archived Decisions (consolidated 2026-04-03)
+
+Originals replaced by consolidated entries in DECISIONS.md.
+
+## Group: Output/write location
+
+## [2026-03-22-084509] No runtime pluralization — use singular/plural text key pairs
+
+**Status**: Accepted
+
+**Context**: Hardcoded English plural rules (+ s, y → ies) were scattered across format.Pluralize, padPluralize, and inline code — all i18n dead-ends
+
+**Decision**: No runtime pluralization — use singular/plural text key pairs
+
+**Rationale**: Different languages have vastly different plural rules. Complete sentence templates with embedded counts (time.minute-count '1 minute', time.minutes-count '%d minutes') let each locale define its own plural forms.
+
+**Consequence**: format.Pluralize and format.PluralWord are deleted. All plural output uses paired text keys with the count embedded in the template.
+
+---
+
+## [2026-03-21-084020] Output functions belong in write/, logic and types in core/
+
+**Status**: Accepted
+
+**Context**: PrintFeedReport was initially placed in cli/site/core/ but it calls cmd.Println — that's output formatting, not business logic
+
+**Decision**: Output functions belong in write/, logic and types in core/
+
+**Rationale**: The project taxonomy separates concerns: core/ owns domain logic, types, and helpers; write/ owns CLI output formatting that takes *cobra.Command for Println. Mixing them blurs the boundary and makes testing harder.
+
+**Consequence**: All functions that call cmd.Print/Println/Printf belong in the write/ package tree. core/ never imports cobra for output purposes.
+
+---
+
+
+## Group: YAML text externalization
+
+## [2026-04-03-133236] YAML text externalization justification is agent legibility, not i18n
+
+**Status**: Accepted
+
+**Context**: Principal analysis initially framed 879-key YAML externalization as a bet on i18n. Blog post review (v0.8.0) revealed the real justification: agent legibility (named DescKey constants as traversable graphs), drift prevention (TestDescKeyYAMLLinkage catches orphans mechanically), and completing the archaeology of finding all 879 scattered strings.
+
+**Decision**: YAML text externalization justification is agent legibility, not i18n
+
+**Rationale**: The v0.8.0 blog makes it explicit: finding strings is the hard part, translation is mechanical. The externalization already pays for itself through agent legibility and mechanical verification. i18n is a free downstream consequence, not the justification.
+
+**Consequence**: Future architecture analysis should frame externalization as already-justified investment. The 3-file ceremony (DescKey + YAML + write/err function) is the cost of agent-legible, drift-proof output — not speculative i18n prep.
+
+---
+
+## [2026-03-15-101336] TextDescKey exhaustive test verifies all 879 constants resolve to non-empty YAML values
+
+**Status**: Accepted
+
+**Context**: PR #42 merged with ~80 new MCP text keys but no test coverage for key-to-YAML mapping
+
+**Decision**: TextDescKey exhaustive test verifies all 879 constants resolve to non-empty YAML values
+
+**Rationale**: A single table-driven test parsing embed.go source catches typos and missing YAML entries at test time — no manual key list to maintain
+
+**Consequence**: New TextDescKey constants are automatically covered; orphaned keys fail CI
+
+---
+
+## [2026-03-15-040638] Split text.yaml into 6 domain files loaded via loadYAMLDir
+
+**Status**: Accepted
+
+**Context**: text.yaml grew to 1812 lines covering write, errors, mcp, doctor, hooks, and ui domains
+
+**Decision**: Split text.yaml into 6 domain files loaded via loadYAMLDir
+
+**Rationale**: Matches existing split pattern (commands.yaml, flags.yaml, examples.yaml); loadYAMLDir merges all files in commands/text/ transparently so TextDesc() API stays unchanged
+
+**Consequence**: New domain files must go into commands/text/; loadYAMLDir reads all .yaml in the directory at init time
+
+---
+
+## [2026-03-12-133007] Split commands.yaml into 4 domain files
+
+**Status**: Accepted
+
+**Context**: Single 2373-line YAML mixed commands, flags, text, and examples with inconsistent quoting
+
+**Decision**: Split commands.yaml into 4 domain files
+
+**Rationale**: Context is for humans — localization files should be human-readable block scalars. Separate files eliminate the underscore prefix namespace hack
+
+**Consequence**: 4 files (commands.yaml, flags.yaml, text.yaml, examples.yaml) with dedicated loaders in embed.go
+
+---
+
+## [2026-03-06-200257] Externalize all command descriptions to embedded YAML for i18n readiness — commands.yaml holds Short/Long for 105 commands plus flag descriptions, loaded via assets.CommandDesc() and assets.FlagDesc()
+
+**Status**: Accepted
+
+**Context**: Command descriptions were inline strings scattered across 105 cobra.Command definitions
+
+**Decision**: Externalize all command descriptions to embedded YAML for i18n readiness — commands.yaml holds Short/Long for 105 commands plus flag descriptions, loaded via assets.CommandDesc() and assets.FlagDesc()
+
+**Rationale**: Centralizing user-facing text in a single translatable file prepares for i18n without runtime cost (embedded at compile time)
+
+**Consequence**: System's 30 hidden hook subcommands excluded (not user-facing); flag descriptions use _flags.scope.name convention
+
+---
+
+
+## Group: Package taxonomy and code placement
+
+## [2026-03-06-200247] cmd/root + core taxonomy for all CLI packages — single-command packages use cmd/root/{cmd.go,run.go}, multi-subcommand packages use cmd/<subcommand>/{cmd.go,run.go}, shared helpers in core/
+
+**Status**: Accepted
+
+**Context**: 35 CLI packages had inconsistent flat structures mixing Cmd(), run logic, helpers, and types in the same directory
+
+**Decision**: cmd/root + core taxonomy for all CLI packages — single-command packages use cmd/root/{cmd.go,run.go}, multi-subcommand packages use cmd/<subcommand>/{cmd.go,run.go}, shared helpers in core/
+
+**Rationale**: Taxonomical symmetry: every package has the same predictable shape, making navigation instant and agent-friendly
+
+**Consequence**: cmd/ contains only cmd.go + run.go; helpers go to core/; 474 files changed in initial restructuring
+
+---
+
+## [2026-03-06-200227] Shared entry types and API live in internal/entry, not in CLI packages — domain types that multiple packages consume (mcp, watch, memory) belong in a domain package, not a CLI subpackage
+
+**Status**: Accepted
+
+**Context**: External consumers were importing cli/add for EntryParams/ValidateEntry/WriteEntry, creating a leaky abstraction
+
+**Decision**: Shared entry types and API live in internal/entry, not in CLI packages — domain types that multiple packages consume (mcp, watch, memory) belong in a domain package, not a CLI subpackage
+
+**Rationale**: Domain types in CLI packages force consumers to depend on CLI internals; internal/entry provides a clean boundary
+
+**Consequence**: entry aliases Params from add/core to avoid import cycle (entry imports add/core for insert logic); future work may move insert logic to entry to eliminate the cycle
+
+---
+
+## [2026-03-13-151954] Templates and user-facing text live in assets, structural constants stay in config
+
+**Status**: Accepted
+
+**Context**: Ongoing refactoring session moving Tpl* constants out of config/
+
+**Decision**: Templates and user-facing text live in assets, structural constants stay in config
+
+**Rationale**: config/ is for structural constants (paths, limits, regexes); assets/ is for templates, labels, and text that would need i18n. Clean separation of concerns
+
+**Consequence**: All tpl_entry.go, tpl_journal.go, tpl_loop.go, tpl_recall.go moved to assets/
+
+---
+
+
+## Group: Eager init over lazy loading
+
+## [2026-03-18-193631] Eager Init() for static embedded data instead of per-accessor sync.Once
+
+**Status**: Accepted
+
+**Context**: 4 sync.Once guards + 4 exported maps + 4 Load functions + a wrapper package for YAML that never mutates.
+
+**Decision**: Eager Init() for static embedded data instead of per-accessor sync.Once
+
+**Rationale**: Data is static and required at startup. sync.Once per accessor is cargo cult. One Init() in main.go is sufficient. Tests call Init() in TestMain.
+
+**Consequence**: Maps unexported, accessors are plain lookups, permissions and stopwords also loaded eagerly. Zero sync.Once remains in the lookup pipeline.
+
+---
+
+## [2026-03-16-104143] Explicit Init over package-level init() for resource lookup
+
+**Status**: Accepted
+
+**Context**: server/resource package used init() to silently build the URI lookup map
+
+**Decision**: Explicit Init over package-level init() for resource lookup
+
+**Rationale**: Implicit init hides startup dependencies, makes ordering unclear, and is harder to test. Explicit Init called from NewServer makes the dependency visible.
+
+**Consequence**: res.Init() called explicitly from NewServer before ToList(); no package-level side effects
+
+---
+
+
+## Group: Pure logic separation of concerns
+
+## [2026-03-15-230640] Pure-logic CompactContext with no I/O — callers own file writes and reporting
+
+**Status**: Accepted
+
+**Context**: MCP server and CLI compact command both implemented task compaction independently, with the MCP handler using a local WriteContextFile wrapper
+
+**Decision**: Pure-logic CompactContext with no I/O — callers own file writes and reporting
+
+**Rationale**: Separating pure logic from I/O lets both MCP (JSON-RPC responses) and CLI (cobra cmd.Println) callers control output and file writes. Eliminates duplication and the unnecessary mcp/server/fs package
+
+**Consequence**: tidy.CompactContext returns a CompactResult struct; callers iterate FileUpdates and write them. Archive logic stays in callers since MCP and CLI have different archive policies
+
+---
+
+## [2026-03-16-122033] Server methods only handle dispatch and I/O, not struct construction
+
+**Status**: Accepted
+
+**Context**: MCP server had ok/error/writeError as methods plus prompt builders that didn't use Server state — they just constructed response structs
+
+**Decision**: Server methods only handle dispatch and I/O, not struct construction
+
+**Rationale**: Methods that don't access receiver state hide their true dependencies and inflate the Server interface. Free functions make the dependency graph explicit and are independently testable.
+
+**Consequence**: New response helpers go in server/out, prompt builders in server/prompt. Server methods are limited to dispatch (handlePromptsGet) and I/O (writeJSON, emitNotification). Same principle applies to future tool/resource builders.
+
+---
+
+## [2026-03-23-003346] Pure-data param structs in entity — replace function pointers with text keys
+
+**Status**: Accepted
+
+**Context**: MergeParams had UpdateFn callback, DeployParams had ListErr/ReadErr function pointers — both smuggled side effects into data structs
+
+**Decision**: Pure-data param structs in entity — replace function pointers with text keys
+
+**Rationale**: Text keys are pure data, keep entity dependency-free, and the consuming function can do the dispatch itself
+
+**Consequence**: All cross-cutting param structs in entity must be function-pointer-free; I/O functions passed as direct parameters
+
+---
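
The function-pointer-to-text-key move above can be shown in a few lines (struct, key, and text table are hypothetical illustrations of the MergeParams case): the param struct carries pure data, and the consuming function does the dispatch itself.

```go
package main

import "fmt"

// Before: the param struct smuggled behavior in as a function pointer,
//   type MergeParams struct{ UpdateFn func(string) error }
// After: pure data — a text key the consumer resolves itself.
type MergeParams struct {
	NoticeKey string // hypothetical key, e.g. "pad.merge.updated"
}

// Hypothetical text table standing in for the embedded YAML lookup.
var texts = map[string]string{"pad.merge.updated": "scratchpad updated"}

// runMerge does the dispatch; MergeParams stays dependency-free.
func runMerge(p MergeParams) string {
	return texts[p.NoticeKey]
}

func main() {
	fmt.Println(runMerge(MergeParams{NoticeKey: "pad.merge.updated"}))
}
```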
+
+
diff --git a/.context/archive/learnings-consolidated-2026-04-03.md b/.context/archive/learnings-consolidated-2026-04-03.md
new file mode 100644
index 000000000..851ed7f74
--- /dev/null
+++ b/.context/archive/learnings-consolidated-2026-04-03.md
@@ -0,0 +1,295 @@
+# Archived Learnings (consolidated 2026-04-03)
+
+Originals replaced by consolidated entries in LEARNINGS.md.
+
+## Group: Subagent scope creep and cleanup
+
+## [2026-03-23-165610] Subagents rename functions and restructure code beyond their scope
+
+**Context**: Agents tasked with fixing em-dashes in comments also renamed exported functions, changed import aliases, and modified function signatures
+
+**Lesson**: Always diff-audit agent output for structural changes before accepting edits, even when the task is narrowly scoped
+
+**Application**: After any agent batch edit, run git diff --stat and scan for non-comment changes before staging
+
+---
+
+## [2026-03-14-180902] Subagents rename packages and modify unrelated files without being asked
+
+**Context**: ET agent renamed internal/eventlog/ to internal/log/ and modified 5+ caller files outside the internal/err/ scope
+
+**Lesson**: Always diff agent output against HEAD to catch scope creep before building; revert unrelated changes immediately
+
+**Application**: After any agent-driven refactor, run git diff --name-only HEAD and revert anything outside the intended scope before testing
+
+---
+
+## [2026-03-14-110750] Subagents reorganize file structure without being asked
+
+**Context**: Asked subagent to replace os.ReadFile callsites with validation wrappers. It also moved functions to new files, renamed them (ReadUserFile to SafeReadUserFile), and created a new internal/io package.
+
+**Lesson**: Subagents optimize for clean results and will restructure beyond stated scope — moved, renamed, and split files without being asked.
+
+**Application**: After subagent-driven refactors, always verify file organization matches intent. Audit for moved, renamed, and split files, not just the requested callsite changes.
+
+---
+
+## [2026-03-06-200319] Spawned agents reliably create new files but consistently fail to delete old ones
+
+**Context**: Multiple agent batches across cmd/ restructuring, color removal, and flag externalization left stale files, duplicate run.go, and unupdated parent imports
+
+**Lesson**: Agent cleanup is a known gap — budget 5-10 minutes for post-agent audit per batch
+
+**Application**: After every agent batch: grep for stale package declarations, check parent imports point to cmd/root not cmd/, verify old files are deleted
+
+---
+
+
+## Group: Bulk rename and replace_all hazards
+
+## [2026-03-20-232518] replace_all on short tokens like function names will also replace their definitions
+
+**Context**: Using replace_all to rename HumanAgo to format.DurationAgo also changed the func declaration line to func format.DurationAgo which is invalid Go
+
+**Lesson**: replace_all matches everywhere in the file including function definitions, not just call sites
+
+**Application**: When renaming functions, delete the old definition separately rather than using replace_all. Or use a more specific match pattern that excludes func declarations.
+
+---
+
+## [2026-03-15-230643] replace_all on short tokens like core. mangles aliased imports
+
+**Context**: Used Edit tool replace_all to change core. to tidy. across handle_tool.go
+
+**Lesson**: Short replacement tokens match inside longer identifiers — remindcore. became remindtidy. silently
+
+**Application**: When doing bulk renames, prefer targeted replacements or grep for collateral damage immediately after. Avoid replace_all on tokens shorter than the full identifier
+
+---
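The collateral-damage failure mode, and a word-boundary fix, can be reproduced in a few lines of Go. The identifiers mirror the ones in the entry; the helper names are illustrative:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// naiveRename shows the hazard: a short token matches inside
// longer identifiers too.
func naiveRename(src string) string {
	return strings.ReplaceAll(src, "core.", "tidy.")
}

// boundedRename anchors the match at a word boundary so only
// the bare identifier is rewritten.
func boundedRename(src string) string {
	re := regexp.MustCompile(`\bcore\.`)
	return re.ReplaceAllString(src, "tidy.")
}

func main() {
	src := "remindcore.Load(core.Config{})"
	fmt.Println(naiveRename(src))   // remindtidy.Load(tidy.Config{})
	fmt.Println(boundedRename(src)) // remindcore.Load(tidy.Config{})
}
```

Go's regexp package (RE2) has no lookbehind, but `\b` is enough here because `remindcore` has no word boundary before `core`.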
+
+## [2026-03-18-193602] Bulk sed on imports displaces aliased imports
+
+**Context**: Used sed to add desc import to 278 files. Files with aliased imports like ctxCfg config/ctx got the alias stolen by the new import line inserted before it.
+
+**Lesson**: sed insert-before-first-match does not understand Go import aliases. The alias attaches to whatever import line sed inserts, not the original target.
+
+**Application**: When bulk-adding imports, check for aliased imports first and handle them separately. Or use goimports if available.
+
+---
+
+
+## Group: Import cycles and package splits
+
+## [2026-03-22-220846] Types in god-object files create circular dependencies
+
+**Context**: hook/types.go had 15+ types from 8 domains; session importing hook for SessionTokenInfo created a cycle
+
+**Lesson**: Moving types to their owning domain package breaks import cycles
+
+**Application**: When a type is only used by one domain package, move it there. Check with grep before assuming a type is cross-cutting.
+
+---
+
+## [2026-03-18-193616] Tests in package X cannot import X/sub packages that import X back
+
+**Context**: embed_test.go in package assets kept importing read/ sub-packages that import assets.FS, creating cycles. Recurred 4 times this session.
+
+**Lesson**: Tests that exercise domain accessor packages must live in those packages, not in the parent. The parent test file can only use the shared resource (FS) directly.
+
+**Application**: When splitting functions from a parent package into sub-packages, move the corresponding tests too. Do not leave them in the parent.
+
+---
+
+## [2026-03-13-223108] Variable shadowing causes cascading test failures after package splits
+
+**Context**: Large refactoring moved constants from monolithic config package to sub-packages (dir, entry, file). Test files had local variables named dir, entry, file that shadowed the new package imports.
+
+**Lesson**: When splitting a package, audit test files for local variable names that collide with new package names. dir, file, entry are especially common Go variable names.
+
+**Application**: Before committing a package split, run go test ./... and check for undefined errors caused by variable shadowing.
+
+---
+
+## [2026-03-13-151951] Linter reverts import-only edits when references still use old package
+
+**Context**: Moving tpl_entry.go from config/entry to assets — linter kept reverting the import change
+
+**Lesson**: When moving constants between packages, change imports and all references in a single atomic write (use Write not incremental Edit), so the linter never sees an inconsistent state
+
+**Application**: For future package migrations, use full file rewrites when a linter is active
+
+---
+
+## [2026-03-06-200237] Import cycle avoidance: when package A imports package B for logic, B must own the shared types
+
+**Context**: Extracting entry.Params as a standalone struct in internal/entry created a cycle because entry/write.go imports add/core for AppendEntry
+
+**Lesson**: The package that provides implementation logic must own the types; the facade package aliases them
+
+**Application**: When extracting shared types from implementation packages, check the import direction first — the type lives where the logic lives
+
+---
+
+
+## Group: Lint suppression and gosec patterns
+
+## [2026-03-19-194942] Rename constants to avoid gosec G101 false positives
+
+**Context**: Constants named ColTokens, DriftPassed, StatusTokensFormat triggered gosec G101 credential detection. Suppressing with nolint or broadening .golangci.yml exclusion paths is fragile — paths change when files split.
+
+**Lesson**: Rename the constant to avoid the trigger word instead of suppressing the lint. Tokens→Usage, Passed→OK convey the same semantics without false positives. This is cleaner than nolint annotations or path-based exclusions that break on file reorganization.
+
+**Application**: When gosec flags a constant name, ask what the value semantically represents and rename to that. Do not add nolint, nosec, or path exclusions.
+
+---
+
+## [2026-03-06-050126] nolint:goconst for trivial values normalizes magic strings
+
+**Context**: Found 5 callsites suppressing goconst with nolint:goconst trivial user input check for y/yes comparisons
+
+**Lesson**: Suppressing a linter with a trivial justification sets precedent for other agents to do the same. The fix (two constants) costs less than the accumulated tech debt.
+
+**Application**: Use config.ConfirmShort/config.ConfirmLong instead of suppressing goconst. Prefer constants over nolint directives.
+
+---
+
+## [2026-03-04-040211] nolint:errcheck in tests normalizes unchecked errors for agents
+
+**Context**: User flagged that suppressing errcheck in tests teaches the agent to spread the pattern to production code
+
+**Lesson**: Broken-window theory applies to lint suppressions. Agents learn from test code patterns. Use _ = f.Close() in a closure or check errors with t.Fatal — never suppress with nolint.
+
+**Application**: Handle all errors in test code the same as production: t.Fatal(err) for setup, defer func() { _ = f.Close() }() for best-effort cleanup.
+
+---
+
+## [2026-03-04-040209] golangci-lint v2 ignores inline nolint directives for some linters
+
+**Context**: nolint:errcheck and nolint:gosec comments were present but golangci-lint v2 still reported violations
+
+**Lesson**: In golangci-lint v2, use config-level exclusions.rules for gosec patterns (G204, G301, G306) rather than relying on inline nolint directives. For errcheck, fix the code instead of suppressing.
+
+**Application**: When adding new lint suppressions, prefer config-level rules for gosec false positives on safe paths/args; never suppress errcheck — handle the error.
+
+---
+
+
+## Group: Skill lifecycle and promotion
+
+## [2026-03-14-093757] Internal skill rename requires updates across 6+ layers
+
+**Context**: Renamed ctx-alignment-audit to _ctx-alignment-audit. The allow list test in embed_test.go failed because it iterates all bundled skills and expects each in the allow list.
+
+**Lesson**: The allow list test needed a strings.HasPrefix(name, "_") skip for internal skills. This was not obvious until tests ran.
+
+**Application**: When converting public to internal skills, audit: allow.txt, embed_test.go allow list test, reference/skills.md, all recipe docs referencing the skill, contributing.md dev-only skills table, and permissions docs.
+
+---
+
+## [2026-03-13-223110] Skills without a trigger mechanism are dead code
+
+**Context**: ctx-context-monitor was a skill documenting how to respond to hook output, but no hook or agent ever loaded it. The hook output already contained sufficient instructions.
+
+**Lesson**: A skill only enters the agent context when explicitly invoked via /skill-name. If the description says not user-invocable and no mechanism loads it automatically, it is unreachable.
+
+**Application**: Audit skills for reachability. If nothing triggers the skill, either add a trigger or delete it.
+
+---
+
+## [2026-03-01-125807] Elevating private skills requires synchronized updates across 6 layers
+
+**Context**: Promoted 6 _ctx-* skills to bundled ctx-* plugin skills
+
+**Lesson**: Moving a skill from .claude/skills/ to internal/assets/claude/skills/ touches: (1) SKILL.md frontmatter name field, (2) internal cross-references between skills (slash command paths), (3) external cross-references in other skills and docs, (4) embed_test.go expected skill list, (5) recipe and reference docs that mention the old name, (6) plugin cache rebuild (`hack/plugin-reload.sh`) + session restart — Claude Code snapshots skills from `~/.claude/plugins/cache/` at startup, so new skills are invisible until the cache is refreshed. Also clean stale underscore-prefixed `Skill(_ctx-*)` entries from `.claude/settings.local.json`.
+
+**Application**: When promoting future skills, use grep -r /_ctx-{name} across the whole tree before declaring done. After code changes, run plugin-reload.sh and restart the session to verify the skill appears in autocomplete.
+
+---
+
+## [2026-03-01-144544] Skill enhancement is a documentation-heavy operation across 10+ files
+
+**Context**: Enhancing /ctx-journal-enrich-all to handle export-if-needed touched the skill, hook messages, fallback strings, 5 doc files, 2 Makefiles, and TASKS.md
+
+**Lesson**: Skill behavior changes ripple through hook messages, fallback strings in Go code, doc descriptions, and Makefile hints — all must stay synchronized
+
+**Application**: When modifying a skill's scope, grep for its name across the entire repo and update every description, not just the skill file itself
+
+---
+
+
+## Group: Cross-cutting change ripple
+
+## [2026-03-01-194147] Key path changes ripple across 15+ doc files and 2 skills
+
+**Context**: Updating docs for the .context/.ctx.key → ~/.local/ctx/keys/ → ~/.ctx/.ctx.key migrations
+
+**Lesson**: Key path changes have a long documentation tail: recipes, references, getting-started, operations, CLI docs, and skills all carry path references. The worktree behavior flip (from documented limitation to working automatically) was the highest-value change per line edited. Simplifying from per-project slugs to a single global key eliminated more code and docs than the original migration added.
+
+**Application**: When moving a file path that appears in user-facing docs, grep broadly (not just code) and budget for 15+ file touches
+
+---
+
+## [2026-03-01-112538] Removing embedded asset directories requires synchronized cleanup across 5+ layers
+
+**Context**: Deleting .context/tools/ deployment touched embed directive, asset functions, init logic, tests, config constants, Makefile targets, and docs — missing any one layer leaves dead code or build failures.
+
+**Lesson**: Embedded asset removal is a cross-cutting concern: embed directive → accessor functions → callers → tests → config constants → build targets → documentation. Work outward from the embed.
+
+**Application**: When removing an embedded asset category, use the grep-first approach (search for all references to the accessor functions and constants) before deleting anything.
+
+---
+
+## [2026-03-01-102232] Absorbing shell scripts into Go commands creates a discoverability gap
+
+**Context**: Deleted make backup/backup-global/backup-all and make rc-dev/rc-base/rc-status targets when absorbing into ctx system backup and ctx config switch. The Makefile served as self-documenting discovery (make help).
+
+**Lesson**: When eliminating Makefile targets, the CLI reference page alone is not sufficient — contributor-facing docs (contributing.md) and command catalogs (common-workflows.md) must gain explicit entries to compensate.
+
+**Application**: For future hack/ absorptions (e.g. pad-import-ideas.sh, context-watch.sh), audit contributing.md, common-workflows.md CLI-Only table, and the CLI index page as part of the absorption checklist.
+
+---
+
+## [2026-02-19-215200] Feature can be code-complete but invisible to users
+
+**Context**: ctx pad merge was fully implemented with 19 passing tests and binary support, but had zero coverage in user-facing docs (scratchpad.md, cli-reference.md, scratchpad-sync recipe). Only discoverable via --help.
+
+**Lesson**: Implementation completeness != user-facing completeness. A feature without docs is invisible to users who don't explore CLI help.
+
+**Application**: After implementing a new CLI subcommand, always check: feature page, cli-reference.md, relevant recipes, and zensical.toml nav (if new page).
+
+---
+
+
+## Group: Dead code detection
+
+## [2026-03-30-003720] internal/cli/recall/ was dead code — never registered in bootstrap
+
+**Context**: The entire recall CLI package existed with tests but was never wired into the command tree. Journal consumed it but nobody deleted the ghost
+
+**Lesson**: Dead package detection requires checking bootstrap registration, not just build success. A package can build and test green while being completely unreachable
+
+**Application**: Add a compliance test that verifies all cli/ packages are registered in bootstrap
+
+---
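The proposed compliance test reduces to a set difference. This is a minimal sketch: the real test would read `internal/cli/` subdirectories from disk and the registered commands from bootstrap, so both inputs here are hypothetical stand-ins:

```go
package main

import "fmt"

// unregistered returns the packages found on disk that are
// missing from the bootstrap registration set.
func unregistered(onDisk []string, registered map[string]bool) []string {
	var missing []string
	for _, pkg := range onDisk {
		if !registered[pkg] {
			missing = append(missing, pkg)
		}
	}
	return missing
}

func main() {
	onDisk := []string{"journal", "recall", "doctor"}
	registered := map[string]bool{"journal": true, "doctor": true}
	fmt.Println(unregistered(onDisk, registered)) // [recall]
}
```

A package like `recall` that builds and tests green but never appears in the registration set fails the check.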
+
+## [2026-03-25-173339] Dead files accumulate when nothing consumes them
+
+**Context**: IMPLEMENTATION_PLAN.md and PROMPT.md were created by ctx init but no agent, hook, or skill ever read them
+
+**Lesson**: Before adding a file to init scaffolding, verify there is at least one consumer. Periodically audit what init creates vs what the system reads.
+
+**Application**: The prompt deprecation spec documents the reasoning as a paper trail for future removals.
+
+---
+
+## [2026-03-15-101346] Delete legacy code instead of maintaining it — MigrateKeyFile had 5 callers and test coverage but zero users
+
+**Context**: Started by adding constants for legacy key names, then realized nobody uses legacy keys
+
+**Lesson**: When touching legacy compat code, first ask whether the legacy path has real users. If not, delete it entirely rather than improving it
+
+**Application**: Apply to any backward-compat shim: check actual usage before investing in maintenance
+
+---
+
+
diff --git a/.context/archive/tasks-2026-04-03.md b/.context/archive/tasks-2026-04-03.md
new file mode 100644
index 000000000..5210b4d81
--- /dev/null
+++ b/.context/archive/tasks-2026-04-03.md
@@ -0,0 +1,64 @@
+# Archived Tasks - 2026-04-03
+
+- [x] Add TestTypeFileConvention audit check: type definitions must live in types.go, not mixed into function files. Scan all non-test .go files for ast.TypeSpec declarations; flag any that appear in files not named types.go. Migrate violations. #priority:medium #added:2026-04-03-030033 #done:2026-04-03
+- [x] Collect all exec.Commands under internal/exec. See
+ Phase EXEC below for breakdown. — done, exec/{git,dep,gio,zensical}
+ exist, no exec.Command calls remain outside internal/exec
+ #done:2026-03-31
+- [x] PD.5.2: Run `ctx init` on a clean directory — verify no
+ `.context/prompts/` created. loop.md and skills checks are stale:
+ loop.md was never a ctx init artifact (ctx loop generates on demand),
+ skills deploy via plugin install, not ctx init.
+ #priority:high #added:2026-03-25-203340 #done:2026-03-31
+- [x] Bug: check-version hook missing throttle touch on plugin
+ version read error (run.go:70). When claude.PluginVersion()
+ fails, the hook returns without touching the daily throttle
+ marker, causing repeated checks on days when plugin.json is
+ missing or corrupted. Fix: add
+ internalIo.TouchFile(markerFile) before the early return.
+ See docs/recipes/hook-sequence-diagrams.md check-version
+ diagram which documents the expected behavior.
+ #added:2026-03-23-162802 #done:2026-03-31
+- [x] Add Use* constants for all system subcommands — all 30 system
+ subcommands already use cmd.UseSystem* constants
+ #added:2026-03-21-092550 #done:2026-03-31
+- [x] EH.0: Create central warning sink — `internal/log/warn/warn.go` with
+ `var sink io.Writer = os.Stderr` and `func Warn(format string,
+ args ...any)`.
+ All stderr warnings (`fmt.Fprintf(os.Stderr, ...)`) route through this
+ function. The `fmt.Fprintf` return error is handled once, centrally.
+ The sink is swappable (tests use `io.Discard`, future: syslog, file).
+ EH.2–EH.4 should use `log.Warn()` instead of raw `fmt.Fprintf`.
+ DoD: `grep -rn 'fmt.Fprintf(os.Stderr' internal/` returns zero hits
+ #priority:high #added:2026-03-15
+- [x] Cleanup internal/cli/system/core/wrapup.go: line 18 constant should go to
+ config; make WrappedUpExpiry configurable via ctxrc — already done,
+ wrap.ExpiryHours and wrap.Marker exist in config/wrap
+ #priority:low #added:2026-03-07-220825 #done:2026-03-31
+- [x] Cleanup internal/cli/system/core/version.go: line 81 newline should come
+ from config — already done, uses token.NewlineLF
+ #priority:low #added:2026-03-07-220819 #done:2026-03-31
+- [x] Add taxonomy to internal/cli/system/core/ — currently an unstructured bag
+ of files; group by domain (backup, hooks, session, knowledge, etc.)
+ — already done, 20 domain subdirectories exist
+ #priority:medium #added:2026-03-07-220819 #done:2026-03-31
+- [x] Cleanup internal/cli/system/core/version_drift.go: line 53 string
+ formatting should use assets — file moved to core/drift/, now uses
+ desc.Text and assets throughout
+ #priority:medium #added:2026-03-07-220819 #done:2026-03-31
+- [x] Cleanup internal/cli/system/core/state.go: magic permissions (0o750),
+ magic strings ('Context: ' prefix, etc.) — file moved to core/state/,
+ magic values extracted to config
+ #priority:medium #added:2026-03-07-220819 #done:2026-03-31
+- [x] Cleanup internal/cli/system/core/smb.go: errors should come from
+ internal/err; lines 101, 116, 111 need assets text — file moved to
+ core/archive/, errors routed through err package
+ #priority:medium #added:2026-03-07-220819 #done:2026-03-31
+- [x] Remove FlagNoColor and fatih/color dependency — replaced with plain
+ output, dependency removed from go.mod
+ #added:2026-03-06-182831 #done:2026-03-31
+- [x] Fix 3 CI compliance issues from PR #27 after merge: missing copyright
+ header on internal/mcp/server_test.go, missing doc.go for internal/cli/mcp/,
+ literal newlines in internal/mcp/resources.go and
+ tools.go — all fixed, files moved to mcp/server/ with copyright
+ #added:2026-03-06-141508 #done:2026-03-31
diff --git a/internal/assets/commands/text/ui.yaml b/internal/assets/commands/text/ui.yaml
index b36285a2f..9bb61ec7f 100644
--- a/internal/assets/commands/text/ui.yaml
+++ b/internal/assets/commands/text/ui.yaml
@@ -617,3 +617,57 @@ write.trigger-context:
short: "Context:\n%s"
write.trigger-err-line:
short: ' %s'
+
+write.setup-cursor-head:
+ short: 'Cursor integration:'
+write.setup-cursor-run:
+ short: ' Run: ctx setup cursor --write'
+write.setup-cursor-mcp:
+ short: ' Creates: .cursor/mcp.json (MCP server config)'
+write.setup-cursor-sync:
+ short: ' Syncs: .cursor/rules/ (steering files)'
+write.setup-kiro-head:
+ short: 'Kiro integration:'
+write.setup-kiro-run:
+ short: ' Run: ctx setup kiro --write'
+write.setup-kiro-mcp:
+ short: ' Creates: .kiro/settings/mcp.json (MCP server config)'
+write.setup-kiro-sync:
+ short: ' Syncs: .kiro/steering/ (steering files)'
+write.setup-cline-head:
+ short: 'Cline integration:'
+write.setup-cline-run:
+ short: ' Run: ctx setup cline --write'
+write.setup-cline-mcp:
+ short: ' Creates: .vscode/mcp.json (MCP server config)'
+write.setup-cline-sync:
+ short: ' Syncs: .clinerules/ (steering files)'
+write.setup-no-steering-to-sync:
+ short: ' No steering files to sync (run ctx steering init first)'
+
+write.skill-no-skills:
+ short: 'No skills installed.'
+
+write.steering-no-files:
+ short: 'No steering files found.'
+write.steering-no-match:
+ short: 'No steering files match the given prompt.'
+
+write.trigger-no-hooks:
+ short: 'No hooks found.'
+write.trigger-errors-hdr:
+ short: 'Errors:'
+write.trigger-no-output:
+ short: 'No output from hooks.'
+
+mcp.hooks-disabled:
+ short: 'Hooks disabled.'
+mcp.session-start-ok:
+ short: 'Session start hooks executed. No additional context.'
+mcp.session-end-ok:
+ short: 'Session end hooks executed.'
+
+mcp.steering-no-files:
+ short: 'No steering files found.'
+mcp.steering-no-match:
+ short: 'No matching steering files.'
diff --git a/internal/assets/hooks/messages/doc.go b/internal/assets/hooks/messages/doc.go
index 21d4e5ac5..4c681ad55 100644
--- a/internal/assets/hooks/messages/doc.go
+++ b/internal/assets/hooks/messages/doc.go
@@ -9,6 +9,6 @@
// The embedded registry.yaml maps each hook+variant pair to a
// category and description. [Registry] returns all entries,
// [Lookup] finds a specific one, and [Variants] enumerates
-// the available names. [CategoryCtxSpecific] marks entries
-// internal to ctx.
+// the available names. Category constants live in
+// config/hook (CategoryCustomizable, CategoryCtxSpecific).
package messages
diff --git a/internal/assets/hooks/messages/hooks.go b/internal/assets/hooks/messages/hooks.go
new file mode 100644
index 000000000..d3b6a8e34
--- /dev/null
+++ b/internal/assets/hooks/messages/hooks.go
@@ -0,0 +1,24 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package messages
+
+// hooks returns a deduplicated list of hook names in
+// the registry.
+//
+// Returns:
+// - []string: Hook names in alphabetical order
+func hooks() []string {
+ seen := make(map[string]bool)
+ var result []string
+ for _, info := range Registry() {
+ if !seen[info.Hook] {
+ seen[info.Hook] = true
+ result = append(result, info.Hook)
+ }
+ }
+ return result
+}
diff --git a/internal/assets/hooks/messages/registry.go b/internal/assets/hooks/messages/registry.go
index 2d0ca0e71..78cca260b 100644
--- a/internal/assets/hooks/messages/registry.go
+++ b/internal/assets/hooks/messages/registry.go
@@ -17,14 +17,6 @@ import (
errParser "github.com/ActiveMemory/ctx/internal/err/parser"
)
-// CategoryCustomizable marks messages intended for
-// project-specific customization.
-const CategoryCustomizable = "customizable"
-
-// CategoryCtxSpecific marks messages specific to ctx's
-// own development workflow.
-const CategoryCtxSpecific = "ctx-specific"
-
// registryOnce, registryData, and registryErr cache the parsed hook
// message registry loaded once via sync.Once.
var (
@@ -89,22 +81,6 @@ func Lookup(hookName, variant string) *HookMessageInfo {
return nil
}
-// Hooks returns a deduplicated list of hook names in the registry.
-//
-// Returns:
-// - []string: Hook names in alphabetical order
-func Hooks() []string {
- seen := make(map[string]bool)
- var hooks []string
- for _, info := range Registry() {
- if !seen[info.Hook] {
- seen[info.Hook] = true
- hooks = append(hooks, info.Hook)
- }
- }
- return hooks
-}
-
// Variants returns the variant names for a given hook.
//
// Parameters:
diff --git a/internal/assets/hooks/messages/registry_test.go b/internal/assets/hooks/messages/registry_test.go
index 6ab0f3777..95d5fc250 100644
--- a/internal/assets/hooks/messages/registry_test.go
+++ b/internal/assets/hooks/messages/registry_test.go
@@ -6,7 +6,11 @@
package messages
-import "testing"
+import (
+ "testing"
+
+ cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
+)
func TestRegistryCount(t *testing.T) {
entries := Registry()
@@ -30,8 +34,8 @@ func TestRegistryYAMLParses(t *testing.T) {
if entry.Variant == "" {
t.Errorf("entry %d: empty variant", i)
}
- validCategory := entry.Category == CategoryCustomizable ||
- entry.Category == CategoryCtxSpecific
+ validCategory := entry.Category == cfgHook.CategoryCustomizable ||
+ entry.Category == cfgHook.CategoryCtxSpecific
if !validCategory {
t.Errorf("entry %d (%s/%s): invalid category %q",
i, entry.Hook, entry.Variant, entry.Category)
@@ -48,8 +52,8 @@ func TestLookupKnownEntry(t *testing.T) {
if info == nil {
t.Fatal("Lookup(check-persistence, nudge) = nil, want non-nil")
}
- if info.Category != CategoryCustomizable {
- t.Errorf("category = %q, want %q", info.Category, CategoryCustomizable)
+ if info.Category != cfgHook.CategoryCustomizable {
+ t.Errorf("category = %q, want %q", info.Category, cfgHook.CategoryCustomizable)
}
if info.Description != "Context persistence nudge" {
t.Errorf("description = %q, want %q",
@@ -69,7 +73,7 @@ func TestLookupUnknown(t *testing.T) {
}
func TestHooksReturnsUniqueNames(t *testing.T) {
- hooks := Hooks()
+ hooks := hooks()
if len(hooks) == 0 {
t.Fatal("Hooks() returned empty list")
}
diff --git a/internal/assets/tpl/tpl_steering.go b/internal/assets/tpl/tpl_steering.go
new file mode 100644
index 000000000..62e38628f
--- /dev/null
+++ b/internal/assets/tpl/tpl_steering.go
@@ -0,0 +1,55 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package tpl
+
+// Foundation steering file names.
+const (
+ // SteeringNameProduct is the name for the product context file.
+ SteeringNameProduct = "product"
+ // SteeringNameTech is the name for the technology stack file.
+ SteeringNameTech = "tech"
+ // SteeringNameStructure is the name for the project structure file.
+ SteeringNameStructure = "structure"
+ // SteeringNameWorkflow is the name for the development workflow file.
+ SteeringNameWorkflow = "workflow"
+)
+
+// Foundation steering file descriptions.
+const (
+ // SteeringDescProduct describes the product context file.
+ SteeringDescProduct = "Product context, goals, and target users"
+ // SteeringDescTech describes the technology stack file.
+ SteeringDescTech = "Technology stack, constraints, " +
+ "and dependencies"
+ // SteeringDescStructure describes the project structure file.
+ SteeringDescStructure = "Project structure and " +
+ "directory conventions"
+ // SteeringDescWorkflow describes the development workflow file.
+ SteeringDescWorkflow = "Development workflow and process rules"
+)
+
+// Foundation steering file body templates.
+const (
+ // SteeringBodyProduct is the body for the product context file.
+ SteeringBodyProduct = "# Product Context\n\n" +
+ "Describe the product, its goals, " +
+ "and target users.\n"
+ // SteeringBodyTech is the body for the technology stack file.
+ SteeringBodyTech = "# Technology Stack\n\n" +
+ "Describe the technology stack, " +
+ "constraints, and key dependencies.\n"
+ // SteeringBodyStructure is the body for the project structure
+ // file.
+ SteeringBodyStructure = "# Project Structure\n\n" +
+ "Describe the project layout " +
+ "and directory conventions.\n"
+ // SteeringBodyWorkflow is the body for the development workflow
+ // file.
+ SteeringBodyWorkflow = "# Development Workflow\n\n" +
+ "Describe the development workflow, " +
+ "branching strategy, and process rules.\n"
+)
diff --git a/internal/assets/tpl/tpl_trigger.go b/internal/assets/tpl/tpl_trigger.go
new file mode 100644
index 000000000..969017bbd
--- /dev/null
+++ b/internal/assets/tpl/tpl_trigger.go
@@ -0,0 +1,32 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package tpl
+
+// Shell script template for new hook scripts created by
+// ctx hook add.
+const (
+ // TriggerScript is the bash hook script template.
+ // Args: name, hookType.
+ TriggerScript = `#!/usr/bin/env bash
+# Hook: %s
+# Type: %s
+# Created by: ctx hook add
+
+set -euo pipefail
+
+INPUT=$(cat)
+
+# Parse input fields
+HOOK_TYPE=$(echo "$INPUT" | jq -r '.hookType')
+TOOL=$(echo "$INPUT" | jq -r '.tool // empty')
+
+# Your hook logic here
+
+# Return output
+echo '{"cancel": false, "context": "", "message": ""}'
+`
+)
diff --git a/internal/audit/dead_exports_test.go b/internal/audit/dead_exports_test.go
index 4515ffab3..723681781 100644
--- a/internal/audit/dead_exports_test.go
+++ b/internal/audit/dead_exports_test.go
@@ -36,7 +36,7 @@ import (
// positives. Keep this list small: prefer eliminating
// the export over adding it here.
var testOnlyExports = map[string]bool{
- "github.com/ActiveMemory/ctx/internal/assets/hooks/messages.CategoryCustomizable": true,
+ "github.com/ActiveMemory/ctx/internal/config/hook.CategoryCustomizable": true,
"github.com/ActiveMemory/ctx/internal/assets/hooks/messages.Hooks": true,
"github.com/ActiveMemory/ctx/internal/assets/hooks/messages.RegistryError": true,
"github.com/ActiveMemory/ctx/internal/cli/initialize/core/vscode.CreateVSCodeArtifacts": true,
diff --git a/internal/audit/magic_strings_test.go b/internal/audit/magic_strings_test.go
index 362f5f007..bf6e8c93c 100644
--- a/internal/audit/magic_strings_test.go
+++ b/internal/audit/magic_strings_test.go
@@ -80,10 +80,14 @@ func TestNoMagicStrings(t *testing.T) {
return true
}
- if isConstDef(file, lit) ||
- isVarDef(file, lit) {
- return true
- }
+ // Const/var definitions in exempt packages
+ // are already skipped (line 61). Outside
+ // those packages, string constants are
+ // magic strings that belong in config/.
+ //
+ // DO NOT re-add a blanket isConstDef
+ // exemption. It masks constants defined
+ // in the wrong package.
if isStructTag(file, lit) {
return true
diff --git a/internal/bootstrap/cmd.go b/internal/bootstrap/cmd.go
index 9eaf19f5d..0f74932e0 100644
--- a/internal/bootstrap/cmd.go
+++ b/internal/bootstrap/cmd.go
@@ -14,6 +14,7 @@ import (
"github.com/spf13/cobra"
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ cfgBootstrap "github.com/ActiveMemory/ctx/internal/config/bootstrap"
"github.com/ActiveMemory/ctx/internal/config/cli"
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
embedFlag "github.com/ActiveMemory/ctx/internal/config/embed/flag"
@@ -30,7 +31,7 @@ import (
// version is set at build time via ldflags:
//
// -X github.com/ActiveMemory/ctx/internal/bootstrap.version=$(cat VERSION)
-var version = "dev"
+var version = cfgBootstrap.DefaultVersion
// RootCmd creates and returns the root cobra command for the ctx CLI.
//
diff --git a/internal/cli/dep/core/python/python.go b/internal/cli/dep/core/python/python.go
index 912da600e..eeb7facd3 100644
--- a/internal/cli/dep/core/python/python.go
+++ b/internal/cli/dep/core/python/python.go
@@ -20,14 +20,13 @@ import (
ctxLog "github.com/ActiveMemory/ctx/internal/log/warn"
)
-// Ecosystem is the ecosystem label for Python projects.
-const Ecosystem = "python"
-
// Builder implements GraphBuilder for Python projects.
type Builder struct{}
// Name returns the ecosystem label.
-func (p *Builder) Name() string { return Ecosystem }
+func (p *Builder) Name() string {
+ return cfgDep.EcosystemPython
+}
// Detect returns true if requirements.txt or pyproject.toml
// exists.
@@ -305,7 +304,7 @@ func ParsePyprojectDeps(
); idx > 0 {
name := strings.TrimSpace(trimmed[:idx])
lower := strings.ToLower(name)
- if lower == Ecosystem ||
+ if lower == cfgDep.EcosystemPython ||
cfgDep.PyMetaKeys[lower] {
continue
}
diff --git a/internal/cli/dep/core/rust/rust.go b/internal/cli/dep/core/rust/rust.go
index 73533c0a3..edd14731f 100644
--- a/internal/cli/dep/core/rust/rust.go
+++ b/internal/cli/dep/core/rust/rust.go
@@ -16,14 +16,13 @@ import (
execDep "github.com/ActiveMemory/ctx/internal/exec/dep"
)
-// Ecosystem is the ecosystem label for Rust projects.
-const Ecosystem = "rust"
-
// Builder implements GraphBuilder for Rust projects.
type Builder struct{}
// Name returns the ecosystem label.
-func (r *Builder) Name() string { return Ecosystem }
+func (r *Builder) Name() string {
+ return cfgDep.EcosystemRust
+}
// Detect returns true if Cargo.toml exists in the current
// directory.
diff --git a/internal/cli/doctor/core/check/check.go b/internal/cli/doctor/core/check/check.go
index 5e439fd3e..3cf71e5f0 100644
--- a/internal/cli/doctor/core/check/check.go
+++ b/internal/cli/doctor/core/check/check.go
@@ -25,6 +25,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/regex"
"github.com/ActiveMemory/ctx/internal/config/reminder"
"github.com/ActiveMemory/ctx/internal/config/stats"
+ cfgSysinfo "github.com/ActiveMemory/ctx/internal/config/sysinfo"
cfgToken "github.com/ActiveMemory/ctx/internal/config/token"
"github.com/ActiveMemory/ctx/internal/context/load"
"github.com/ActiveMemory/ctx/internal/context/token"
@@ -623,7 +624,7 @@ func AddResourceResults(
snap.Memory.TotalBytes,
text.DescKeyDoctorResourceMemoryFormat,
doctor.CheckResourceMemory,
- sysinfo.ResourceMemory,
+ cfgSysinfo.ResourceMemory,
},
{
snap.Memory.Supported,
@@ -631,7 +632,7 @@ func AddResourceResults(
snap.Memory.SwapTotalBytes,
text.DescKeyDoctorResourceSwapFormat,
doctor.CheckResourceSwap,
- sysinfo.ResourceSwap,
+ cfgSysinfo.ResourceSwap,
},
{
snap.Disk.Supported,
@@ -639,7 +640,7 @@ func AddResourceResults(
snap.Disk.TotalBytes,
text.DescKeyDoctorResourceDiskFormat,
doctor.CheckResourceDisk,
- sysinfo.ResourceDisk,
+ cfgSysinfo.ResourceDisk,
},
}
for _, bc := range byteChecks {
@@ -669,7 +670,7 @@ func AddResourceResults(
Name: doctor.CheckResourceLoad,
Category: doctor.CategoryResources,
Status: SeverityToStatus(
- sevMap[sysinfo.ResourceLoad],
+ sevMap[cfgSysinfo.ResourceLoad],
),
Message: msg,
})
diff --git a/internal/cli/drift/core/fix/fix.go b/internal/cli/drift/core/fix/fix.go
index 63f763ad6..f1d50cdd7 100644
--- a/internal/cli/drift/core/fix/fix.go
+++ b/internal/cli/drift/core/fix/fix.go
@@ -17,6 +17,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/template"
"github.com/ActiveMemory/ctx/internal/config/archive"
cfgCtx "github.com/ActiveMemory/ctx/internal/config/ctx"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/fs"
"github.com/ActiveMemory/ctx/internal/config/marker"
@@ -67,7 +68,7 @@ func Apply(
for _, issue := range report.Warnings {
switch issue.Type {
- case drift.IssueStaleness:
+ case cfgDrift.IssueStaleness:
if fixErr := Staleness(cmd, ctx); fixErr != nil {
result.Errors = append(result.Errors,
fmt.Sprintf(
@@ -79,7 +80,7 @@ func Apply(
result.Fixed++
}
- case drift.IssueMissing:
+ case cfgDrift.IssueMissing:
if fixErr := MissingFile(issue.File); fixErr != nil {
result.Errors = append(result.Errors,
fmt.Sprintf(
@@ -91,20 +92,20 @@ func Apply(
result.Fixed++
}
- case drift.IssueDeadPath:
+ case cfgDrift.IssueDeadPath:
writeDrift.SkipDeadPath(
cmd, issue.File, issue.Line, issue.Path,
)
result.Skipped++
- case drift.IssueStaleAge:
+ case cfgDrift.IssueStaleAge:
writeDrift.SkipStaleAge(cmd, issue.File)
result.Skipped++
}
}
for _, issue := range report.Violations {
- if issue.Type == drift.IssueSecret {
+ if issue.Type == cfgDrift.IssueSecret {
writeDrift.SkipSensitiveFile(cmd, issue.File)
result.Skipped++
}
diff --git a/internal/cli/drift/core/out/out.go b/internal/cli/drift/core/out/out.go
index 510219831..633e50e7f 100644
--- a/internal/cli/drift/core/out/out.go
+++ b/internal/cli/drift/core/out/out.go
@@ -15,6 +15,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/cli/drift/core/sanitize"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/drift"
errDrift "github.com/ActiveMemory/ctx/internal/err/drift"
@@ -31,11 +32,11 @@ import (
// - Violations: Constitution violations
// - Passed: Names of checks that passed
type JSONOutput struct {
- Timestamp string `json:"timestamp"`
- Status drift.StatusType `json:"status"`
- Warnings []drift.Issue `json:"warnings"`
- Violations []drift.Issue `json:"violations"`
- Passed []drift.CheckName `json:"passed"`
+ Timestamp string `json:"timestamp"`
+ Status cfgDrift.StatusType `json:"status"`
+ Warnings []drift.Issue `json:"warnings"`
+ Violations []drift.Issue `json:"violations"`
+ Passed []cfgDrift.CheckName `json:"passed"`
}
// DriftText writes the drift report as formatted text with
@@ -93,9 +94,9 @@ func DriftText(
for _, w := range report.Warnings {
switch w.Type {
- case drift.IssueDeadPath:
+ case cfgDrift.IssueDeadPath:
pathRefs = append(pathRefs, w)
- case drift.IssueStaleness, drift.IssueStaleAge:
+ case cfgDrift.IssueStaleness, cfgDrift.IssueStaleAge:
staleness = append(staleness, w)
default:
other = append(other, w)
@@ -152,10 +153,10 @@ func DriftText(
// Summary
status := report.Status()
switch status {
- case drift.StatusViolation:
+ case cfgDrift.StatusViolation:
writeDrift.StatusViolation(cmd)
return errDrift.Violations()
- case drift.StatusWarning:
+ case cfgDrift.StatusWarning:
writeDrift.StatusWarning(cmd)
default:
writeDrift.StatusOK(cmd)
diff --git a/internal/cli/drift/core/sanitize/sanitize.go b/internal/cli/drift/core/sanitize/sanitize.go
index bea58c0f6..e3b39f124 100644
--- a/internal/cli/drift/core/sanitize/sanitize.go
+++ b/internal/cli/drift/core/sanitize/sanitize.go
@@ -8,8 +8,8 @@ package sanitize
import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
- "github.com/ActiveMemory/ctx/internal/drift"
)
// FormatCheckName converts internal check identifiers to
@@ -21,23 +21,23 @@ import (
// Returns:
// - string: Human-readable description, or the original
// name if unknown
-func FormatCheckName(name drift.CheckName) string {
+func FormatCheckName(name cfgDrift.CheckName) string {
switch name {
- case drift.CheckPathReferences:
+ case cfgDrift.CheckPathReferences:
return desc.Text(text.DescKeyDriftCheckPathRefs)
- case drift.CheckStaleness:
+ case cfgDrift.CheckStaleness:
return desc.Text(text.DescKeyDriftCheckStaleness)
- case drift.CheckConstitution:
+ case cfgDrift.CheckConstitution:
return desc.Text(text.DescKeyDriftCheckConstitution)
- case drift.CheckRequiredFiles:
+ case cfgDrift.CheckRequiredFiles:
return desc.Text(text.DescKeyDriftCheckRequired)
- case drift.CheckFileAge:
+ case cfgDrift.CheckFileAge:
return desc.Text(text.DescKeyDriftCheckFileAge)
- case drift.CheckTemplateHeaders:
+ case cfgDrift.CheckTemplateHeaders:
return desc.Text(
text.DescKeyDriftCheckTemplateHeader,
)
default:
- return string(name)
+ return name
}
}
diff --git a/internal/cli/setup/core/cline/cline.go b/internal/cli/setup/core/cline/cline.go
index 45303ffe3..0241344b8 100644
--- a/internal/cli/setup/core/cline/cline.go
+++ b/internal/cli/setup/core/cline/cline.go
@@ -10,19 +10,10 @@ package cline
import (
"github.com/spf13/cobra"
+ cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
)
-// Cline deploy constants.
-const (
- // displayName is the display name for Cline.
- displayName = "Cline"
- // mcpConfigPath is the deployed MCP config path.
- mcpConfigPath = ".vscode/mcp.json"
- // steeringPath is the deployed steering path.
- steeringPath = ".clinerules/"
-)
-
// Deploy generates Cline integration files:
// 1. .vscode/mcp.json — MCP server configuration (shared with VS Code)
// 2. .clinerules/*.md — synced steering files
@@ -33,6 +24,10 @@ func Deploy(cmd *cobra.Command) error {
if steerErr := syncSteering(cmd); steerErr != nil {
return steerErr
}
- writeSetup.DeployComplete(cmd, displayName, mcpConfigPath, steeringPath)
+ writeSetup.DeployComplete(
+ cmd, cfgSetup.DisplayCline,
+ cfgSetup.MCPConfigPathCline,
+ cfgSetup.SteeringPathCline,
+ )
return nil
}
diff --git a/internal/cli/setup/core/cursor/cursor.go b/internal/cli/setup/core/cursor/cursor.go
index 1d2acf7cf..bfeb9989c 100644
--- a/internal/cli/setup/core/cursor/cursor.go
+++ b/internal/cli/setup/core/cursor/cursor.go
@@ -10,23 +10,10 @@ package cursor
import (
"github.com/spf13/cobra"
+ cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
)
-// Cursor configuration paths.
-const (
- // dirCursor is the Cursor editor config directory.
- dirCursor = ".cursor"
- // fileMCPJSON is the MCP server config file name.
- fileMCPJSON = "mcp.json"
- // displayName is the display name for Cursor.
- displayName = "Cursor"
- // mcpConfigPath is the deployed MCP config path.
- mcpConfigPath = dirCursor + "/mcp.json"
- // steeringPath is the deployed steering path.
- steeringPath = dirCursor + "/rules/"
-)
-
// Deploy generates Cursor integration files:
// 1. .cursor/mcp.json — MCP server configuration
// 2. .cursor/rules/*.mdc — synced steering files
@@ -37,6 +24,10 @@ func Deploy(cmd *cobra.Command) error {
if steerErr := syncSteering(cmd); steerErr != nil {
return steerErr
}
- writeSetup.DeployComplete(cmd, displayName, mcpConfigPath, steeringPath)
+ writeSetup.DeployComplete(
+ cmd, cfgSetup.DisplayCursor,
+ cfgSetup.MCPConfigPathCursor,
+ cfgSetup.SteeringPathCursor,
+ )
return nil
}
diff --git a/internal/cli/setup/core/cursor/deploy.go b/internal/cli/setup/core/cursor/deploy.go
index 0a88c1ecb..1444c0118 100644
--- a/internal/cli/setup/core/cursor/deploy.go
+++ b/internal/cli/setup/core/cursor/deploy.go
@@ -16,6 +16,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/fs"
cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
mcpServer "github.com/ActiveMemory/ctx/internal/config/mcp/server"
+ cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
"github.com/ActiveMemory/ctx/internal/config/token"
errSetup "github.com/ActiveMemory/ctx/internal/err/setup"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
@@ -27,7 +28,7 @@ import (
// ensureMCPConfig creates .cursor/mcp.json with the ctx
// MCP server configuration. Skips if the file exists.
func ensureMCPConfig(cmd *cobra.Command) error {
- target := filepath.Join(dirCursor, fileMCPJSON)
+ target := filepath.Join(cfgSetup.DirCursor, cfgSetup.FileMCPJSONCursor)
if _, statErr := ctxIo.SafeStat(target); statErr == nil {
writeSetup.DeployFileExists(cmd, target)
@@ -35,9 +36,9 @@ func ensureMCPConfig(cmd *cobra.Command) error {
}
if mkdirErr := ctxIo.SafeMkdirAll(
- dirCursor, fs.PermExec,
+ cfgSetup.DirCursor, fs.PermExec,
); mkdirErr != nil {
- return errSetup.CreateDir(dirCursor, mkdirErr)
+ return errSetup.CreateDir(cfgSetup.DirCursor, mkdirErr)
}
cfg := mcpConfig{
diff --git a/internal/cli/setup/core/kiro/deploy.go b/internal/cli/setup/core/kiro/deploy.go
index bda1cf39e..eabc68db2 100644
--- a/internal/cli/setup/core/kiro/deploy.go
+++ b/internal/cli/setup/core/kiro/deploy.go
@@ -17,6 +17,7 @@ import (
cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
mcpServer "github.com/ActiveMemory/ctx/internal/config/mcp/server"
cfgMcpTool "github.com/ActiveMemory/ctx/internal/config/mcp/tool"
+ cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
"github.com/ActiveMemory/ctx/internal/config/token"
errSetup "github.com/ActiveMemory/ctx/internal/err/setup"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
@@ -28,8 +29,10 @@ import (
// ensureMCPConfig creates .kiro/settings/mcp.json with
// the ctx MCP server config. Skips if the file exists.
func ensureMCPConfig(cmd *cobra.Command) error {
- settingsDir := filepath.Join(DirKiro, DirSettings)
- target := filepath.Join(settingsDir, FileMCPJSON)
+ settingsDir := filepath.Join(
+ cfgSetup.DirKiro, cfgSetup.DirSettings,
+ )
+ target := filepath.Join(settingsDir, cfgSetup.FileMCPJSON)
if _, statErr := ctxIo.SafeStat(
target,
diff --git a/internal/cli/setup/core/kiro/kiro.go b/internal/cli/setup/core/kiro/kiro.go
index c4825ddf7..372ef769f 100644
--- a/internal/cli/setup/core/kiro/kiro.go
+++ b/internal/cli/setup/core/kiro/kiro.go
@@ -10,25 +10,10 @@ package kiro
import (
"github.com/spf13/cobra"
+ cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
)
-// Kiro configuration paths.
-const (
- // DirKiro is the Kiro editor config directory.
- DirKiro = ".kiro"
- // DirSettings is the Kiro settings subdirectory.
- DirSettings = "settings"
- // FileMCPJSON is the MCP server config file name.
- FileMCPJSON = "mcp.json"
- // displayName is the display name for Kiro.
- displayName = "Kiro"
- // mcpConfigPath is the deployed MCP config path.
- mcpConfigPath = DirKiro + "/settings/mcp.json"
- // steeringDeployPath is the deployed steering path.
- steeringDeployPath = DirKiro + "/steering/"
-)
-
// Deploy generates Kiro integration files:
// 1. .kiro/settings/mcp.json — MCP server configuration
// 2. .kiro/steering/*.md — synced steering files
@@ -50,9 +35,9 @@ func Deploy(cmd *cobra.Command) error {
}
writeSetup.DeployComplete(
- cmd, displayName,
- mcpConfigPath,
- steeringDeployPath,
+ cmd, cfgSetup.DisplayKiro,
+ cfgSetup.MCPConfigPathKiro,
+ cfgSetup.SteeringDeployPathKiro,
)
return nil
}
diff --git a/internal/cli/steering/cmd/add/cmd.go b/internal/cli/steering/cmd/add/cmd.go
index 2917c0aec..d71f0d0cc 100644
--- a/internal/cli/steering/cmd/add/cmd.go
+++ b/internal/cli/steering/cmd/add/cmd.go
@@ -16,6 +16,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
"github.com/ActiveMemory/ctx/internal/config/file"
"github.com/ActiveMemory/ctx/internal/config/fs"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
"github.com/ActiveMemory/ctx/internal/rc"
@@ -77,7 +78,7 @@ func Run(c *cobra.Command, name string) error {
sf := &steering.SteeringFile{
Name: name,
- Inclusion: steering.InclusionManual,
+ Inclusion: cfgSteering.InclusionManual,
Priority: defaultPriority,
}
diff --git a/internal/cli/steering/cmd/initcmd/cmd.go b/internal/cli/steering/cmd/initcmd/cmd.go
index f815fedf9..036b19cf0 100644
--- a/internal/cli/steering/cmd/initcmd/cmd.go
+++ b/internal/cli/steering/cmd/initcmd/cmd.go
@@ -16,6 +16,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
"github.com/ActiveMemory/ctx/internal/config/file"
"github.com/ActiveMemory/ctx/internal/config/fs"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
"github.com/ActiveMemory/ctx/internal/rc"
@@ -83,7 +84,7 @@ func Run(c *cobra.Command) error {
sf := &steering.SteeringFile{
Name: ff.Name,
Description: ff.Description,
- Inclusion: steering.InclusionAlways,
+ Inclusion: cfgSteering.InclusionAlways,
Priority: 10,
Body: ff.Body,
}
diff --git a/internal/cli/steering/cmd/list/cmd.go b/internal/cli/steering/cmd/list/cmd.go
index 73489b66a..acf1d377b 100644
--- a/internal/cli/steering/cmd/list/cmd.go
+++ b/internal/cli/steering/cmd/list/cmd.go
@@ -13,16 +13,13 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"github.com/ActiveMemory/ctx/internal/config/token"
"github.com/ActiveMemory/ctx/internal/rc"
"github.com/ActiveMemory/ctx/internal/steering"
writeSteering "github.com/ActiveMemory/ctx/internal/write/steering"
)
-// labelAllTools is the display label when a steering file
-// applies to all tools.
-const labelAllTools = "all"
-
// Cmd returns the "ctx steering list" subcommand.
//
// Returns:
@@ -59,7 +56,7 @@ func Run(c *cobra.Command) error {
}
for _, sf := range files {
- tools := labelAllTools
+ tools := cfgSteering.LabelAllTools
if len(sf.Tools) > 0 {
tools = strings.Join(sf.Tools, token.CommaSpace)
}
diff --git a/internal/cli/steering/cmd/preview/cmd.go b/internal/cli/steering/cmd/preview/cmd.go
index 0884aa1b1..e58d407b2 100644
--- a/internal/cli/steering/cmd/preview/cmd.go
+++ b/internal/cli/steering/cmd/preview/cmd.go
@@ -13,16 +13,13 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"github.com/ActiveMemory/ctx/internal/config/token"
"github.com/ActiveMemory/ctx/internal/rc"
"github.com/ActiveMemory/ctx/internal/steering"
writeSteering "github.com/ActiveMemory/ctx/internal/write/steering"
)
-// labelAllTools is the display label when a steering file
-// applies to all tools.
-const labelAllTools = "all"
-
// Cmd returns the "ctx steering preview" subcommand.
//
// Returns:
@@ -65,7 +62,7 @@ func Run(c *cobra.Command, prompt string) error {
writeSteering.PreviewHeader(c, prompt)
for _, sf := range matched {
- tools := labelAllTools
+ tools := cfgSteering.LabelAllTools
if len(sf.Tools) > 0 {
tools = strings.Join(sf.Tools, token.CommaSpace)
}
diff --git a/internal/cli/system/cmd/message/cmd/edit/run.go b/internal/cli/system/cmd/message/cmd/edit/run.go
index a35577e7a..1391ff01d 100644
--- a/internal/cli/system/cmd/message/cmd/edit/run.go
+++ b/internal/cli/system/cmd/message/cmd/edit/run.go
@@ -16,6 +16,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/hook"
"github.com/ActiveMemory/ctx/internal/cli/system/core/message"
"github.com/ActiveMemory/ctx/internal/config/file"
+ cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
"github.com/ActiveMemory/ctx/internal/err/fs"
errTrigger "github.com/ActiveMemory/ctx/internal/err/trigger"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
@@ -44,7 +45,7 @@ func Run(cmd *cobra.Command, hk, variant string) error {
return errTrigger.OverrideExists(oPath, hk, variant)
}
- if info.Category == messages.CategoryCtxSpecific {
+ if info.Category == cfgHook.CategoryCtxSpecific {
writeMessage.CtxSpecificWarning(cmd)
}
diff --git a/internal/cli/trigger/cmd/add/cmd.go b/internal/cli/trigger/cmd/add/cmd.go
index d5b3970b2..de8929cdc 100644
--- a/internal/cli/trigger/cmd/add/cmd.go
+++ b/internal/cli/trigger/cmd/add/cmd.go
@@ -14,6 +14,7 @@ import (
"github.com/spf13/cobra"
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/assets/tpl"
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
"github.com/ActiveMemory/ctx/internal/config/file"
"github.com/ActiveMemory/ctx/internal/config/fs"
@@ -25,26 +26,6 @@ import (
writeTrigger "github.com/ActiveMemory/ctx/internal/write/trigger"
)
-// scriptTemplate is the shell script template for new hooks.
-const scriptTemplate = `#!/usr/bin/env bash
-# Hook: %s
-# Type: %s
-# Created by: ctx hook add
-
-set -euo pipefail
-
-INPUT=$(cat)
-
-# Parse input fields
-HOOK_TYPE=$(echo "$INPUT" | jq -r '.hookType')
-TOOL=$(echo "$INPUT" | jq -r '.tool // empty')
-
-# Your hook logic here
-
-# Return output
-echo '{"cancel": false, "context": "", "message": ""}'
-`
-
// Cmd returns the "ctx hook add" subcommand.
//
// Returns:
@@ -71,7 +52,7 @@ func Cmd() *cobra.Command {
// - name: The hook script name (without .sh extension)
func Run(c *cobra.Command, hookType, name string) error {
// Validate hook type.
- ht := trigger.HookType(hookType)
+ ht := hookType
valid := trigger.ValidTypes()
found := false
@@ -84,9 +65,7 @@ func Run(c *cobra.Command, hookType, name string) error {
if !found {
names := make([]string, len(valid))
- for i, v := range valid {
- names[i] = string(v)
- }
+ copy(names, valid)
return errTrigger.InvalidType(hookType, strings.Join(names, token.CommaSpace))
}
@@ -107,7 +86,7 @@ func Run(c *cobra.Command, hookType, name string) error {
return errTrigger.ScriptExists(filePath)
}
- content := fmt.Sprintf(scriptTemplate, name, hookType)
+ content := fmt.Sprintf(tpl.TriggerScript, name, hookType)
writeErr := ctxIo.SafeWriteFile(
filePath, []byte(content), fs.PermExec,
)
diff --git a/internal/cli/trigger/cmd/list/cmd.go b/internal/cli/trigger/cmd/list/cmd.go
index 99681b7cd..888fd6738 100644
--- a/internal/cli/trigger/cmd/list/cmd.go
+++ b/internal/cli/trigger/cmd/list/cmd.go
@@ -11,19 +11,12 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/cmd"
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
"github.com/ActiveMemory/ctx/internal/rc"
"github.com/ActiveMemory/ctx/internal/trigger"
writeTrigger "github.com/ActiveMemory/ctx/internal/write/trigger"
)
-// Hook status labels.
-const (
- // statusEnabled is the label for an enabled hook.
- statusEnabled = "enabled"
- // statusDisabled is the label for a disabled hook.
- statusDisabled = "disabled"
-)
-
// Cmd returns the "ctx hook list" subcommand.
//
// Returns:
@@ -61,11 +54,11 @@ func Run(c *cobra.Command) error {
continue
}
- writeTrigger.TypeHeader(c, string(ht))
+ writeTrigger.TypeHeader(c, ht)
for _, h := range hooks {
- status := statusEnabled
+ status := cfgTrigger.StatusEnabled
if !h.Enabled {
- status = statusDisabled
+ status = cfgTrigger.StatusDisabled
}
writeTrigger.Entry(c, h.Name, status, h.Path)
total++
diff --git a/internal/cli/trigger/cmd/test/cmd.go b/internal/cli/trigger/cmd/test/cmd.go
index 7f1894f23..de4b8277f 100644
--- a/internal/cli/trigger/cmd/test/cmd.go
+++ b/internal/cli/trigger/cmd/test/cmd.go
@@ -18,6 +18,7 @@ import (
embedFlag "github.com/ActiveMemory/ctx/internal/config/embed/flag"
"github.com/ActiveMemory/ctx/internal/config/flag"
"github.com/ActiveMemory/ctx/internal/config/token"
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
errTrigger "github.com/ActiveMemory/ctx/internal/err/trigger"
"github.com/ActiveMemory/ctx/internal/flagbind"
"github.com/ActiveMemory/ctx/internal/rc"
@@ -25,16 +26,6 @@ import (
writeTrigger "github.com/ActiveMemory/ctx/internal/write/trigger"
)
-// Mock input constants for hook testing.
-const (
- // mockSessionID is the session ID used in test hook input.
- mockSessionID = "test-session"
- // mockModel is the model name used in test hook input.
- mockModel = "test-model"
- // mockVersion is the ctx version used in test hook input.
- mockVersion = "test"
-)
-
// Cmd returns the "ctx hook test" subcommand.
//
// Returns:
@@ -71,7 +62,7 @@ func Cmd() *cobra.Command {
// - path: Optional file path for mock input
func Run(c *cobra.Command, hookType, toolName, path string) error {
// Validate hook type.
- ht := trigger.HookType(hookType)
+ ht := hookType
valid := trigger.ValidTypes()
found := false
@@ -84,9 +75,7 @@ func Run(c *cobra.Command, hookType, toolName, path string) error {
if !found {
names := make([]string, len(valid))
- for i, v := range valid {
- names[i] = string(v)
- }
+ copy(names, valid)
return errTrigger.InvalidType(hookType, strings.Join(names, token.CommaSpace))
}
@@ -104,11 +93,11 @@ func Run(c *cobra.Command, hookType, toolName, path string) error {
Tool: toolName,
Parameters: params,
Session: trigger.HookSession{
- ID: mockSessionID,
- Model: mockModel,
+ ID: cfgTrigger.MockSessionID,
+ Model: cfgTrigger.MockModel,
},
Timestamp: time.Now().UTC().Format(time.RFC3339),
- CtxVersion: mockVersion,
+ CtxVersion: cfgTrigger.MockVersion,
}
writeTrigger.TestingHeader(c, hookType)
diff --git a/internal/compat/compat_test.go b/internal/compat/compat_test.go
index a954842a4..55cdb6b88 100644
--- a/internal/compat/compat_test.go
+++ b/internal/compat/compat_test.go
@@ -14,6 +14,7 @@ import (
"time"
"github.com/ActiveMemory/ctx/internal/cli/agent/core/budget"
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
"github.com/ActiveMemory/ctx/internal/entity"
"github.com/ActiveMemory/ctx/internal/skill"
"github.com/ActiveMemory/ctx/internal/steering"
@@ -65,7 +66,7 @@ func TestBackwardCompat_HookRunAll_NonExistentDir(t *testing.T) {
nonexistent := filepath.Join(t.TempDir(), "no-such-hooks")
input := &trigger.HookInput{TriggerType: "pre-tool-use", Tool: "test"}
- agg, err := trigger.RunAll(nonexistent, trigger.PreToolUse, input, 5*time.Second)
+ agg, err := trigger.RunAll(nonexistent, cfgTrigger.PreToolUse, input, 5*time.Second)
if err != nil {
t.Fatalf("expected no error, got %v", err)
}
diff --git a/internal/config/README.md b/internal/config/README.md
index 8da352db1..3f7db97c3 100644
--- a/internal/config/README.md
+++ b/internal/config/README.md
@@ -125,3 +125,26 @@ go list ./internal/config/...
provides and what domain it serves.
- **Audit-enforced.** TestDescKeyYAMLLinkage verifies all 879+
DescKey constants resolve to non-empty YAML values.
+
+## config/ vs entity/ for Types
+
+String-typed enums (`type IssueType string`) and their const
+values live in `config/` — the same place all other string
+constants live. The type annotation adds compile-time safety but
+does not change where the definition belongs.
+
+**When to promote to `entity/`:** When the type grows behavior —
+method receivers, interface participation, or business logic. A
+type with `func (t IssueType) Severity() int` has outgrown
+`config/` and belongs in `entity/`.
+
+| Stage | Home | Example |
+|-------|------|---------|
+| Pure value enum | `config/<domain>/` | `type IssueType string` with const values |
+| Cross-package value enum | `config/<domain>/` | Same — `config/` is already importable everywhere |
+| Type with methods | `entity/` | `func (t IssueType) Severity() int` |
+| Type implementing interfaces | `entity/` | `var _ fmt.Stringer = IssueType("")` |
+
+The migration path is natural: start in `config/`, promote to
+`entity/` when behavior appears. `TestCrossPackageTypes` catches
+the cross-package signal that indicates a type may need promotion.
diff --git a/internal/config/bootstrap/bootstrap.go b/internal/config/bootstrap/bootstrap.go
index 5090b1660..7fdb6f494 100644
--- a/internal/config/bootstrap/bootstrap.go
+++ b/internal/config/bootstrap/bootstrap.go
@@ -6,6 +6,10 @@
package bootstrap
+// DefaultVersion is the fallback version string when no
+// build-time ldflags are provided.
+const DefaultVersion = "dev"
+
// Bootstrap display constants.
const (
// FileListWidth is the character width at which the file list wraps.
diff --git a/internal/config/dep/dep.go b/internal/config/dep/dep.go
index f4e3642d2..4a5b08104 100644
--- a/internal/config/dep/dep.go
+++ b/internal/config/dep/dep.go
@@ -40,6 +40,14 @@ const (
WorkspaceRoot = "root"
)
+// Ecosystem label constants.
+const (
+ // EcosystemPython is the ecosystem label for Python.
+ EcosystemPython = "python"
+ // EcosystemRust is the ecosystem label for Rust.
+ EcosystemRust = "rust"
+)
+
// Rust ecosystem constants.
const (
// CargoToml is the Rust manifest filename.
diff --git a/internal/config/dir/dir.go b/internal/config/dir/dir.go
index 9ecadbcd6..2c5321126 100644
--- a/internal/config/dir/dir.go
+++ b/internal/config/dir/dir.go
@@ -51,6 +51,12 @@ const (
Templates = "templates"
// CtxData is the user-level ctx data directory (~/.ctx/).
CtxData = ".ctx"
+ // DefaultSteeringPath is the default steering directory
+ // path relative to the project root.
+ DefaultSteeringPath = ".context/steering"
+ // DefaultHooksPath is the default hooks directory path
+ // relative to the project root.
+ DefaultHooksPath = ".context/hooks"
)
// Platform-specific home directory path components.
diff --git a/internal/config/drift/doc.go b/internal/config/drift/doc.go
new file mode 100644
index 000000000..0d630cd7b
--- /dev/null
+++ b/internal/config/drift/doc.go
@@ -0,0 +1,13 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package drift defines issue types, status codes, check names,
+// and rule identifiers for drift detection.
+//
+// Constants are referenced by domain packages via config/drift.*.
+// The package holds pure value definitions only; types
+// that grow behavior belong in entity/ (see config/README.md).
+package drift
diff --git a/internal/config/drift/types.go b/internal/config/drift/types.go
new file mode 100644
index 000000000..676bf7acf
--- /dev/null
+++ b/internal/config/drift/types.go
@@ -0,0 +1,112 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package drift
+
+// IssueType categorizes a drift issue for grouping
+// and filtering.
+type IssueType = string
+
+// Drift issue type constants for categorization.
+const (
+ // IssueDeadPath indicates a file path reference
+ // that no longer exists.
+ IssueDeadPath IssueType = "dead_path"
+ // IssueStaleness indicates accumulated completed
+ // tasks needing archival.
+ IssueStaleness IssueType = "staleness"
+ // IssueSecret indicates a file that may contain
+ // secrets or credentials.
+ IssueSecret IssueType = "potential_secret"
+ // IssueMissing indicates a required context file
+ // that does not exist.
+ IssueMissing IssueType = "missing_file"
+ // IssueStaleAge indicates a context file that
+ // hasn't been modified recently.
+ IssueStaleAge IssueType = "stale_age"
+ // IssueEntryCount indicates a knowledge file has
+ // too many entries.
+ IssueEntryCount IssueType = "entry_count"
+ // IssueMissingPackage indicates an internal package
+ // not documented in ARCHITECTURE.md.
+ IssueMissingPackage IssueType = "missing_package"
+ // IssueStaleHeader indicates a context file whose
+ // comment header doesn't match the embedded template.
+ IssueStaleHeader IssueType = "stale_header"
+ // IssueInvalidTool indicates an unsupported tool
+ // identifier in a steering file or .ctxrc config.
+ IssueInvalidTool IssueType = "invalid_tool"
+ // IssueHookNoExec indicates a hook script missing
+ // the executable permission bit.
+ IssueHookNoExec IssueType = "hook_no_exec"
+ // IssueStaleSyncFile indicates a synced tool-native
+ // file that is out of date compared to its source.
+ IssueStaleSyncFile IssueType = "stale_sync_file"
+)
+
+// StatusType represents the overall status of a drift
+// report.
+type StatusType = string
+
+// Drift report status constants.
+const (
+ // StatusOk means no drift was detected.
+ StatusOk StatusType = "ok"
+ // StatusWarning means non-critical issues were found.
+ StatusWarning StatusType = "warning"
+ // StatusViolation means constitution violations were
+ // found.
+ StatusViolation StatusType = "violation"
+)
+
+// CheckName identifies a drift detection check.
+type CheckName = string
+
+// Drift detection check name constants.
+const (
+ // CheckPathReferences validates that file paths in
+ // context files exist.
+ CheckPathReferences CheckName = "path_references"
+ // CheckStaleness detects accumulated completed tasks.
+ CheckStaleness CheckName = "staleness_check"
+ // CheckConstitution verifies constitution rules are
+ // respected.
+ CheckConstitution CheckName = "constitution_check"
+ // CheckRequiredFiles ensures all required context files
+ // are present.
+ CheckRequiredFiles CheckName = "required_files"
+ // CheckFileAge checks whether context files have been
+ // modified recently.
+ CheckFileAge CheckName = "file_age_check"
+ // CheckEntryCount checks whether knowledge files have
+ // excessive entries.
+ CheckEntryCount CheckName = "entry_count_check"
+ // CheckMissingPackages checks for undocumented internal
+ // packages.
+ CheckMissingPackages CheckName = "missing_packages"
+ // CheckTemplateHeaders checks context file comment
+ // headers against templates.
+ CheckTemplateHeaders CheckName = "template_headers"
+ // CheckSteeringTools validates tool identifiers in
+ // steering files.
+ CheckSteeringTools CheckName = "steering_tools"
+ // CheckHookPerms checks hook scripts for executable
+ // permission bits.
+ CheckHookPerms CheckName = "hook_permissions"
+ // CheckSyncStaleness compares synced tool-native files
+ // against source steering files.
+ CheckSyncStaleness CheckName = "sync_staleness"
+ // CheckRCTool validates the .ctxrc tool field against
+ // supported identifiers.
+ CheckRCTool CheckName = "rc_tool_field"
+)
+
+// Constitution rule names referenced in drift violations.
+const (
+ // RuleNoSecrets is the constitution rule for secret
+ // file detection.
+ RuleNoSecrets = "no_secrets"
+)
diff --git a/internal/config/embed/text/mcp_tool.go b/internal/config/embed/text/mcp_tool.go
index c95db99ec..c15605d2e 100644
--- a/internal/config/embed/text/mcp_tool.go
+++ b/internal/config/embed/text/mcp_tool.go
@@ -112,3 +112,26 @@ const (
// DescKeyMCPSearchNoMatch is the text key for mcp search no match messages.
DescKeyMCPSearchNoMatch = "mcp.search-no-match"
)
+
+// DescKeys for MCP session hook output.
+const (
+ // DescKeyMCPHooksDisabled is the message when hooks are
+ // not enabled.
+ DescKeyMCPHooksDisabled = "mcp.hooks-disabled"
+ // DescKeyMCPSessionStartOK is the message when start hooks
+ // produce no additional context.
+ DescKeyMCPSessionStartOK = "mcp.session-start-ok"
+ // DescKeyMCPSessionEndOK is the message when end hooks
+ // complete without context.
+ DescKeyMCPSessionEndOK = "mcp.session-end-ok"
+)
+
+// DescKeys for MCP handler steering result messages.
+const (
+ // DescKeyMCPSteeringNoFiles is the message when no steering
+ // files exist.
+ DescKeyMCPSteeringNoFiles = "mcp.steering-no-files"
+ // DescKeyMCPSteeringNoMatch is the message when no steering
+ // files match.
+ DescKeyMCPSteeringNoMatch = "mcp.steering-no-match"
+)
diff --git a/internal/config/embed/text/setup.go b/internal/config/embed/text/setup.go
index c9e742192..a2552dffa 100644
--- a/internal/config/embed/text/setup.go
+++ b/internal/config/embed/text/setup.go
@@ -34,3 +34,34 @@ const (
// skip steer messages.
DescKeyWriteSetupDeploySkipSteer = "write.setup-deploy-skip-steer"
)
+
+// DescKeys for setup integration instruction output.
+const (
+ // DescKeyWriteSetupCursorHead is the Cursor section header.
+ DescKeyWriteSetupCursorHead = "write.setup-cursor-head"
+ // DescKeyWriteSetupCursorRun is the Cursor run command hint.
+ DescKeyWriteSetupCursorRun = "write.setup-cursor-run"
+ // DescKeyWriteSetupCursorMCP is the Cursor MCP config path.
+ DescKeyWriteSetupCursorMCP = "write.setup-cursor-mcp"
+ // DescKeyWriteSetupCursorSync is the Cursor steering sync path.
+ DescKeyWriteSetupCursorSync = "write.setup-cursor-sync"
+ // DescKeyWriteSetupKiroHead is the Kiro section header.
+ DescKeyWriteSetupKiroHead = "write.setup-kiro-head"
+ // DescKeyWriteSetupKiroRun is the Kiro run command hint.
+ DescKeyWriteSetupKiroRun = "write.setup-kiro-run"
+ // DescKeyWriteSetupKiroMCP is the Kiro MCP config path.
+ DescKeyWriteSetupKiroMCP = "write.setup-kiro-mcp"
+ // DescKeyWriteSetupKiroSync is the Kiro steering sync path.
+ DescKeyWriteSetupKiroSync = "write.setup-kiro-sync"
+ // DescKeyWriteSetupClineHead is the Cline section header.
+ DescKeyWriteSetupClineHead = "write.setup-cline-head"
+ // DescKeyWriteSetupClineRun is the Cline run command hint.
+ DescKeyWriteSetupClineRun = "write.setup-cline-run"
+ // DescKeyWriteSetupClineMCP is the Cline MCP config path.
+ DescKeyWriteSetupClineMCP = "write.setup-cline-mcp"
+ // DescKeyWriteSetupClineSync is the Cline steering sync path.
+ DescKeyWriteSetupClineSync = "write.setup-cline-sync"
+ // DescKeyWriteSetupNoSteeringToSync is the message when no
+ // steering files are available for sync.
+ DescKeyWriteSetupNoSteeringToSync = "write.setup-no-steering-to-sync"
+)
diff --git a/internal/config/embed/text/skill.go b/internal/config/embed/text/skill.go
index c5110aedb..4ebf90b33 100644
--- a/internal/config/embed/text/skill.go
+++ b/internal/config/embed/text/skill.go
@@ -24,4 +24,7 @@ const (
DescKeyWriteSkillCount = "write.skill-count"
// DescKeyWriteSkillRemoved is the text key for write skill removed messages.
DescKeyWriteSkillRemoved = "write.skill-removed"
+ // DescKeyWriteSkillNoSkills is the message when no skills
+ // are installed.
+ DescKeyWriteSkillNoSkills = "write.skill-no-skills"
)
diff --git a/internal/config/embed/text/steering.go b/internal/config/embed/text/steering.go
index 2febc1a58..d98159a0d 100644
--- a/internal/config/embed/text/steering.go
+++ b/internal/config/embed/text/steering.go
@@ -44,4 +44,10 @@ const (
// DescKeyWriteSteeringSyncSummary is the text key for write steering sync
// summary messages.
DescKeyWriteSteeringSyncSummary = "write.steering-sync-summary"
+ // DescKeyWriteSteeringNoFiles is the message when no steering
+ // files exist.
+ DescKeyWriteSteeringNoFiles = "write.steering-no-files"
+ // DescKeyWriteSteeringNoMatch is the message when no steering
+ // files match the prompt.
+ DescKeyWriteSteeringNoMatch = "write.steering-no-match"
)
diff --git a/internal/config/embed/text/trigger.go b/internal/config/embed/text/trigger.go
index a5fd49862..945535159 100644
--- a/internal/config/embed/text/trigger.go
+++ b/internal/config/embed/text/trigger.go
@@ -49,4 +49,12 @@ const (
// DescKeyWriteTriggerErrLine is the text key for write trigger err line
// messages.
DescKeyWriteTriggerErrLine = "write.trigger-err-line"
+ // DescKeyWriteTriggerNoHooks is the message when no hooks
+ // are found.
+ DescKeyWriteTriggerNoHooks = "write.trigger-no-hooks"
+ // DescKeyWriteTriggerErrorsHdr is the errors section header.
+ DescKeyWriteTriggerErrorsHdr = "write.trigger-errors-hdr"
+ // DescKeyWriteTriggerNoOutput is the message when hooks
+ // produce no output.
+ DescKeyWriteTriggerNoOutput = "write.trigger-no-output"
)
diff --git a/internal/config/hook/category.go b/internal/config/hook/category.go
new file mode 100644
index 000000000..2581f35fc
--- /dev/null
+++ b/internal/config/hook/category.go
@@ -0,0 +1,17 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package hook
+
+// Hook message category labels.
+const (
+ // CategoryCustomizable marks messages intended for
+ // project-specific customization.
+ CategoryCustomizable = "customizable"
+ // CategoryCtxSpecific marks messages specific to ctx's
+ // own development workflow.
+ CategoryCtxSpecific = "ctx-specific"
+)
diff --git a/internal/config/hook/hook.go b/internal/config/hook/hook.go
index 6112737c8..12b75a678 100644
--- a/internal/config/hook/hook.go
+++ b/internal/config/hook/hook.go
@@ -66,6 +66,7 @@ const (
ToolCursor = "cursor"
ToolKiro = "kiro"
ToolCline = "cline"
+ ToolCodex = "codex"
ToolWindsurf = "windsurf"
)
diff --git a/internal/config/io/doc.go b/internal/config/io/doc.go
new file mode 100644
index 000000000..95a676526
--- /dev/null
+++ b/internal/config/io/doc.go
@@ -0,0 +1,11 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package io defines path safety constants, including dangerous
+// system directory prefixes.
+//
+// Constants are referenced by domain packages via config/io.*.
+package io
diff --git a/internal/config/io/prefix.go b/internal/config/io/prefix.go
new file mode 100644
index 000000000..8b8fd6880
--- /dev/null
+++ b/internal/config/io/prefix.go
@@ -0,0 +1,37 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package io
+
+// DangerousPrefixes lists system directories where ctx
+// should never read from or write to. Checked after
+// filepath.Abs resolution.
+var DangerousPrefixes = []string{
+ // System binaries directory.
+ "/bin/",
+ // Boot loader files directory.
+ "/boot/",
+ // Device files directory.
+ "/dev/",
+ // System configuration directory.
+ "/etc/",
+ // Shared libraries directory.
+ "/lib/",
+ // 64-bit shared libraries directory.
+ "/lib64/",
+ // Process information pseudo-filesystem.
+ "/proc/",
+ // System administration binaries directory.
+ "/sbin/",
+ // Kernel and device tree pseudo-filesystem.
+ "/sys/",
+ // User binaries directory.
+ "/usr/bin/",
+ // User libraries directory.
+ "/usr/lib/",
+ // User system binaries directory.
+ "/usr/sbin/",
+}
diff --git a/internal/config/mcp/schema/schema.go b/internal/config/mcp/schema/schema.go
index e73ac259e..0a4b39d22 100644
--- a/internal/config/mcp/schema/schema.go
+++ b/internal/config/mcp/schema/schema.go
@@ -6,6 +6,9 @@
package schema
+// ProtocolVersion is the MCP protocol version string.
+const ProtocolVersion = "2024-11-05"
+
// JSON Schema type constants.
const (
// Object is the JSON Schema type for objects.
diff --git a/internal/config/rc/doc.go b/internal/config/rc/doc.go
new file mode 100644
index 000000000..bff6f5911
--- /dev/null
+++ b/internal/config/rc/doc.go
@@ -0,0 +1,11 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package rc defines default values for hooks, steering,
+// and timeout configuration in .ctxrc.
+//
+// Constants are referenced by domain packages via config/rc.*.
+package rc
diff --git a/internal/config/setup/doc.go b/internal/config/setup/doc.go
new file mode 100644
index 000000000..952321d74
--- /dev/null
+++ b/internal/config/setup/doc.go
@@ -0,0 +1,12 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package setup defines integration info strings and deploy
+// path constants for tool setup commands.
+//
+// Constants are referenced by domain packages via
+// config/setup.*.
+package setup
diff --git a/internal/config/setup/setup.go b/internal/config/setup/setup.go
new file mode 100644
index 000000000..4a4830a23
--- /dev/null
+++ b/internal/config/setup/setup.go
@@ -0,0 +1,54 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package setup
+
+// Display names for supported integration tools.
+const (
+ // DisplayKiro is the display name for Kiro.
+ DisplayKiro = "Kiro"
+ // DisplayCursor is the display name for Cursor.
+ DisplayCursor = "Cursor"
+ // DisplayCline is the display name for Cline.
+ DisplayCline = "Cline"
+)
+
+// Kiro configuration paths.
+const (
+ // DirKiro is the Kiro editor config directory.
+ DirKiro = ".kiro"
+ // DirSettings is the Kiro settings subdirectory.
+ DirSettings = "settings"
+ // FileMCPJSON is the Kiro MCP server config file name.
+ FileMCPJSON = "mcp.json"
+ // MCPConfigPathKiro is the deployed MCP config path.
+ MCPConfigPathKiro = DirKiro + "/settings/mcp.json"
+ // SteeringDeployPathKiro is the deployed steering
+ // path for Kiro.
+ SteeringDeployPathKiro = DirKiro + "/steering/"
+)
+
+// Cursor configuration paths.
+const (
+ // DirCursor is the Cursor editor config directory.
+ DirCursor = ".cursor"
+ // FileMCPJSONCursor is the Cursor MCP config file.
+ FileMCPJSONCursor = "mcp.json"
+ // MCPConfigPathCursor is the deployed MCP config path.
+ MCPConfigPathCursor = DirCursor + "/mcp.json"
+ // SteeringPathCursor is the deployed steering path
+ // for Cursor.
+ SteeringPathCursor = DirCursor + "/rules/"
+)
+
+// Cline configuration paths.
+const (
+ // MCPConfigPathCline is the deployed MCP config path.
+ MCPConfigPathCline = ".vscode/mcp.json"
+ // SteeringPathCline is the deployed steering path
+ // for Cline.
+ SteeringPathCline = ".clinerules/"
+)
diff --git a/internal/config/skill/doc.go b/internal/config/skill/doc.go
new file mode 100644
index 000000000..95aec6f49
--- /dev/null
+++ b/internal/config/skill/doc.go
@@ -0,0 +1,12 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package skill defines manifest file names and parsing
+// constants for skill loading.
+//
+// Constants are referenced by domain packages via
+// config/skill.*.
+package skill
diff --git a/internal/config/skill/skill.go b/internal/config/skill/skill.go
new file mode 100644
index 000000000..345f61700
--- /dev/null
+++ b/internal/config/skill/skill.go
@@ -0,0 +1,11 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package skill
+
+// SkillManifest is the expected filename inside each
+// skill directory.
+const SkillManifest = "SKILL.md"
diff --git a/internal/config/steering/doc.go b/internal/config/steering/doc.go
new file mode 100644
index 000000000..51ac78198
--- /dev/null
+++ b/internal/config/steering/doc.go
@@ -0,0 +1,12 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package steering defines inclusion modes, sync directory paths,
+// and file extension constants for steering file operations.
+//
+// Constants are referenced by domain packages via
+// config/steering.*.
+package steering
diff --git a/internal/config/steering/steering.go b/internal/config/steering/steering.go
new file mode 100644
index 000000000..21d596e5b
--- /dev/null
+++ b/internal/config/steering/steering.go
@@ -0,0 +1,28 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package steering
+
+// Tool-native directory and extension constants used by
+// steering sync to write files in each tool's format.
+const (
+ // DirCursorDot is the Cursor configuration directory.
+ DirCursorDot = ".cursor"
+ // DirRules is the Cursor rules subdirectory.
+ DirRules = "rules"
+ // ExtMDC is the Cursor MDC rule file extension.
+ ExtMDC = ".mdc"
+ // DirClinerules is the Cline rules directory.
+ DirClinerules = ".clinerules"
+ // DirKiroDot is the Kiro configuration directory.
+ DirKiroDot = ".kiro"
+ // DirSteering is the Kiro steering subdirectory.
+ DirSteering = "steering"
+)
+
+// LabelAllTools is the display label when a steering
+// or trigger item applies to all tools.
+const LabelAllTools = "all"
diff --git a/internal/config/steering/types.go b/internal/config/steering/types.go
new file mode 100644
index 000000000..ffcf6e674
--- /dev/null
+++ b/internal/config/steering/types.go
@@ -0,0 +1,21 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package steering
+
+// InclusionMode determines when a steering file is
+// injected into an AI prompt.
+type InclusionMode string
+
+// Inclusion mode constants for steering file injection.
+const (
+ // InclusionAlways includes the file in every packet.
+ InclusionAlways InclusionMode = "always"
+ // InclusionAuto includes when prompt matches.
+ InclusionAuto InclusionMode = "auto"
+ // InclusionManual includes only when named.
+ InclusionManual InclusionMode = "manual"
+)
diff --git a/internal/config/sysinfo/resource.go b/internal/config/sysinfo/resource.go
new file mode 100644
index 000000000..5e4613b46
--- /dev/null
+++ b/internal/config/sysinfo/resource.go
@@ -0,0 +1,33 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package sysinfo
+
+// Severity label strings for display output.
+const (
+ // LabelOK is the severity label for no concern.
+ LabelOK = "ok"
+ // LabelWarning is the severity label for
+ // approaching limits.
+ LabelWarning = "warning"
+ // LabelDanger is the severity label for critically
+ // low resources.
+ LabelDanger = "danger"
+)
+
+// Resource name constants for threshold evaluation.
+const (
+ // ResourceMemory is the resource name for physical
+ // memory.
+ ResourceMemory = "memory"
+ // ResourceSwap is the resource name for swap space.
+ ResourceSwap = "swap"
+ // ResourceDisk is the resource name for filesystem
+ // usage.
+ ResourceDisk = "disk"
+ // ResourceLoad is the resource name for system load.
+ ResourceLoad = "load"
+)
diff --git a/internal/config/token/delim.go b/internal/config/token/delim.go
index 72d2f2bdd..ffb347f24 100644
--- a/internal/config/token/delim.go
+++ b/internal/config/token/delim.go
@@ -59,6 +59,11 @@ const (
Plus = "+"
// Hash is the hash/pound character.
Hash = "#"
+ // ParentDir is the relative parent directory component.
+ ParentDir = ".."
+ // FrontmatterDelimiter is the YAML frontmatter
+ // boundary marker.
+ FrontmatterDelimiter = "---"
)
// TopicSeparators are the delimiters between a date and topic in session
diff --git a/internal/config/token/whitespace.go b/internal/config/token/whitespace.go
index 1239836a7..aa4b344c4 100644
--- a/internal/config/token/whitespace.go
+++ b/internal/config/token/whitespace.go
@@ -20,4 +20,10 @@ const (
Space = " "
// Tab is a horizontal tab character.
Tab = "\t"
+ // DoubleNewline is two consecutive Unix newlines,
+ // used as a paragraph separator.
+ DoubleNewline = "\n\n"
+ // TrimCR is the character set trimmed from the start
+ // of raw frontmatter to normalize line endings.
+ TrimCR = "\n\r"
)
diff --git a/internal/config/trigger/doc.go b/internal/config/trigger/doc.go
new file mode 100644
index 000000000..75c04784b
--- /dev/null
+++ b/internal/config/trigger/doc.go
@@ -0,0 +1,12 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package trigger defines lifecycle event type constants
+// and hook input field keys for trigger operations.
+//
+// Constants are referenced by domain packages via
+// config/trigger.*.
+package trigger
diff --git a/internal/config/trigger/trigger.go b/internal/config/trigger/trigger.go
new file mode 100644
index 000000000..002978c95
--- /dev/null
+++ b/internal/config/trigger/trigger.go
@@ -0,0 +1,25 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package trigger
+
+// Status labels for hook list display.
+const (
+ // StatusEnabled is the label for an enabled hook.
+ StatusEnabled = "enabled"
+ // StatusDisabled is the label for a disabled hook.
+ StatusDisabled = "disabled"
+)
+
+// Mock input constants for hook testing.
+const (
+ // MockSessionID is the session ID in test input.
+ MockSessionID = "test-session"
+ // MockModel is the model name in test input.
+ MockModel = "test-model"
+ // MockVersion is the ctx version in test input.
+ MockVersion = "test"
+)
diff --git a/internal/config/trigger/types.go b/internal/config/trigger/types.go
new file mode 100644
index 000000000..08654dcad
--- /dev/null
+++ b/internal/config/trigger/types.go
@@ -0,0 +1,26 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package trigger
+
+// TriggerType identifies the lifecycle trigger event type.
+type TriggerType = string
+
+// Lifecycle trigger event type constants.
+const (
+ // PreToolUse fires before an AI tool invocation.
+ PreToolUse TriggerType = "pre-tool-use"
+ // PostToolUse fires after an AI tool invocation.
+ PostToolUse TriggerType = "post-tool-use"
+ // SessionStart fires when an AI session begins.
+ SessionStart TriggerType = "session-start"
+ // SessionEnd fires when an AI session ends.
+ SessionEnd TriggerType = "session-end"
+ // FileSave fires when a file is saved.
+ FileSave TriggerType = "file-save"
+ // ContextAdd fires when context is added.
+ ContextAdd TriggerType = "context-add"
+)
diff --git a/internal/drift/check.go b/internal/drift/check.go
index c359ec1d3..ba6eb8ba9 100644
--- a/internal/drift/check.go
+++ b/internal/drift/check.go
@@ -15,6 +15,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
readTpl "github.com/ActiveMemory/ctx/internal/assets/read/template"
cfgCtx "github.com/ActiveMemory/ctx/internal/config/ctx"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/file"
"github.com/ActiveMemory/ctx/internal/config/marker"
@@ -81,7 +82,7 @@ func checkPathReferences(ctx *entity.Context, report *Report) {
report.Warnings = append(report.Warnings, Issue{
File: f.Name,
Line: lineNum + 1,
- Type: IssueDeadPath,
+ Type: cfgDrift.IssueDeadPath,
Message: desc.Text(text.DescKeyDriftDeadPath),
Path: path,
})
@@ -92,7 +93,7 @@ func checkPathReferences(ctx *entity.Context, report *Report) {
}
if !foundDeadPaths {
- report.Passed = append(report.Passed, CheckPathReferences)
+ report.Passed = append(report.Passed, cfgDrift.CheckPathReferences)
}
}
@@ -113,7 +114,7 @@ func checkStaleness(ctx *entity.Context, report *Report) {
if completedCount > 10 {
report.Warnings = append(report.Warnings, Issue{
File: f.Name,
- Type: IssueStaleness,
+ Type: cfgDrift.IssueStaleness,
Message: desc.Text(text.DescKeyDriftStaleness),
Path: "",
})
@@ -122,7 +123,7 @@ func checkStaleness(ctx *entity.Context, report *Report) {
}
if !staleness {
- report.Passed = append(report.Passed, CheckStaleness)
+ report.Passed = append(report.Passed, cfgDrift.CheckStaleness)
}
}
@@ -174,9 +175,9 @@ func checkConstitution(_ *entity.Context, report *Report) {
if len(content) > 0 && !templateFile(content) {
report.Violations = append(report.Violations, Issue{
File: entry.Name(),
- Type: IssueSecret,
+ Type: cfgDrift.IssueSecret,
Message: desc.Text(text.DescKeyDriftSecret),
- Rule: RuleNoSecrets,
+ Rule: cfgDrift.RuleNoSecrets,
})
foundViolation = true
}
@@ -185,7 +186,7 @@ func checkConstitution(_ *entity.Context, report *Report) {
}
if !foundViolation {
- report.Passed = append(report.Passed, CheckConstitution)
+ report.Passed = append(report.Passed, cfgDrift.CheckConstitution)
}
}
@@ -208,7 +209,7 @@ func checkRequiredFiles(ctx *entity.Context, report *Report) {
if !existingFiles[name] {
report.Warnings = append(report.Warnings, Issue{
File: name,
- Type: IssueMissing,
+ Type: cfgDrift.IssueMissing,
Message: desc.Text(text.DescKeyDriftMissingFile),
})
allPresent = false
@@ -216,7 +217,7 @@ func checkRequiredFiles(ctx *entity.Context, report *Report) {
}
if allPresent {
- report.Passed = append(report.Passed, CheckRequiredFiles)
+ report.Passed = append(report.Passed, cfgDrift.CheckRequiredFiles)
}
}
@@ -254,7 +255,7 @@ func checkFileAge(ctx *entity.Context, report *Report) {
days := int(time.Since(f.ModTime).Hours() / cfgTime.HoursPerDay)
report.Warnings = append(report.Warnings, Issue{
File: f.Name,
- Type: IssueStaleAge,
+ Type: cfgDrift.IssueStaleAge,
Message: fmt.Sprintf(desc.Text(text.DescKeyDriftStaleAge), days),
})
foundStale = true
@@ -262,7 +263,7 @@ func checkFileAge(ctx *entity.Context, report *Report) {
}
if !foundStale {
- report.Passed = append(report.Passed, CheckFileAge)
+ report.Passed = append(report.Passed, cfgDrift.CheckFileAge)
}
}
@@ -297,7 +298,7 @@ func checkEntryCount(ctx *entity.Context, report *Report) {
if len(blocks) > c.threshold {
report.Warnings = append(report.Warnings, Issue{
File: f.Name,
- Type: IssueEntryCount,
+ Type: cfgDrift.IssueEntryCount,
Message: fmt.Sprintf(
desc.Text(text.DescKeyDriftEntryCount),
len(blocks), c.threshold,
@@ -308,7 +309,7 @@ func checkEntryCount(ctx *entity.Context, report *Report) {
}
if !found {
- report.Passed = append(report.Passed, CheckEntryCount)
+ report.Passed = append(report.Passed, cfgDrift.CheckEntryCount)
}
}
@@ -352,7 +353,7 @@ func checkMissingPackages(ctx *entity.Context, report *Report) {
if !referenced[pkg] {
report.Warnings = append(report.Warnings, Issue{
File: f.Name,
- Type: IssueMissingPackage,
+ Type: cfgDrift.IssueMissingPackage,
Message: fmt.Sprintf(
desc.Text(text.DescKeyDriftMissingPackage), pkg,
),
@@ -363,7 +364,7 @@ func checkMissingPackages(ctx *entity.Context, report *Report) {
}
if !found {
- report.Passed = append(report.Passed, CheckMissingPackages)
+ report.Passed = append(report.Passed, cfgDrift.CheckMissingPackages)
}
}
@@ -416,7 +417,7 @@ func checkTemplateHeaders(ctx *entity.Context, report *Report) {
report.Warnings = append(report.Warnings, Issue{
File: f.Name,
- Type: IssueStaleHeader,
+ Type: cfgDrift.IssueStaleHeader,
Message: fmt.Sprintf(
desc.Text(text.DescKeyDriftStaleHeader), f.Name,
),
@@ -425,6 +426,6 @@ func checkTemplateHeaders(ctx *entity.Context, report *Report) {
}
if !found {
- report.Passed = append(report.Passed, CheckTemplateHeaders)
+ report.Passed = append(report.Passed, cfgDrift.CheckTemplateHeaders)
}
}
diff --git a/internal/drift/check_ext.go b/internal/drift/check_ext.go
index 7ff0bab89..e14d49bfd 100644
--- a/internal/drift/check_ext.go
+++ b/internal/drift/check_ext.go
@@ -13,6 +13,7 @@ import (
"slices"
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/file"
"github.com/ActiveMemory/ctx/internal/config/fs"
@@ -23,7 +24,13 @@ import (
)
// supportedTools lists the valid tool identifiers for ctx.
-var supportedTools = []string{"claude", "cursor", "cline", "kiro", "codex"}
+var supportedTools = []string{
+ cfgHook.ToolClaude,
+ cfgHook.ToolCursor,
+ cfgHook.ToolCline,
+ cfgHook.ToolKiro,
+ cfgHook.ToolCodex,
+}
// checkSteeringTools validates that all steering files reference only
// supported tool identifiers in their tools list.
@@ -36,7 +43,7 @@ func checkSteeringTools(report *Report) {
files, err := steering.LoadAll(steeringDir)
if err != nil {
// Directory doesn't exist or can't be read — skip silently.
- report.Passed = append(report.Passed, CheckSteeringTools)
+ report.Passed = append(report.Passed, cfgDrift.CheckSteeringTools)
return
}
@@ -46,7 +53,7 @@ func checkSteeringTools(report *Report) {
if !slices.Contains(supportedTools, tool) {
report.Warnings = append(report.Warnings, Issue{
File: filepath.Base(sf.Path),
- Type: IssueInvalidTool,
+ Type: cfgDrift.IssueInvalidTool,
Message: fmt.Sprintf(
desc.Text(text.DescKeyDriftInvalidTool), tool,
),
@@ -57,7 +64,7 @@ func checkSteeringTools(report *Report) {
}
if !found {
- report.Passed = append(report.Passed, CheckSteeringTools)
+ report.Passed = append(report.Passed, cfgDrift.CheckSteeringTools)
}
}
@@ -73,7 +80,7 @@ func checkHookPerms(report *Report) {
// We don't use trigger.Discover here because it skips non-executable scripts.
found := false
for _, ht := range trigger.ValidTypes() {
- typeDir := filepath.Join(hooksDir, string(ht))
+ typeDir := filepath.Join(hooksDir, ht)
entries, readErr := os.ReadDir(typeDir)
if readErr != nil {
continue
@@ -88,8 +95,8 @@ func checkHookPerms(report *Report) {
}
if info.Mode().Perm()&fs.ExecBitMask == 0 {
report.Warnings = append(report.Warnings, Issue{
- File: filepath.Join(string(ht), e.Name()),
- Type: IssueHookNoExec,
+ File: filepath.Join(ht, e.Name()),
+ Type: cfgDrift.IssueHookNoExec,
Message: desc.Text(text.DescKeyDriftHookNoExec),
Path: filepath.Join(typeDir, e.Name()),
})
@@ -99,7 +106,7 @@ func checkHookPerms(report *Report) {
}
if !found {
- report.Passed = append(report.Passed, CheckHookPerms)
+ report.Passed = append(report.Passed, cfgDrift.CheckHookPerms)
}
}
@@ -115,18 +122,18 @@ func checkSyncStaleness(report *Report) {
files, err := steering.LoadAll(steeringDir)
if err != nil {
// No steering files — nothing to check.
- report.Passed = append(report.Passed, CheckSyncStaleness)
+ report.Passed = append(report.Passed, cfgDrift.CheckSyncStaleness)
return
}
if len(files) == 0 {
- report.Passed = append(report.Passed, CheckSyncStaleness)
+ report.Passed = append(report.Passed, cfgDrift.CheckSyncStaleness)
return
}
cwd, cwdErr := os.Getwd()
if cwdErr != nil {
- report.Passed = append(report.Passed, CheckSyncStaleness)
+ report.Passed = append(report.Passed, cfgDrift.CheckSyncStaleness)
return
}
@@ -141,7 +148,7 @@ func checkSyncStaleness(report *Report) {
for _, name := range stale {
report.Warnings = append(report.Warnings, Issue{
File: name,
- Type: IssueStaleSyncFile,
+ Type: cfgDrift.IssueStaleSyncFile,
Message: desc.Text(text.DescKeyDriftStaleSyncFile),
Path: fmt.Sprintf(
desc.Text(text.DescKeyDriftToolSuffix),
@@ -152,7 +159,7 @@ func checkSyncStaleness(report *Report) {
}
if !found {
- report.Passed = append(report.Passed, CheckSyncStaleness)
+ report.Passed = append(report.Passed, cfgDrift.CheckSyncStaleness)
}
}
@@ -166,14 +173,14 @@ func checkRCTool(report *Report) {
// Empty tool field is valid — it means no tool is configured.
if tool == "" {
- report.Passed = append(report.Passed, CheckRCTool)
+ report.Passed = append(report.Passed, cfgDrift.CheckRCTool)
return
}
if !slices.Contains(supportedTools, tool) {
report.Warnings = append(report.Warnings, Issue{
File: file.CtxRC,
- Type: IssueInvalidTool,
+ Type: cfgDrift.IssueInvalidTool,
Message: fmt.Sprintf(
desc.Text(text.DescKeyDriftInvalidTool), tool,
),
@@ -181,5 +188,5 @@ func checkRCTool(report *Report) {
return
}
- report.Passed = append(report.Passed, CheckRCTool)
+ report.Passed = append(report.Passed, cfgDrift.CheckRCTool)
}
diff --git a/internal/drift/check_ext_test.go b/internal/drift/check_ext_test.go
index 83bebd18d..40e20f37f 100644
--- a/internal/drift/check_ext_test.go
+++ b/internal/drift/check_ext_test.go
@@ -12,6 +12,7 @@ import (
"path/filepath"
"testing"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/rc"
"github.com/ActiveMemory/ctx/internal/steering"
)
@@ -95,7 +96,7 @@ func TestCheckSteeringTools(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkSteeringTools(report)
@@ -108,12 +109,12 @@ func TestCheckSteeringTools(t *testing.T) {
}
for _, w := range report.Warnings {
- if w.Type != IssueInvalidTool {
- t.Errorf("expected issue type %q, got %q", IssueInvalidTool, w.Type)
+ if w.Type != cfgDrift.IssueInvalidTool {
+ t.Errorf("expected issue type %q, got %q", cfgDrift.IssueInvalidTool, w.Type)
}
}
- passed := checkPassed(report, CheckSteeringTools)
+ passed := checkPassed(report, cfgDrift.CheckSteeringTools)
if passed != tt.wantPassed {
t.Errorf("expected passed=%v, got passed=%v", tt.wantPassed, passed)
}
@@ -191,7 +192,7 @@ func TestCheckHookPerms(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkHookPerms(report)
@@ -204,12 +205,12 @@ func TestCheckHookPerms(t *testing.T) {
}
for _, w := range report.Warnings {
- if w.Type != IssueHookNoExec {
- t.Errorf("expected issue type %q, got %q", IssueHookNoExec, w.Type)
+ if w.Type != cfgDrift.IssueHookNoExec {
+ t.Errorf("expected issue type %q, got %q", cfgDrift.IssueHookNoExec, w.Type)
}
}
- passed := checkPassed(report, CheckHookPerms)
+ passed := checkPassed(report, cfgDrift.CheckHookPerms)
if passed != tt.wantPassed {
t.Errorf("expected passed=%v, got passed=%v", tt.wantPassed, passed)
}
@@ -289,7 +290,7 @@ func TestCheckSyncStaleness(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkSyncStaleness(report)
@@ -302,12 +303,12 @@ func TestCheckSyncStaleness(t *testing.T) {
}
for _, w := range report.Warnings {
- if w.Type != IssueStaleSyncFile {
- t.Errorf("expected issue type %q, got %q", IssueStaleSyncFile, w.Type)
+ if w.Type != cfgDrift.IssueStaleSyncFile {
+ t.Errorf("expected issue type %q, got %q", cfgDrift.IssueStaleSyncFile, w.Type)
}
}
- passed := checkPassed(report, CheckSyncStaleness)
+ passed := checkPassed(report, cfgDrift.CheckSyncStaleness)
if passed != tt.wantPassed {
t.Errorf("expected passed=%v, got passed=%v", tt.wantPassed, passed)
}
@@ -356,7 +357,7 @@ func TestCheckRCTool(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkRCTool(report)
@@ -369,12 +370,12 @@ func TestCheckRCTool(t *testing.T) {
}
for _, w := range report.Warnings {
- if w.Type != IssueInvalidTool {
- t.Errorf("expected issue type %q, got %q", IssueInvalidTool, w.Type)
+ if w.Type != cfgDrift.IssueInvalidTool {
+ t.Errorf("expected issue type %q, got %q", cfgDrift.IssueInvalidTool, w.Type)
}
}
- passed := checkPassed(report, CheckRCTool)
+ passed := checkPassed(report, cfgDrift.CheckRCTool)
if passed != tt.wantPassed {
t.Errorf("expected passed=%v, got passed=%v", tt.wantPassed, passed)
}
@@ -384,7 +385,7 @@ func TestCheckRCTool(t *testing.T) {
// --- helpers ---
-func checkPassed(report *Report, check CheckName) bool {
+func checkPassed(report *Report, check cfgDrift.CheckName) bool {
for _, p := range report.Passed {
if p == check {
return true
diff --git a/internal/drift/detector.go b/internal/drift/detector.go
index f688cf6a0..774cf6363 100644
--- a/internal/drift/detector.go
+++ b/internal/drift/detector.go
@@ -7,21 +7,24 @@
// Package drift provides functionality for detecting stale or invalid context.
package drift
-import "github.com/ActiveMemory/ctx/internal/entity"
+import (
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
+ "github.com/ActiveMemory/ctx/internal/entity"
+)
// Status returns the overall status of the report.
//
// Returns:
-// - StatusType: StatusViolation if any violations, StatusWarning if only
-// warnings, StatusOk otherwise
-func (r *Report) Status() StatusType {
+// - cfgDrift.StatusType: StatusViolation if any violations,
+// StatusWarning if only warnings, StatusOk otherwise
+func (r *Report) Status() cfgDrift.StatusType {
if len(r.Violations) > 0 {
- return StatusViolation
+ return cfgDrift.StatusViolation
}
if len(r.Warnings) > 0 {
- return StatusWarning
+ return cfgDrift.StatusWarning
}
- return StatusOk
+ return cfgDrift.StatusOk
}
// Detect runs all drift detection checks on the given context.
@@ -38,7 +41,7 @@ func Detect(ctx *entity.Context) *Report {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
// Check path references in context files
diff --git a/internal/drift/detector_test.go b/internal/drift/detector_test.go
index 82a8d9ea0..a32345a07 100644
--- a/internal/drift/detector_test.go
+++ b/internal/drift/detector_test.go
@@ -13,6 +13,7 @@ import (
"strings"
"testing"
+ cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
"github.com/ActiveMemory/ctx/internal/context/load"
"github.com/ActiveMemory/ctx/internal/entity"
"github.com/ActiveMemory/ctx/internal/io"
@@ -23,34 +24,34 @@ func TestReportStatus(t *testing.T) {
tests := []struct {
name string
report Report
- expected StatusType
+ expected cfgDrift.StatusType
}{
{
name: "no issues",
report: Report{},
- expected: StatusOk,
+ expected: cfgDrift.StatusOk,
},
{
name: "only warnings",
report: Report{
- Warnings: []Issue{{File: "test.md", Type: IssueStaleness}},
+ Warnings: []Issue{{File: "test.md", Type: cfgDrift.IssueStaleness}},
},
- expected: StatusWarning,
+ expected: cfgDrift.StatusWarning,
},
{
name: "only violations",
report: Report{
- Violations: []Issue{{File: "test.md", Type: IssueSecret}},
+ Violations: []Issue{{File: "test.md", Type: cfgDrift.IssueSecret}},
},
- expected: StatusViolation,
+ expected: cfgDrift.StatusViolation,
},
{
name: "warnings and violations",
report: Report{
- Warnings: []Issue{{File: "test.md", Type: IssueStaleness}},
- Violations: []Issue{{File: "test.md", Type: IssueSecret}},
+ Warnings: []Issue{{File: "test.md", Type: cfgDrift.IssueStaleness}},
+ Violations: []Issue{{File: "test.md", Type: cfgDrift.IssueSecret}},
},
- expected: StatusViolation,
+ expected: cfgDrift.StatusViolation,
},
}
@@ -192,7 +193,7 @@ func TestCheckPathReferences(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkPathReferences(ctx, report)
@@ -252,7 +253,7 @@ func TestCheckStaleness(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkStaleness(ctx, report)
@@ -305,7 +306,7 @@ func TestCheckRequiredFiles(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkRequiredFiles(ctx, report)
@@ -408,7 +409,7 @@ func TestCheckEntryCount(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkEntryCount(ctx, report)
@@ -422,7 +423,7 @@ func TestCheckEntryCount(t *testing.T) {
passedCheck := false
for _, p := range report.Passed {
- if p == CheckEntryCount {
+ if p == cfgDrift.CheckEntryCount {
passedCheck = true
break
}
@@ -433,8 +434,8 @@ func TestCheckEntryCount(t *testing.T) {
// Verify warning type and message format
for _, w := range report.Warnings {
- if w.Type != IssueEntryCount {
- t.Errorf("expected issue type %q, got %q", IssueEntryCount, w.Type)
+ if w.Type != cfgDrift.IssueEntryCount {
+ t.Errorf("expected issue type %q, got %q", cfgDrift.IssueEntryCount, w.Type)
}
if !strings.Contains(w.Message, "entries (recommended:") {
t.Errorf("unexpected message format: %q", w.Message)
@@ -474,7 +475,7 @@ func TestCheckEntryCountDisabled(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
// With default thresholds (30/20), 100 entries should trigger warnings
@@ -582,7 +583,7 @@ func TestCheckMissingPackages(t *testing.T) {
report := &Report{
Warnings: []Issue{},
Violations: []Issue{},
- Passed: []CheckName{},
+ Passed: []cfgDrift.CheckName{},
}
checkMissingPackages(ctx, report)
@@ -599,7 +600,7 @@ func TestCheckMissingPackages(t *testing.T) {
passedCheck := false
for _, p := range report.Passed {
- if p == CheckMissingPackages {
+ if p == cfgDrift.CheckMissingPackages {
passedCheck = true
break
}
@@ -609,8 +610,8 @@ func TestCheckMissingPackages(t *testing.T) {
}
for _, w := range report.Warnings {
- if w.Type != IssueMissingPackage {
- t.Errorf("expected issue type %q, got %q", IssueMissingPackage, w.Type)
+ if w.Type != cfgDrift.IssueMissingPackage {
+ t.Errorf("expected issue type %q, got %q", cfgDrift.IssueMissingPackage, w.Type)
}
}
diff --git a/internal/drift/types.go b/internal/drift/types.go
index ee105df37..8f5482e3a 100644
--- a/internal/drift/types.go
+++ b/internal/drift/types.go
@@ -6,87 +6,7 @@
package drift
-// IssueType categorizes a drift issue for grouping and filtering.
-type IssueType string
-
-const (
- // IssueDeadPath indicates a file path reference that no longer exists.
- IssueDeadPath IssueType = "dead_path"
- // IssueStaleness indicates accumulated completed tasks needing archival.
- IssueStaleness IssueType = "staleness"
- // IssueSecret indicates a file that may contain secrets or credentials.
- IssueSecret IssueType = "potential_secret"
- // IssueMissing indicates a required context file that does not exist.
- IssueMissing IssueType = "missing_file"
- // IssueStaleAge indicates a context file that hasn't been modified recently.
- IssueStaleAge IssueType = "stale_age"
- // IssueEntryCount indicates a knowledge file has too many entries.
- IssueEntryCount IssueType = "entry_count"
- // IssueMissingPackage indicates an internal package
- // not documented in ARCHITECTURE.md.
- IssueMissingPackage IssueType = "missing_package"
- // IssueStaleHeader indicates a context file whose comment header
- // doesn't match the embedded template.
- IssueStaleHeader IssueType = "stale_header"
- // IssueInvalidTool indicates an unsupported tool identifier in a
- // steering file or .ctxrc configuration.
- IssueInvalidTool IssueType = "invalid_tool"
- // IssueHookNoExec indicates a hook script missing the executable
- // permission bit.
- IssueHookNoExec IssueType = "hook_no_exec"
- // IssueStaleSyncFile indicates a synced tool-native file that is
- // out of date compared to its source steering file.
- IssueStaleSyncFile IssueType = "stale_sync_file"
-)
-
-// StatusType represents the overall status of a drift report.
-type StatusType string
-
-const (
- // StatusOk means no drift was detected.
- StatusOk StatusType = "ok"
- // StatusWarning means non-critical issues were found.
- StatusWarning StatusType = "warning"
- // StatusViolation means constitution violations were found.
- StatusViolation StatusType = "violation"
-)
-
-// CheckName identifies a drift detection check.
-type CheckName string
-
-const (
- // CheckPathReferences validates that file paths in context files exist.
- CheckPathReferences CheckName = "path_references"
- // CheckStaleness detects accumulated completed tasks.
- CheckStaleness CheckName = "staleness_check"
- // CheckConstitution verifies constitution rules are respected.
- CheckConstitution CheckName = "constitution_check"
- // CheckRequiredFiles ensures all required context files are present.
- CheckRequiredFiles CheckName = "required_files"
- // CheckFileAge checks whether context files have been modified recently.
- CheckFileAge CheckName = "file_age_check"
- // CheckEntryCount checks whether knowledge files have excessive entries.
- CheckEntryCount CheckName = "entry_count_check"
- // CheckMissingPackages checks for undocumented internal packages.
- CheckMissingPackages CheckName = "missing_packages"
- // CheckTemplateHeaders checks context file comment headers against templates.
- CheckTemplateHeaders CheckName = "template_headers"
- // CheckSteeringTools validates tool identifiers in steering files.
- CheckSteeringTools CheckName = "steering_tools"
- // CheckHookPerms checks hook scripts for executable permission bits.
- CheckHookPerms CheckName = "hook_permissions"
- // CheckSyncStaleness compares synced tool-native files
- // against source steering files.
- CheckSyncStaleness CheckName = "sync_staleness"
- // CheckRCTool validates the .ctxrc tool field against supported identifiers.
- CheckRCTool CheckName = "rc_tool_field"
-)
-
-// Constitution rule names referenced in drift violations.
-const (
- // RuleNoSecrets is the constitution rule for secret file detection.
- RuleNoSecrets = "no_secrets"
-)
+import cfgDrift "github.com/ActiveMemory/ctx/internal/config/drift"
// Issue represents a detected drift issue.
//
@@ -101,12 +21,12 @@ const (
// - Path: Referenced path that caused the issue, if applicable
// - Rule: Constitution rule that was violated, if applicable
type Issue struct {
- File string `json:"file"`
- Line int `json:"line,omitempty"`
- Type IssueType `json:"type"`
- Message string `json:"message"`
- Path string `json:"path,omitempty"`
- Rule string `json:"rule,omitempty"`
+ File string `json:"file"`
+ Line int `json:"line,omitempty"`
+ Type cfgDrift.IssueType `json:"type"`
+ Message string `json:"message"`
+ Path string `json:"path,omitempty"`
+ Rule string `json:"rule,omitempty"`
}
// Report represents the complete drift detection report.
@@ -118,7 +38,7 @@ type Issue struct {
// - Violations: Critical issues that indicate constitution violations
// - Passed: Names of checks that are completed without issues
type Report struct {
- Warnings []Issue `json:"warnings"`
- Violations []Issue `json:"violations"`
- Passed []CheckName `json:"passed"`
+ Warnings []Issue `json:"warnings"`
+ Violations []Issue `json:"violations"`
+ Passed []cfgDrift.CheckName `json:"passed"`
}
diff --git a/internal/entity/trigger.go b/internal/entity/trigger.go
index af4bb0e84..bfce4ba36 100644
--- a/internal/entity/trigger.go
+++ b/internal/entity/trigger.go
@@ -6,23 +6,10 @@
package entity
-// TriggerType identifies the lifecycle trigger event type.
-type TriggerType string
+import cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
-const (
- // TriggerPreToolUse fires before an AI tool invocation.
- TriggerPreToolUse TriggerType = "pre-tool-use"
- // TriggerPostToolUse fires after an AI tool invocation.
- TriggerPostToolUse TriggerType = "post-tool-use"
- // TriggerSessionStart fires when an AI session begins.
- TriggerSessionStart TriggerType = "session-start"
- // TriggerSessionEnd fires when an AI session ends.
- TriggerSessionEnd TriggerType = "session-end"
- // TriggerFileSave fires when a file is saved.
- TriggerFileSave TriggerType = "file-save"
- // TriggerContextAdd fires when context is added.
- TriggerContextAdd TriggerType = "context-add"
-)
+// TriggerType identifies the lifecycle trigger event type.
+type TriggerType = cfgTrigger.TriggerType
// TriggerInput is the JSON object sent to trigger scripts via stdin.
//
diff --git a/internal/io/prefix.go b/internal/io/prefix.go
deleted file mode 100644
index 5c110d349..000000000
--- a/internal/io/prefix.go
+++ /dev/null
@@ -1,24 +0,0 @@
-// / ctx: https://ctx.ist
-// ,'`./ do you remember?
-// `.,'\
-// \ Copyright 2026-present Context contributors.
-// SPDX-License-Identifier: Apache-2.0
-
-package io
-
-// dangerousPrefixes lists system directories where ctx should never
-// read from or write to. Checked after filepath.Abs resolution.
-var dangerousPrefixes = []string{
- "/bin/",
- "/boot/",
- "/dev/",
- "/etc/",
- "/lib/",
- "/lib64/",
- "/proc/",
- "/sbin/",
- "/sys/",
- "/usr/bin/",
- "/usr/lib/",
- "/usr/sbin/",
-}
diff --git a/internal/io/validate.go b/internal/io/validate.go
index f8fc2c61e..da33d0dcb 100644
--- a/internal/io/validate.go
+++ b/internal/io/validate.go
@@ -12,6 +12,7 @@ import (
"strings"
cfgHTTP "github.com/ActiveMemory/ctx/internal/config/http"
+ cfgIo "github.com/ActiveMemory/ctx/internal/config/io"
"github.com/ActiveMemory/ctx/internal/config/token"
errFs "github.com/ActiveMemory/ctx/internal/err/fs"
errHTTP "github.com/ActiveMemory/ctx/internal/err/http"
@@ -29,7 +30,7 @@ func rejectDangerousPath(absPath string) error {
if absPath == token.Slash {
return errFs.RefuseSystemPathRoot()
}
- for _, prefix := range dangerousPrefixes {
+ for _, prefix := range cfgIo.DangerousPrefixes {
if strings.HasPrefix(absPath, prefix) {
return errFs.RefuseSystemPath(absPath)
}
diff --git a/internal/mcp/handler/session_hooks.go b/internal/mcp/handler/session_hooks.go
index 37b565b78..a199e3867 100644
--- a/internal/mcp/handler/session_hooks.go
+++ b/internal/mcp/handler/session_hooks.go
@@ -9,26 +9,15 @@ package handler
import (
"time"
+ "github.com/ActiveMemory/ctx/internal/assets/read/desc"
+ "github.com/ActiveMemory/ctx/internal/config/embed/text"
+ "github.com/ActiveMemory/ctx/internal/config/mcp/field"
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
"github.com/ActiveMemory/ctx/internal/entity"
"github.com/ActiveMemory/ctx/internal/rc"
"github.com/ActiveMemory/ctx/internal/trigger"
)
-// Trigger result messages.
-const (
- // msgHooksDisabled is returned when triggers are not enabled.
- msgHooksDisabled = "Hooks disabled."
- // msgSessionStartOK is returned when start triggers produce no
- // additional context.
- msgSessionStartOK = "Session start hooks executed. " +
- "No additional context."
- // msgSessionEndOK is returned when end triggers produce no
- // additional context.
- msgSessionEndOK = "Session end hooks executed."
- // paramSummary is the parameter key for session summary.
- paramSummary = "summary"
-)
-
// SessionStartHooks executes session-start triggers and returns
// aggregated context.
//
@@ -40,20 +29,20 @@ const (
// - error: trigger discovery or execution error
func (h *Handler) SessionStartHooks() (string, error) {
if !rc.HooksEnabled() {
- return msgHooksDisabled, nil
+ return desc.Text(text.DescKeyMCPHooksDisabled), nil
}
hooksDir := rc.HooksDir()
timeout := time.Duration(rc.HookTimeout()) * time.Second
input := &entity.TriggerInput{
- TriggerType: string(entity.TriggerSessionStart),
+ TriggerType: cfgTrigger.SessionStart,
Parameters: map[string]any{},
Timestamp: time.Now().UTC().Format(time.RFC3339),
}
agg, runErr := trigger.RunAll(
- hooksDir, trigger.SessionStart, input, timeout,
+ hooksDir, cfgTrigger.SessionStart, input, timeout,
)
if runErr != nil {
return "", runErr
@@ -64,7 +53,7 @@ func (h *Handler) SessionStartHooks() (string, error) {
}
if agg.Context == "" {
- return msgSessionStartOK, nil
+ return desc.Text(text.DescKeyMCPSessionStartOK), nil
}
return agg.Context, nil
@@ -84,7 +73,7 @@ func (h *Handler) SessionStartHooks() (string, error) {
// - error: trigger discovery or execution error
func (h *Handler) SessionEndHooks(summary string) (string, error) {
if !rc.HooksEnabled() {
- return msgHooksDisabled, nil
+ return desc.Text(text.DescKeyMCPHooksDisabled), nil
}
hooksDir := rc.HooksDir()
@@ -92,17 +81,17 @@ func (h *Handler) SessionEndHooks(summary string) (string, error) {
params := map[string]any{}
if summary != "" {
- params[paramSummary] = summary
+ params[field.Summary] = summary
}
input := &entity.TriggerInput{
- TriggerType: string(entity.TriggerSessionEnd),
+ TriggerType: cfgTrigger.SessionEnd,
Parameters: params,
Timestamp: time.Now().UTC().Format(time.RFC3339),
}
agg, runErr := trigger.RunAll(
- hooksDir, trigger.SessionEnd, input, timeout,
+ hooksDir, cfgTrigger.SessionEnd, input, timeout,
)
if runErr != nil {
return "", runErr
@@ -113,7 +102,7 @@ func (h *Handler) SessionEndHooks(summary string) (string, error) {
}
if agg.Context == "" {
- return msgSessionEndOK, nil
+ return desc.Text(text.DescKeyMCPSessionEndOK), nil
}
return agg.Context, nil
diff --git a/internal/mcp/handler/steering.go b/internal/mcp/handler/steering.go
index 1d04586b6..2eb29053b 100644
--- a/internal/mcp/handler/steering.go
+++ b/internal/mcp/handler/steering.go
@@ -22,14 +22,6 @@ import (
"github.com/ActiveMemory/ctx/internal/steering"
)
-// Steering result messages.
-const (
- // msgNoSteeringFiles is returned when no steering files exist.
- msgNoSteeringFiles = "No steering files found."
- // msgNoMatchingSteering is returned when no files match.
- msgNoMatchingSteering = "No matching steering files."
-)
-
// SteeringGet returns applicable steering files for the given prompt.
// If prompt is empty, returns only "always" inclusion files.
//
@@ -45,19 +37,19 @@ func (h *Handler) SteeringGet(prompt string) (string, error) {
files, loadErr := steering.LoadAll(steeringDir)
if loadErr != nil {
if errors.Is(loadErr, os.ErrNotExist) {
- return msgNoSteeringFiles, nil
+ return desc.Text(text.DescKeyMCPSteeringNoFiles), nil
}
return "", loadErr
}
if len(files) == 0 {
- return msgNoSteeringFiles, nil
+ return desc.Text(text.DescKeyMCPSteeringNoFiles), nil
}
filtered := steering.Filter(files, prompt, nil, "")
if len(filtered) == 0 {
- return msgNoMatchingSteering, nil
+ return desc.Text(text.DescKeyMCPSteeringNoMatch), nil
}
var sb strings.Builder
diff --git a/internal/mcp/proto/schema.go b/internal/mcp/proto/schema.go
index c36b98f7a..f98c949a3 100644
--- a/internal/mcp/proto/schema.go
+++ b/internal/mcp/proto/schema.go
@@ -78,9 +78,6 @@ const (
ErrCodeInternal = -32603
)
-// ProtocolVersion is the MCP protocol version.
-const ProtocolVersion = "2024-11-05"
-
// --- Initialization types ---
// InitializeParams contains client information sent during initialization.
diff --git a/internal/mcp/server/route/initialize/dispatch.go b/internal/mcp/server/route/initialize/dispatch.go
index ea2e32b70..eda657ba3 100644
--- a/internal/mcp/server/route/initialize/dispatch.go
+++ b/internal/mcp/server/route/initialize/dispatch.go
@@ -7,6 +7,7 @@
package initialize
import (
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/config/mcp/server"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
"github.com/ActiveMemory/ctx/internal/mcp/server/out"
@@ -22,7 +23,7 @@ import (
// - *proto.Response: server capabilities and protocol version
func Dispatch(version string, req proto.Request) *proto.Response {
return out.OkResponse(req.ID, proto.InitializeResult{
- ProtocolVersion: proto.ProtocolVersion,
+ ProtocolVersion: cfgSchema.ProtocolVersion,
Capabilities: proto.ServerCaps{
Resources: &proto.ResourcesCap{Subscribe: true},
Tools: &proto.ToolsCap{},
diff --git a/internal/mcp/server/server_test.go b/internal/mcp/server/server_test.go
index a376fcf48..e2d283cda 100644
--- a/internal/mcp/server/server_test.go
+++ b/internal/mcp/server/server_test.go
@@ -17,6 +17,7 @@ import (
"time"
"github.com/ActiveMemory/ctx/internal/config/ctx"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
mcpIO "github.com/ActiveMemory/ctx/internal/mcp/server/io"
)
@@ -99,7 +100,7 @@ func request(
func TestInitialize(t *testing.T) {
srv, _ := newTestServer(t)
resp := request(t, srv, "initialize", proto.InitializeParams{
- ProtocolVersion: proto.ProtocolVersion,
+ ProtocolVersion: cfgSchema.ProtocolVersion,
ClientInfo: proto.AppInfo{Name: "test", Version: "1.0"},
})
if resp.Error != nil {
@@ -110,10 +111,10 @@ func TestInitialize(t *testing.T) {
if err := json.Unmarshal(raw, &result); err != nil {
t.Fatalf("unmarshal result: %v", err)
}
- if result.ProtocolVersion != proto.ProtocolVersion {
+ if result.ProtocolVersion != cfgSchema.ProtocolVersion {
t.Errorf(
"protocol version = %q, want %q",
- result.ProtocolVersion, proto.ProtocolVersion,
+ result.ProtocolVersion, cfgSchema.ProtocolVersion,
)
}
if result.ServerInfo.Name != "ctx" {
diff --git a/internal/rc/default.go b/internal/rc/default.go
index 28c40d8f7..39a730932 100644
--- a/internal/rc/default.go
+++ b/internal/rc/default.go
@@ -6,7 +6,10 @@
package rc
-import "github.com/ActiveMemory/ctx/internal/config/runtime"
+import (
+ cfgDir "github.com/ActiveMemory/ctx/internal/config/dir"
+ "github.com/ActiveMemory/ctx/internal/config/runtime"
+)
// Aliases re-exported from config/runtime for use within rc.
const (
@@ -35,9 +38,9 @@ const (
// Hooks & Steering defaults.
const (
// DefaultSteeringDir is the default steering directory path.
- DefaultSteeringDir = ".context/steering"
+ DefaultSteeringDir = cfgDir.DefaultSteeringPath
// DefaultHooksDir is the default hooks directory path.
- DefaultHooksDir = ".context/hooks"
+ DefaultHooksDir = cfgDir.DefaultHooksPath
// DefaultHookTimeout is the default per-hook execution timeout in seconds.
DefaultHookTimeout = 10
)
diff --git a/internal/skill/install.go b/internal/skill/install.go
index e8a23e06a..5dceb7c1c 100644
--- a/internal/skill/install.go
+++ b/internal/skill/install.go
@@ -11,6 +11,7 @@ import (
"path/filepath"
"github.com/ActiveMemory/ctx/internal/config/fs"
+ cfgSkill "github.com/ActiveMemory/ctx/internal/config/skill"
errSkill "github.com/ActiveMemory/ctx/internal/err/skill"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
)
@@ -19,7 +20,7 @@ import (
// The source must contain a valid SKILL.md with parseable YAML frontmatter.
// The skill name is derived from the parsed frontmatter.
func Install(source, skillsDir string) (*Skill, error) {
- manifestPath := filepath.Join(source, skillManifest)
+ manifestPath := filepath.Join(source, cfgSkill.SkillManifest)
data, readErr := ctxIo.SafeReadUserFile(manifestPath)
if readErr != nil {
@@ -28,11 +29,11 @@ func Install(source, skillsDir string) (*Skill, error) {
sk, parseErr := parseManifest(data, filepath.Base(source), source)
if parseErr != nil {
- return nil, errSkill.InvalidManifest(skillManifest, parseErr)
+ return nil, errSkill.InvalidManifest(cfgSkill.SkillManifest, parseErr)
}
if sk.Name == "" {
- return nil, errSkill.MissingName(skillManifest)
+ return nil, errSkill.MissingName(cfgSkill.SkillManifest)
}
destDir := filepath.Join(skillsDir, sk.Name)
diff --git a/internal/skill/install_test.go b/internal/skill/install_test.go
index 2e177d37e..1034f9207 100644
--- a/internal/skill/install_test.go
+++ b/internal/skill/install_test.go
@@ -10,6 +10,8 @@ import (
"os"
"path/filepath"
"testing"
+
+ cfgSkill "github.com/ActiveMemory/ctx/internal/config/skill"
)
func TestInstall_ValidSkill(t *testing.T) {
@@ -23,7 +25,7 @@ description: A test skill
# Instructions
Do the thing.
`
- if err := os.WriteFile(filepath.Join(source, skillManifest), []byte(manifest), 0o644); err != nil {
+ if err := os.WriteFile(filepath.Join(source, cfgSkill.SkillManifest), []byte(manifest), 0o644); err != nil {
t.Fatal(err)
}
// Add an extra file to verify full directory copy.
@@ -47,7 +49,7 @@ Do the thing.
}
// Verify SKILL.md was copied.
- if _, err := os.Stat(filepath.Join(skillsDir, "test-skill", skillManifest)); err != nil {
+ if _, err := os.Stat(filepath.Join(skillsDir, "test-skill", cfgSkill.SkillManifest)); err != nil {
t.Errorf("SKILL.md not copied: %v", err)
}
// Verify extra file was copied.
@@ -65,7 +67,7 @@ name: nested-skill
---
Body
`
- if err := os.WriteFile(filepath.Join(source, skillManifest), []byte(manifest), 0o644); err != nil {
+ if err := os.WriteFile(filepath.Join(source, cfgSkill.SkillManifest), []byte(manifest), 0o644); err != nil {
t.Fatal(err)
}
subDir := filepath.Join(source, "templates")
@@ -110,7 +112,7 @@ name: [broken yaml
---
Body
`
- if err := os.WriteFile(filepath.Join(source, skillManifest), []byte(manifest), 0o644); err != nil {
+ if err := os.WriteFile(filepath.Join(source, cfgSkill.SkillManifest), []byte(manifest), 0o644); err != nil {
t.Fatal(err)
}
@@ -129,7 +131,7 @@ description: no name field
---
Body
`
- if err := os.WriteFile(filepath.Join(source, skillManifest), []byte(manifest), 0o644); err != nil {
+ if err := os.WriteFile(filepath.Join(source, cfgSkill.SkillManifest), []byte(manifest), 0o644); err != nil {
t.Fatal(err)
}
@@ -143,7 +145,7 @@ func TestInstall_NoFrontmatterDelimiters(t *testing.T) {
source := t.TempDir()
skillsDir := t.TempDir()
- if err := os.WriteFile(filepath.Join(source, skillManifest), []byte("just plain text"), 0o644); err != nil {
+ if err := os.WriteFile(filepath.Join(source, cfgSkill.SkillManifest), []byte("just plain text"), 0o644); err != nil {
t.Fatal(err)
}
diff --git a/internal/skill/load.go b/internal/skill/load.go
index 79882cd86..6d8c4d25c 100644
--- a/internal/skill/load.go
+++ b/internal/skill/load.go
@@ -11,20 +11,11 @@ import (
"os"
"path/filepath"
+ cfgSkill "github.com/ActiveMemory/ctx/internal/config/skill"
errSkill "github.com/ActiveMemory/ctx/internal/err/skill"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
)
-// skillManifest is the expected filename inside each skill directory.
-const skillManifest = "SKILL.md"
-
-// frontmatterDelimiter is the YAML frontmatter boundary marker.
-const frontmatterDelimiter = "---"
-
-// trimCR is the character set trimmed from the start of
-// raw frontmatter content to normalize line endings.
-const trimCR = "\n\r"
-
// LoadAll reads all installed skills from subdirectories of skillsDir.
// Each subdirectory must contain a SKILL.md file with YAML frontmatter.
// Returns an empty slice without error if the skills directory does not exist.
@@ -55,7 +46,7 @@ func LoadAll(skillsDir string) ([]*Skill, error) {
// The name corresponds to a subdirectory containing a SKILL.md file.
func Load(skillsDir, name string) (*Skill, error) {
dir := filepath.Join(skillsDir, name)
- manifestPath := filepath.Join(dir, skillManifest)
+ manifestPath := filepath.Join(dir, cfgSkill.SkillManifest)
data, readErr := ctxIo.SafeReadUserFile(manifestPath)
if readErr != nil {
diff --git a/internal/skill/load_test.go b/internal/skill/load_test.go
index c923769b2..5c6bbaba2 100644
--- a/internal/skill/load_test.go
+++ b/internal/skill/load_test.go
@@ -10,6 +10,8 @@ import (
"os"
"path/filepath"
"testing"
+
+ cfgSkill "github.com/ActiveMemory/ctx/internal/config/skill"
)
func writeSkillManifest(t *testing.T, dir, name, content string) {
@@ -18,7 +20,7 @@ func writeSkillManifest(t *testing.T, dir, name, content string) {
if err := os.MkdirAll(skillDir, 0o755); err != nil {
t.Fatal(err)
}
- if err := os.WriteFile(filepath.Join(skillDir, skillManifest), []byte(content), 0o644); err != nil {
+ if err := os.WriteFile(filepath.Join(skillDir, cfgSkill.SkillManifest), []byte(content), 0o644); err != nil {
t.Fatal(err)
}
}
diff --git a/internal/skill/manifest.go b/internal/skill/manifest.go
index 8c4330ae7..5b263a417 100644
--- a/internal/skill/manifest.go
+++ b/internal/skill/manifest.go
@@ -38,17 +38,17 @@ func parseManifest(data []byte, name, dir string) (*Skill, error) {
func splitFrontmatter(
data []byte,
) (frontmatter []byte, body string, err error) {
- content := strings.TrimLeft(string(data), trimCR)
+ content := strings.TrimLeft(string(data), token.TrimCR)
- if !strings.HasPrefix(content, frontmatterDelimiter) {
+ if !strings.HasPrefix(content, token.FrontmatterDelimiter) {
return nil, "", errSkill.MissingOpeningDelimiter()
}
// Skip the opening delimiter line.
- rest := content[len(frontmatterDelimiter):]
+ rest := content[len(token.FrontmatterDelimiter):]
rest = strings.TrimPrefix(rest, token.NewlineLF)
- needle := token.NewlineLF + frontmatterDelimiter
+ needle := token.NewlineLF + token.FrontmatterDelimiter
idx := strings.Index(rest, needle)
if idx < 0 {
return nil, "", errSkill.MissingClosingDelimiter()
@@ -57,7 +57,7 @@ func splitFrontmatter(
fm := rest[:idx]
// Skip past the closing delimiter line.
- after := rest[idx+1+len(frontmatterDelimiter):]
+ after := rest[idx+1+len(token.FrontmatterDelimiter):]
// Trim exactly one leading newline from the body if present.
after = strings.TrimPrefix(after, token.NewlineLF)
diff --git a/internal/steering/filter_test.go b/internal/steering/filter_test.go
index 0db3a61e3..c33c4589f 100644
--- a/internal/steering/filter_test.go
+++ b/internal/steering/filter_test.go
@@ -7,12 +7,13 @@
package steering
import (
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"testing"
)
func TestFilter_AlwaysIncludedRegardlessOfPrompt(t *testing.T) {
files := []*SteeringFile{
- {Name: "always-on", Inclusion: InclusionAlways, Priority: 50},
+ {Name: "always-on", Inclusion: cfgSteering.InclusionAlways, Priority: 50},
}
got := Filter(files, "", nil, "")
@@ -28,7 +29,7 @@ func TestFilter_AlwaysIncludedRegardlessOfPrompt(t *testing.T) {
func TestFilter_AutoIncludedWhenPromptMatchesDescription(t *testing.T) {
files := []*SteeringFile{
- {Name: "api-rules", Inclusion: InclusionAuto, Description: "REST API", Priority: 50},
+ {Name: "api-rules", Inclusion: cfgSteering.InclusionAuto, Description: "REST API", Priority: 50},
}
got := Filter(files, "I need help with REST API design", nil, "")
@@ -45,7 +46,7 @@ func TestFilter_AutoIncludedWhenPromptMatchesDescription(t *testing.T) {
func TestFilter_AutoExcludedWhenPromptDoesNotMatch(t *testing.T) {
files := []*SteeringFile{
- {Name: "api-rules", Inclusion: InclusionAuto, Description: "REST API", Priority: 50},
+ {Name: "api-rules", Inclusion: cfgSteering.InclusionAuto, Description: "REST API", Priority: 50},
}
got := Filter(files, "fix the database migration", nil, "")
@@ -56,7 +57,7 @@ func TestFilter_AutoExcludedWhenPromptDoesNotMatch(t *testing.T) {
func TestFilter_ManualIncludedOnlyWhenNamed(t *testing.T) {
files := []*SteeringFile{
- {Name: "security", Inclusion: InclusionManual, Priority: 50},
+ {Name: "security", Inclusion: cfgSteering.InclusionManual, Priority: 50},
}
got := Filter(files, "anything", nil, "")
@@ -72,9 +73,9 @@ func TestFilter_ManualIncludedOnlyWhenNamed(t *testing.T) {
func TestFilter_PriorityOrdering(t *testing.T) {
files := []*SteeringFile{
- {Name: "low", Inclusion: InclusionAlways, Priority: 90},
- {Name: "high", Inclusion: InclusionAlways, Priority: 10},
- {Name: "mid", Inclusion: InclusionAlways, Priority: 50},
+ {Name: "low", Inclusion: cfgSteering.InclusionAlways, Priority: 90},
+ {Name: "high", Inclusion: cfgSteering.InclusionAlways, Priority: 10},
+ {Name: "mid", Inclusion: cfgSteering.InclusionAlways, Priority: 50},
}
got := Filter(files, "", nil, "")
@@ -91,9 +92,9 @@ func TestFilter_PriorityOrdering(t *testing.T) {
func TestFilter_AlphabeticalTieBreaking(t *testing.T) {
files := []*SteeringFile{
- {Name: "charlie", Inclusion: InclusionAlways, Priority: 50},
- {Name: "alpha", Inclusion: InclusionAlways, Priority: 50},
- {Name: "bravo", Inclusion: InclusionAlways, Priority: 50},
+ {Name: "charlie", Inclusion: cfgSteering.InclusionAlways, Priority: 50},
+ {Name: "alpha", Inclusion: cfgSteering.InclusionAlways, Priority: 50},
+ {Name: "bravo", Inclusion: cfgSteering.InclusionAlways, Priority: 50},
}
got := Filter(files, "", nil, "")
@@ -110,7 +111,7 @@ func TestFilter_AlphabeticalTieBreaking(t *testing.T) {
func TestFilter_ToolFilterExcludesNonMatchingTool(t *testing.T) {
files := []*SteeringFile{
- {Name: "cursor-only", Inclusion: InclusionAlways, Priority: 50, Tools: []string{"claude", "cursor"}},
+ {Name: "cursor-only", Inclusion: cfgSteering.InclusionAlways, Priority: 50, Tools: []string{"claude", "cursor"}},
}
got := Filter(files, "", nil, "kiro")
@@ -121,7 +122,7 @@ func TestFilter_ToolFilterExcludesNonMatchingTool(t *testing.T) {
func TestFilter_EmptyToolsListIncludedForAnyTool(t *testing.T) {
files := []*SteeringFile{
- {Name: "universal", Inclusion: InclusionAlways, Priority: 50, Tools: nil},
+ {Name: "universal", Inclusion: cfgSteering.InclusionAlways, Priority: 50, Tools: nil},
}
got := Filter(files, "", nil, "kiro")
@@ -132,8 +133,8 @@ func TestFilter_EmptyToolsListIncludedForAnyTool(t *testing.T) {
func TestFilter_EmptyToolParameterSkipsToolFiltering(t *testing.T) {
files := []*SteeringFile{
- {Name: "restricted", Inclusion: InclusionAlways, Priority: 50, Tools: []string{"cursor"}},
- {Name: "universal", Inclusion: InclusionAlways, Priority: 50, Tools: nil},
+ {Name: "restricted", Inclusion: cfgSteering.InclusionAlways, Priority: 50, Tools: []string{"cursor"}},
+ {Name: "universal", Inclusion: cfgSteering.InclusionAlways, Priority: 50, Tools: nil},
}
got := Filter(files, "", nil, "")
diff --git a/internal/steering/format.go b/internal/steering/format.go
index 67f9d6c2c..36cc30a6e 100644
--- a/internal/steering/format.go
+++ b/internal/steering/format.go
@@ -16,6 +16,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/file"
"github.com/ActiveMemory/ctx/internal/config/fs"
cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"github.com/ActiveMemory/ctx/internal/config/token"
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
ctxIo "github.com/ActiveMemory/ctx/internal/io"
@@ -39,18 +40,18 @@ func nativePath(
switch tool {
case cfgHook.ToolCursor:
return filepath.Join(
- projectRoot, dirCursorDot,
- dirRules, name+extMDC,
+ projectRoot, cfgSteering.DirCursorDot,
+ cfgSteering.DirRules, name+cfgSteering.ExtMDC,
)
case cfgHook.ToolCline:
return filepath.Join(
- projectRoot, dirClinerules,
+ projectRoot, cfgSteering.DirClinerules,
name+file.ExtMarkdown,
)
case cfgHook.ToolKiro:
return filepath.Join(
- projectRoot, dirKiroDot,
- dirSteering, name+file.ExtMarkdown,
+ projectRoot, cfgSteering.DirKiroDot,
+ cfgSteering.DirSteering, name+file.ExtMarkdown,
)
default:
return ""
@@ -76,8 +77,8 @@ func validateOutputPath(outPath, projectRoot string) error {
}
// Reject paths that escape the project root.
- escape := parentDir + string(filepath.Separator)
- if strings.HasPrefix(rel, escape) || rel == parentDir {
+ escape := token.ParentDir + string(filepath.Separator)
+ if strings.HasPrefix(rel, escape) || rel == token.ParentDir {
return errSteering.OutputEscapesRoot(outPath, projectRoot)
}
@@ -103,16 +104,16 @@ func formatCursor(sf *SteeringFile) []byte {
fm := cursorFrontmatter{
Description: sf.Description,
Globs: []any{},
- AlwaysApply: sf.Inclusion == InclusionAlways,
+ AlwaysApply: sf.Inclusion == cfgSteering.InclusionAlways,
}
raw, _ := yaml.Marshal(fm)
var buf bytes.Buffer
- buf.WriteString(frontmatterDelimiter)
+ buf.WriteString(token.FrontmatterDelimiter)
buf.WriteByte(token.NewlineLF[0])
buf.Write(raw)
- buf.WriteString(frontmatterDelimiter)
+ buf.WriteString(token.FrontmatterDelimiter)
buf.WriteByte(token.NewlineLF[0])
if sf.Body != "" {
buf.WriteString(sf.Body)
@@ -125,7 +126,7 @@ func formatCline(sf *SteeringFile) []byte {
var buf bytes.Buffer
buf.WriteString(token.HeadingLevelOneStart)
buf.WriteString(sf.Name)
- buf.WriteString(doubleNewline)
+ buf.WriteString(token.DoubleNewline)
if sf.Body != "" {
buf.WriteString(sf.Body)
}
@@ -143,10 +144,10 @@ func formatKiro(sf *SteeringFile) []byte {
raw, _ := yaml.Marshal(fm)
var buf bytes.Buffer
- buf.WriteString(frontmatterDelimiter)
+ buf.WriteString(token.FrontmatterDelimiter)
buf.WriteByte(token.NewlineLF[0])
buf.Write(raw)
- buf.WriteString(frontmatterDelimiter)
+ buf.WriteString(token.FrontmatterDelimiter)
buf.WriteByte(token.NewlineLF[0])
if sf.Body != "" {
buf.WriteString(sf.Body)
@@ -155,16 +156,18 @@ func formatKiro(sf *SteeringFile) []byte {
}
// mapKiroMode maps ctx inclusion modes to Kiro equivalents.
-func mapKiroMode(inc InclusionMode) string {
+func mapKiroMode(
+ inc cfgSteering.InclusionMode,
+) string {
switch inc {
- case InclusionAlways:
- return string(InclusionAlways)
- case InclusionAuto:
- return string(InclusionAuto)
- case InclusionManual:
- return string(InclusionManual)
+ case cfgSteering.InclusionAlways:
+ return string(cfgSteering.InclusionAlways)
+ case cfgSteering.InclusionAuto:
+ return string(cfgSteering.InclusionAuto)
+ case cfgSteering.InclusionManual:
+ return string(cfgSteering.InclusionManual)
default:
- return string(InclusionManual)
+ return string(cfgSteering.InclusionManual)
}
}
diff --git a/internal/steering/frontmatter.go b/internal/steering/frontmatter.go
index 65b823eaf..466a436d0 100644
--- a/internal/steering/frontmatter.go
+++ b/internal/steering/frontmatter.go
@@ -19,17 +19,17 @@ func splitFrontmatter(
data []byte,
) (frontmatter []byte, body string, err error) {
content := string(data)
- content = strings.TrimLeft(content, trimCR)
+ content = strings.TrimLeft(content, token.TrimCR)
- if !strings.HasPrefix(content, frontmatterDelimiter) {
+ if !strings.HasPrefix(content, token.FrontmatterDelimiter) {
return nil, "", errSteering.MissingOpeningDelimiter()
}
// Skip the opening delimiter line.
- rest := content[len(frontmatterDelimiter):]
+ rest := content[len(token.FrontmatterDelimiter):]
rest = strings.TrimPrefix(rest, token.NewlineLF)
- needle := token.NewlineLF + frontmatterDelimiter
+ needle := token.NewlineLF + token.FrontmatterDelimiter
idx := strings.Index(rest, needle)
if idx < 0 {
return nil, "", errSteering.MissingClosingDelimiter()
@@ -38,7 +38,7 @@ func splitFrontmatter(
fm := rest[:idx]
// Skip past the closing delimiter line.
- after := rest[idx+1+len(frontmatterDelimiter):]
+ after := rest[idx+1+len(token.FrontmatterDelimiter):]
// Trim exactly one leading newline from the body if present.
after = strings.TrimPrefix(after, token.NewlineLF)
diff --git a/internal/steering/match.go b/internal/steering/match.go
index bb2edebd6..b1c40d954 100644
--- a/internal/steering/match.go
+++ b/internal/steering/match.go
@@ -9,6 +9,8 @@ package steering
import (
"slices"
"strings"
+
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
)
// matchInclusion checks whether a steering file should be included
@@ -18,14 +20,14 @@ func matchInclusion(
manualNames []string,
) bool {
switch sf.Inclusion {
- case InclusionAlways:
+ case cfgSteering.InclusionAlways:
return true
- case InclusionAuto:
+ case cfgSteering.InclusionAuto:
if sf.Description == "" {
return false
}
return strings.Contains(promptLower, strings.ToLower(sf.Description))
- case InclusionManual:
+ case cfgSteering.InclusionManual:
return slices.Contains(manualNames, sf.Name)
default:
return false
diff --git a/internal/steering/parse.go b/internal/steering/parse.go
index cb3ff8214..81586d2c9 100644
--- a/internal/steering/parse.go
+++ b/internal/steering/parse.go
@@ -11,23 +11,17 @@ import (
"gopkg.in/yaml.v3"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"github.com/ActiveMemory/ctx/internal/config/token"
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
)
-// frontmatterDelimiter is the YAML frontmatter boundary marker.
-const frontmatterDelimiter = "---"
-
// defaultInclusion is the default inclusion mode when omitted.
-const defaultInclusion = InclusionManual
+var defaultInclusion = cfgSteering.InclusionManual
// defaultPriority is the default priority when omitted.
const defaultPriority = 50
-// trimCR is the character set trimmed from the start of
-// raw frontmatter content to normalize line endings.
-const trimCR = "\n\r"
-
// Parse reads a steering file from bytes, extracting YAML frontmatter
// and markdown body. The filePath is stored on the returned SteeringFile
// for error reporting and identification.
@@ -72,10 +66,10 @@ func Print(sf *SteeringFile) []byte {
raw, _ := yaml.Marshal(sf)
- buf.WriteString(frontmatterDelimiter)
+ buf.WriteString(token.FrontmatterDelimiter)
buf.WriteByte(token.NewlineLF[0])
buf.Write(raw)
- buf.WriteString(frontmatterDelimiter)
+ buf.WriteString(token.FrontmatterDelimiter)
buf.WriteByte(token.NewlineLF[0])
if sf.Body != "" {
diff --git a/internal/steering/parse_prop_test.go b/internal/steering/parse_prop_test.go
index 6e1c95795..7dd85827a 100644
--- a/internal/steering/parse_prop_test.go
+++ b/internal/steering/parse_prop_test.go
@@ -7,6 +7,7 @@
package steering
import (
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"math/rand"
"reflect"
"strings"
@@ -19,13 +20,13 @@ import (
type validSteeringFile struct {
Name string
Description string
- Inclusion InclusionMode
+ Inclusion cfgSteering.InclusionMode
Tools []string
Priority int
Body string
}
-var inclusionModes = []InclusionMode{InclusionAlways, InclusionAuto, InclusionManual}
+var inclusionModes = []cfgSteering.InclusionMode{cfgSteering.InclusionAlways, cfgSteering.InclusionAuto, cfgSteering.InclusionManual}
var validTools = []string{"claude", "cursor", "cline", "kiro", "codex"}
diff --git a/internal/steering/parse_test.go b/internal/steering/parse_test.go
index 1ef430a95..7081b1273 100644
--- a/internal/steering/parse_test.go
+++ b/internal/steering/parse_test.go
@@ -7,6 +7,8 @@
package steering
import (
"strings"
"testing"
+
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
)
@@ -35,8 +36,8 @@ Use RESTful conventions.
if sf.Description != "REST API design conventions" {
t.Errorf("Description = %q, want %q", sf.Description, "REST API design conventions")
}
- if sf.Inclusion != InclusionAuto {
- t.Errorf("Inclusion = %q, want %q", sf.Inclusion, InclusionAuto)
+ if sf.Inclusion != cfgSteering.InclusionAuto {
+ t.Errorf("Inclusion = %q, want %q", sf.Inclusion, cfgSteering.InclusionAuto)
}
if len(sf.Tools) != 2 || sf.Tools[0] != "claude" || sf.Tools[1] != "cursor" {
t.Errorf("Tools = %v, want [claude cursor]", sf.Tools)
@@ -63,8 +64,8 @@ Some body content.
t.Fatalf("Parse() error = %v", err)
}
- if sf.Inclusion != InclusionManual {
- t.Errorf("Inclusion = %q, want default %q", sf.Inclusion, InclusionManual)
+ if sf.Inclusion != cfgSteering.InclusionManual {
+ t.Errorf("Inclusion = %q, want default %q", sf.Inclusion, cfgSteering.InclusionManual)
}
if sf.Tools != nil {
t.Errorf("Tools = %v, want nil (all tools)", sf.Tools)
@@ -146,8 +147,8 @@ Always included.
if err != nil {
t.Fatalf("Parse() error = %v", err)
}
- if sf.Inclusion != InclusionAlways {
- t.Errorf("Inclusion = %q, want %q", sf.Inclusion, InclusionAlways)
+ if sf.Inclusion != cfgSteering.InclusionAlways {
+ t.Errorf("Inclusion = %q, want %q", sf.Inclusion, cfgSteering.InclusionAlways)
}
}
@@ -198,7 +199,7 @@ Content here.
func TestPrint_MinimalFile(t *testing.T) {
sf := &SteeringFile{
Name: "minimal",
- Inclusion: InclusionManual,
+ Inclusion: cfgSteering.InclusionManual,
Priority: 50,
Body: "Hello.\n",
}
@@ -220,7 +221,7 @@ func TestPrint_MinimalFile(t *testing.T) {
func TestPrint_NilToolsOmitted(t *testing.T) {
sf := &SteeringFile{
Name: "no-tools",
- Inclusion: InclusionManual,
+ Inclusion: cfgSteering.InclusionManual,
Priority: 50,
}
diff --git a/internal/steering/sync.go b/internal/steering/sync.go
index 840581878..3e6a0e61f 100644
--- a/internal/steering/sync.go
+++ b/internal/steering/sync.go
@@ -14,28 +14,6 @@ import (
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
)
-// Tool-native directory and extension constants.
-const (
- // dirCursorDot is the Cursor configuration directory.
- dirCursorDot = ".cursor"
- // dirRules is the Cursor rules subdirectory.
- dirRules = "rules"
- // extMDC is the Cursor MDC rule file extension.
- extMDC = ".mdc"
- // dirClinerules is the Cline rules directory.
- dirClinerules = ".clinerules"
- // dirKiroDot is the Kiro configuration directory.
- dirKiroDot = ".kiro"
- // dirSteering is the Kiro steering subdirectory.
- dirSteering = "steering"
- // parentDir is the relative parent directory component.
- parentDir = ".."
-)
-
-// doubleNewline is the separator between a heading
-// and body in Cline-formatted steering output.
-const doubleNewline = "\n\n"
-
// syncableTools lists the tool identifiers that support
// native-format sync. Claude and Codex use ctx agent
// directly and do not need synced files.
diff --git a/internal/steering/types.go b/internal/steering/types.go
index 1012ceacd..71e9e3c3a 100644
--- a/internal/steering/types.go
+++ b/internal/steering/types.go
@@ -6,19 +6,10 @@
package steering
-// InclusionMode determines when a steering file is injected into
-// an AI prompt.
-type InclusionMode string
+import (
+ "github.com/ActiveMemory/ctx/internal/assets/tpl"
-const (
- // InclusionAlways includes the steering file in every context packet.
- InclusionAlways InclusionMode = "always"
- // InclusionAuto includes the steering file when the prompt matches
- // the file's description.
- InclusionAuto InclusionMode = "auto"
- // InclusionManual includes the steering file only when explicitly
- // referenced by name.
- InclusionManual InclusionMode = "manual"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
)
// SteeringFile represents a parsed steering file with YAML frontmatter
@@ -33,13 +24,13 @@ const (
// - Body: Markdown content after frontmatter
// - Path: Filesystem path to the steering file
type SteeringFile struct {
- Name string `yaml:"name"`
- Description string `yaml:"description,omitempty"`
- Inclusion InclusionMode `yaml:"inclusion"`
- Tools []string `yaml:"tools,omitempty"`
- Priority int `yaml:"priority"`
- Body string `yaml:"-"`
- Path string `yaml:"-"`
+ Name string `yaml:"name"`
+ Description string `yaml:"description,omitempty"`
+ Inclusion cfgSteering.InclusionMode `yaml:"inclusion"`
+ Tools []string `yaml:"tools,omitempty"`
+ Priority int `yaml:"priority"`
+ Body string `yaml:"-"`
+ Path string `yaml:"-"`
}
// SyncReport summarizes the result of syncing steering files to
@@ -79,30 +70,23 @@ type FoundationFile struct {
// FoundationFiles defines the set of files created by ctx steering init.
var FoundationFiles = []FoundationFile{
{
- Name: "product",
- Description: "Product context, goals, and target users",
- Body: "# Product Context\n\n" +
- "Describe the product, its goals, and target users.\n",
+ Name: tpl.SteeringNameProduct,
+ Description: tpl.SteeringDescProduct,
+ Body: tpl.SteeringBodyProduct,
},
{
- Name: "tech",
- Description: "Technology stack, constraints, and dependencies",
- Body: "# Technology Stack\n\n" +
- "Describe the technology stack, " +
- "constraints, and key dependencies.\n",
+ Name: tpl.SteeringNameTech,
+ Description: tpl.SteeringDescTech,
+ Body: tpl.SteeringBodyTech,
},
{
- Name: "structure",
- Description: "Project structure and directory conventions",
- Body: "# Project Structure\n\n" +
- "Describe the project layout " +
- "and directory conventions.\n",
+ Name: tpl.SteeringNameStructure,
+ Description: tpl.SteeringDescStructure,
+ Body: tpl.SteeringBodyStructure,
},
{
- Name: "workflow",
- Description: "Development workflow and process rules",
- Body: "# Development Workflow\n\n" +
- "Describe the development workflow, " +
- "branching strategy, and process rules.\n",
+ Name: tpl.SteeringNameWorkflow,
+ Description: tpl.SteeringDescWorkflow,
+ Body: tpl.SteeringBodyWorkflow,
},
}
diff --git a/internal/sysinfo/threshold.go b/internal/sysinfo/threshold.go
index 83dbe67c4..8063a5f20 100644
--- a/internal/sysinfo/threshold.go
+++ b/internal/sysinfo/threshold.go
@@ -12,6 +12,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/stats"
+ cfgSysinfo "github.com/ActiveMemory/ctx/internal/config/sysinfo"
)
// Evaluate checks a snapshot against resource thresholds and returns any
@@ -38,11 +39,13 @@ func Evaluate(snap Snapshot) []ResourceAlert {
pct, FormatGiB(snap.Memory.UsedBytes), FormatGiB(snap.Memory.TotalBytes))
if pct >= stats.ThresholdMemoryDangerPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: ResourceMemory, Message: msg,
+ Severity: SeverityDanger, Resource: cfgSysinfo.ResourceMemory, Message: msg,
})
} else if pct >= stats.ThresholdMemoryWarnPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning, Resource: ResourceMemory, Message: msg,
+ Severity: SeverityWarning,
+ Resource: cfgSysinfo.ResourceMemory,
+ Message: msg,
})
}
}
@@ -58,11 +61,11 @@ func Evaluate(snap Snapshot) []ResourceAlert {
)
if pct >= stats.ThresholdSwapDangerPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: ResourceSwap, Message: msg,
+ Severity: SeverityDanger, Resource: cfgSysinfo.ResourceSwap, Message: msg,
})
} else if pct >= stats.ThresholdSwapWarnPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning, Resource: ResourceSwap, Message: msg,
+ Severity: SeverityWarning, Resource: cfgSysinfo.ResourceSwap, Message: msg,
})
}
}
@@ -74,11 +77,11 @@ func Evaluate(snap Snapshot) []ResourceAlert {
pct, FormatGiB(snap.Disk.UsedBytes), FormatGiB(snap.Disk.TotalBytes))
if pct >= stats.ThresholdDiskDangerPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: ResourceDisk, Message: msg,
+ Severity: SeverityDanger, Resource: cfgSysinfo.ResourceDisk, Message: msg,
})
} else if pct >= stats.ThresholdDiskWarnPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning, Resource: ResourceDisk, Message: msg,
+ Severity: SeverityWarning, Resource: cfgSysinfo.ResourceDisk, Message: msg,
})
}
}
@@ -89,11 +92,11 @@ func Evaluate(snap Snapshot) []ResourceAlert {
msg := fmt.Sprintf(desc.Text(text.DescKeyResourcesAlertLoad), ratio)
if ratio >= stats.ThresholdLoadDangerRatio {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: ResourceLoad, Message: msg,
+ Severity: SeverityDanger, Resource: cfgSysinfo.ResourceLoad, Message: msg,
})
} else if ratio >= stats.ThresholdLoadWarnRatio {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning, Resource: ResourceLoad, Message: msg,
+ Severity: SeverityWarning, Resource: cfgSysinfo.ResourceLoad, Message: msg,
})
}
}
diff --git a/internal/sysinfo/types.go b/internal/sysinfo/types.go
index 299251576..0e69afc44 100644
--- a/internal/sysinfo/types.go
+++ b/internal/sysinfo/types.go
@@ -6,6 +6,8 @@
package sysinfo
+import cfgSysinfo "github.com/ActiveMemory/ctx/internal/config/sysinfo"
+
// Severity represents the urgency level of a resource alert.
type Severity int
@@ -18,25 +20,15 @@ const (
SeverityDanger
)
-// Severity label strings for display output.
-const (
- // LabelOK is the severity label for no concern.
- LabelOK = "ok"
- // LabelWarning is the severity label for approaching limits.
- LabelWarning = "warning"
- // LabelDanger is the severity label for critically low resources.
- LabelDanger = "danger"
-)
-
// String returns the lowercase label for the severity level.
func (s Severity) String() string {
switch s {
case SeverityWarning:
- return LabelWarning
+ return cfgSysinfo.LabelWarning
case SeverityDanger:
- return LabelDanger
+ return cfgSysinfo.LabelDanger
default:
- return LabelOK
+ return cfgSysinfo.LabelOK
}
}
@@ -100,18 +92,6 @@ type Snapshot struct {
Load LoadInfo
}
-// Resource name constants for threshold evaluation.
-const (
- // ResourceMemory is the resource name for physical memory.
- ResourceMemory = "memory"
- // ResourceSwap is the resource name for swap space.
- ResourceSwap = "swap"
- // ResourceDisk is the resource name for filesystem usage.
- ResourceDisk = "disk"
- // ResourceLoad is the resource name for system load.
- ResourceLoad = "load"
-)
-
// ResourceAlert describes a single threshold breach.
//
// Fields:
diff --git a/internal/trigger/discover.go b/internal/trigger/discover.go
index f5b238cc2..6fb39545d 100644
--- a/internal/trigger/discover.go
+++ b/internal/trigger/discover.go
@@ -41,7 +41,7 @@ func Discover(hooksDir string) (map[HookType][]HookInfo, error) {
}
for _, ht := range ValidTypes() {
- typeDir := filepath.Join(hooksDir, string(ht))
+ typeDir := filepath.Join(hooksDir, ht)
entries, readErr := os.ReadDir(typeDir)
if readErr != nil {
@@ -114,7 +114,7 @@ func FindByName(hooksDir, name string) (*HookInfo, error) {
}
for _, ht := range ValidTypes() {
- typeDir := filepath.Join(hooksDir, string(ht))
+ typeDir := filepath.Join(hooksDir, ht)
entries, readErr := os.ReadDir(typeDir)
if readErr != nil {
diff --git a/internal/trigger/discover_test.go b/internal/trigger/discover_test.go
index f11de00c6..e9bccdad4 100644
--- a/internal/trigger/discover_test.go
+++ b/internal/trigger/discover_test.go
@@ -11,6 +11,8 @@ import (
"path/filepath"
"runtime"
"testing"
+
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
)
// TestDiscover_ValidExecutableScripts verifies that Discover returns
@@ -36,18 +38,18 @@ func TestDiscover_ValidExecutableScripts(t *testing.T) {
t.Fatalf("unexpected error: %v", err)
}
- if len(result[PreToolUse]) != 1 {
- t.Fatalf("expected 1 pre-tool-use hook, got %d", len(result[PreToolUse]))
+ if len(result[cfgTrigger.PreToolUse]) != 1 {
+ t.Fatalf("expected 1 pre-tool-use hook, got %d", len(result[cfgTrigger.PreToolUse]))
}
- if result[PreToolUse][0].Name != "check" {
- t.Errorf("expected name %q, got %q", "check", result[PreToolUse][0].Name)
+ if result[cfgTrigger.PreToolUse][0].Name != "check" {
+ t.Errorf("expected name %q, got %q", "check", result[cfgTrigger.PreToolUse][0].Name)
}
- if !result[PreToolUse][0].Enabled {
+ if !result[cfgTrigger.PreToolUse][0].Enabled {
t.Error("expected hook to be enabled")
}
- if len(result[SessionStart]) != 1 {
- t.Fatalf("expected 1 session-start hook, got %d", len(result[SessionStart]))
+ if len(result[cfgTrigger.SessionStart]) != 1 {
+ t.Fatalf("expected 1 session-start hook, got %d", len(result[cfgTrigger.SessionStart]))
}
}
@@ -78,7 +80,7 @@ func TestDiscover_SkipsNonExecutable(t *testing.T) {
t.Fatalf("unexpected error: %v", err)
}
- hooks := result[PreToolUse]
+ hooks := result[cfgTrigger.PreToolUse]
if len(hooks) != 1 {
t.Fatalf("expected 1 hook (non-executable skipped by validation), got %d", len(hooks))
}
@@ -124,7 +126,7 @@ func TestDiscover_SkipsSymlinks(t *testing.T) {
t.Fatalf("unexpected error: %v", err)
}
- hooks := result[PostToolUse]
+ hooks := result[cfgTrigger.PostToolUse]
if len(hooks) != 1 {
t.Fatalf("expected 1 hook (symlink skipped), got %d", len(hooks))
}
@@ -170,7 +172,7 @@ func TestDiscover_AlphabeticalOrder(t *testing.T) {
t.Fatalf("unexpected error: %v", err)
}
- hooks := result[FileSave]
+ hooks := result[cfgTrigger.FileSave]
if len(hooks) != 3 {
t.Fatalf("expected 3 hooks, got %d", len(hooks))
}
@@ -209,8 +211,8 @@ func TestFindByName_Found(t *testing.T) {
if info.Name != "notify" {
t.Errorf("expected name %q, got %q", "notify", info.Name)
}
- if info.Type != ContextAdd {
- t.Errorf("expected type %q, got %q", ContextAdd, info.Type)
+ if info.Type != cfgTrigger.ContextAdd {
+ t.Errorf("expected type %q, got %q", cfgTrigger.ContextAdd, info.Type)
}
}
diff --git a/internal/trigger/runner_test.go b/internal/trigger/runner_test.go
index 918058656..869c8e7e8 100644
--- a/internal/trigger/runner_test.go
+++ b/internal/trigger/runner_test.go
@@ -13,6 +13,8 @@ import (
"strings"
"testing"
"time"
+
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
)
// writeHookScript creates an executable shell script in the given hook
@@ -56,7 +58,7 @@ func TestRunAll_CancelPropagation(t *testing.T) {
"#!/bin/sh\necho '{\"cancel\": false, \"context\": \"should not appear\"}'")
input := &HookInput{TriggerType: "pre-tool-use", Tool: "test"}
- agg, err := RunAll(hooksDir, PreToolUse, input, 5*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.PreToolUse, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -88,7 +90,7 @@ func TestRunAll_ContextAggregation(t *testing.T) {
"#!/bin/sh\necho '{\"cancel\": false, \"context\": \"more context\"}'")
input := &HookInput{TriggerType: "session-start", Tool: "test"}
- agg, err := RunAll(hooksDir, SessionStart, input, 5*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.SessionStart, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -121,7 +123,7 @@ func TestRunAll_NonZeroExitCode(t *testing.T) {
"#!/bin/sh\necho '{\"cancel\": false, \"context\": \"survived\"}'")
input := &HookInput{TriggerType: "post-tool-use", Tool: "test"}
- agg, err := RunAll(hooksDir, PostToolUse, input, 5*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.PostToolUse, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -155,7 +157,7 @@ func TestRunAll_InvalidJSONOutput(t *testing.T) {
"#!/bin/sh\necho '{\"cancel\": false, \"context\": \"valid\"}'")
input := &HookInput{TriggerType: "file-save", Tool: "test"}
- agg, err := RunAll(hooksDir, FileSave, input, 5*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.FileSave, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -207,7 +209,7 @@ func TestRunAll_TimeoutEnforcement(t *testing.T) {
input := &HookInput{TriggerType: "context-add", Tool: "test"}
start := time.Now()
- agg, err := RunAll(hooksDir, ContextAdd, input, 1*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.ContextAdd, input, 1*time.Second)
elapsed := time.Since(start)
if err != nil {
t.Fatalf("unexpected error: %v", err)
@@ -255,7 +257,7 @@ func TestRunAll_NoHooks(t *testing.T) {
}
input := &HookInput{TriggerType: "session-end", Tool: "test"}
- agg, err := RunAll(hooksDir, SessionEnd, input, 5*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.SessionEnd, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -276,7 +278,7 @@ func TestRunAll_NoHooks(t *testing.T) {
// Validates: Requirements 7.1
func TestRunAll_MissingHooksDir(t *testing.T) {
input := &HookInput{TriggerType: "pre-tool-use", Tool: "test"}
- agg, err := RunAll(filepath.Join(t.TempDir(), "nonexistent"), PreToolUse, input, 5*time.Second)
+ agg, err := RunAll(filepath.Join(t.TempDir(), "nonexistent"), cfgTrigger.PreToolUse, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -302,7 +304,7 @@ func TestRunAll_EmptyStdout(t *testing.T) {
"#!/bin/sh\n# produces no output")
input := &HookInput{TriggerType: "session-end", Tool: "test"}
- agg, err := RunAll(hooksDir, SessionEnd, input, 5*time.Second)
+ agg, err := RunAll(hooksDir, cfgTrigger.SessionEnd, input, 5*time.Second)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
diff --git a/internal/trigger/security.go b/internal/trigger/security.go
index f84fd4661..fa4bcbae6 100644
--- a/internal/trigger/security.go
+++ b/internal/trigger/security.go
@@ -11,13 +11,10 @@ import (
"path/filepath"
"github.com/ActiveMemory/ctx/internal/config/fs"
+ "github.com/ActiveMemory/ctx/internal/config/token"
errTrigger "github.com/ActiveMemory/ctx/internal/err/trigger"
)
-// parentDir is the relative parent directory component used
-// in boundary checks.
-const parentDir = ".."
-
// ValidatePath checks that a hook script path:
// 1. Is not a symlink
// 2. Resolves within the hooksDir boundary
@@ -60,7 +57,7 @@ func ValidatePath(hooksDir, hookPath string) error {
}
sep := string(filepath.Separator)
- if rel == parentDir || len(rel) >= 3 && rel[:3] == parentDir+sep {
+ if rel == token.ParentDir || len(rel) >= 3 && rel[:3] == token.ParentDir+sep {
return errTrigger.Boundary(hookPath, hooksDir)
}
diff --git a/internal/trigger/types.go b/internal/trigger/types.go
index 0c3391600..8ab4638bc 100644
--- a/internal/trigger/types.go
+++ b/internal/trigger/types.go
@@ -6,36 +6,23 @@
package trigger
-import "github.com/ActiveMemory/ctx/internal/entity"
-
-// HookType is an alias for entity.TriggerType within the trigger package.
-type HookType = entity.TriggerType
-
-// Lifecycle event constants re-exported from entity for convenience.
-const (
- // PreToolUse fires before an AI tool invocation.
- PreToolUse = entity.TriggerPreToolUse
- // PostToolUse fires after an AI tool invocation.
- PostToolUse = entity.TriggerPostToolUse
- // SessionStart fires when an AI session begins.
- SessionStart = entity.TriggerSessionStart
- // SessionEnd fires when an AI session ends.
- SessionEnd = entity.TriggerSessionEnd
- // FileSave fires when a file is saved.
- FileSave = entity.TriggerFileSave
- // ContextAdd fires when context is added.
- ContextAdd = entity.TriggerContextAdd
+import (
+ cfgTrigger "github.com/ActiveMemory/ctx/internal/config/trigger"
+ "github.com/ActiveMemory/ctx/internal/entity"
)
+// HookType is an alias for the trigger event type.
+type HookType = cfgTrigger.TriggerType
+
// ValidTypes returns all valid trigger type strings.
func ValidTypes() []HookType {
return []HookType{
- PreToolUse,
- PostToolUse,
- SessionStart,
- SessionEnd,
- FileSave,
- ContextAdd,
+ cfgTrigger.PreToolUse,
+ cfgTrigger.PostToolUse,
+ cfgTrigger.SessionStart,
+ cfgTrigger.SessionEnd,
+ cfgTrigger.FileSave,
+ cfgTrigger.ContextAdd,
}
}
diff --git a/internal/write/resource/format.go b/internal/write/resource/format.go
index 54eb63781..7e9cba2a5 100644
--- a/internal/write/resource/format.go
+++ b/internal/write/resource/format.go
@@ -13,6 +13,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/stats"
+ cfgSysinfo "github.com/ActiveMemory/ctx/internal/config/sysinfo"
"github.com/ActiveMemory/ctx/internal/sysinfo"
)
@@ -96,11 +97,11 @@ func formatText(
}
gibEntries := []gibEntry{
{snap.Memory.Supported, snap.Memory.UsedBytes, snap.Memory.TotalBytes,
- sysinfo.ResourceMemory, text.DescKeyResourcesLabelMemory},
+ cfgSysinfo.ResourceMemory, text.DescKeyResourcesLabelMemory},
{snap.Memory.Supported, snap.Memory.SwapUsedBytes, snap.Memory.SwapTotalBytes,
- sysinfo.ResourceSwap, text.DescKeyResourcesLabelSwap},
+ cfgSysinfo.ResourceSwap, text.DescKeyResourcesLabelSwap},
{snap.Disk.Supported, snap.Disk.UsedBytes, snap.Disk.TotalBytes,
- sysinfo.ResourceDisk, text.DescKeyResourcesLabelDisk},
+ cfgSysinfo.ResourceDisk, text.DescKeyResourcesLabelDisk},
}
valueFmt := desc.Text(text.DescKeyResourcesValueFormat)
for _, e := range gibEntries {
@@ -123,7 +124,7 @@ func formatText(
values := fmt.Sprintf(desc.Text(text.DescKeyResourcesLoadFormat),
snap.Load.Load1, snap.Load.Load5, snap.Load.Load15,
snap.Load.NumCPU, ratio)
- sev := sysinfo.SeverityFor(alerts, sysinfo.ResourceLoad)
+ sev := sysinfo.SeverityFor(alerts, cfgSysinfo.ResourceLoad)
lines = append(lines, formatLine(
desc.Text(text.DescKeyResourcesLabelLoad), values, statusText(sev)))
}
diff --git a/internal/write/setup/hook.go b/internal/write/setup/hook.go
index c9deac621..e2d5908bb 100644
--- a/internal/write/setup/hook.go
+++ b/internal/write/setup/hook.go
@@ -244,49 +244,15 @@ func Content(cmd *cobra.Command, content string) {
cmd.Print(content)
}
-// Integration instruction lines for ctx setup output.
-const (
- // infoCursorHead is the Cursor section header.
- infoCursorHead = "Cursor integration:"
- // infoCursorRun is the run command hint.
- infoCursorRun = " Run: ctx setup cursor --write"
- // infoCursorMCP is the MCP config path.
- infoCursorMCP = " Creates: .cursor/mcp.json" +
- " (MCP server config)"
- // infoCursorSync is the steering sync path.
- infoCursorSync = " Syncs: .cursor/rules/" +
- " (steering files)"
- // infoKiroHead is the Kiro section header.
- infoKiroHead = "Kiro integration:"
- // infoKiroRun is the run command hint.
- infoKiroRun = " Run: ctx setup kiro --write"
- // infoKiroMCP is the MCP config path.
- infoKiroMCP = " Creates: .kiro/settings/mcp.json" +
- " (MCP server config)"
- // infoKiroSync is the steering sync path.
- infoKiroSync = " Syncs: .kiro/steering/" +
- " (steering files)"
- // infoClineHead is the Cline section header.
- infoClineHead = "Cline integration:"
- // infoClineRun is the run command hint.
- infoClineRun = " Run: ctx setup cline --write"
- // infoClineMCP is the MCP config path.
- infoClineMCP = " Creates: .vscode/mcp.json" +
- " (MCP server config)"
- // infoClineSync is the steering sync path.
- infoClineSync = " Syncs: .clinerules/" +
- " (steering files)"
-)
-
// InfoCursorIntegration prints Cursor integration instructions.
//
// Parameters:
// - cmd: Cobra command for output
func InfoCursorIntegration(cmd *cobra.Command) {
- cmd.Println(infoCursorHead)
- cmd.Println(infoCursorRun)
- cmd.Println(infoCursorMCP)
- cmd.Println(infoCursorSync)
+ cmd.Println(desc.Text(text.DescKeyWriteSetupCursorHead))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupCursorRun))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupCursorMCP))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupCursorSync))
}
// InfoKiroIntegration prints Kiro integration instructions.
@@ -294,10 +260,10 @@ func InfoCursorIntegration(cmd *cobra.Command) {
// Parameters:
// - cmd: Cobra command for output
func InfoKiroIntegration(cmd *cobra.Command) {
- cmd.Println(infoKiroHead)
- cmd.Println(infoKiroRun)
- cmd.Println(infoKiroMCP)
- cmd.Println(infoKiroSync)
+ cmd.Println(desc.Text(text.DescKeyWriteSetupKiroHead))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupKiroRun))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupKiroMCP))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupKiroSync))
}
// InfoClineIntegration prints Cline integration instructions.
@@ -305,10 +271,10 @@ func InfoKiroIntegration(cmd *cobra.Command) {
// Parameters:
// - cmd: Cobra command for output
func InfoClineIntegration(cmd *cobra.Command) {
- cmd.Println(infoClineHead)
- cmd.Println(infoClineRun)
- cmd.Println(infoClineMCP)
- cmd.Println(infoClineSync)
+ cmd.Println(desc.Text(text.DescKeyWriteSetupClineHead))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupClineRun))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupClineMCP))
+ cmd.Println(desc.Text(text.DescKeyWriteSetupClineSync))
}
// DeployComplete prints the completion message for a tool setup.
@@ -370,16 +336,12 @@ func DeploySteeringSkipped(cmd *cobra.Command, name string) {
name))
}
-// msgNoSteeringToSync is the message when no steering files
-// are available for sync.
-const msgNoSteeringToSync = " No steering files to sync" +
- " (run ctx steering init first)"
-
// DeployNoSteering prints that no steering files are
// available to sync.
//
// Parameters:
// - cmd: Cobra command for output
func DeployNoSteering(cmd *cobra.Command) {
- cmd.Println(msgNoSteeringToSync)
+ cmd.Println(desc.Text(
+ text.DescKeyWriteSetupNoSteeringToSync))
}
diff --git a/internal/write/skill/skill.go b/internal/write/skill/skill.go
index db2d110da..69065b1a8 100644
--- a/internal/write/skill/skill.go
+++ b/internal/write/skill/skill.go
@@ -27,15 +27,12 @@ func Installed(cmd *cobra.Command, name, dir string) {
name, dir))
}
-// msgNoSkills is shown when no skills are installed.
-const msgNoSkills = "No skills installed."
-
// NoSkillsFound prints a message indicating no skills are installed.
//
// Parameters:
// - cmd: Cobra command for output
func NoSkillsFound(cmd *cobra.Command) {
- cmd.Println(msgNoSkills)
+ cmd.Println(desc.Text(text.DescKeyWriteSkillNoSkills))
}
// EntryWithDesc prints a skill entry with name and description.
diff --git a/internal/write/steering/steering.go b/internal/write/steering/steering.go
index 03da6e019..d61c3b1d4 100644
--- a/internal/write/steering/steering.go
+++ b/internal/write/steering/steering.go
@@ -15,14 +15,6 @@ import (
"github.com/ActiveMemory/ctx/internal/config/embed/text"
)
-// User-facing messages for steering commands.
-const (
- // msgNoFiles is shown when no steering files exist.
- msgNoFiles = "No steering files found."
- // msgNoMatch is shown when no files match the prompt.
- msgNoMatch = "No steering files match the given prompt."
-)
-
// Created prints confirmation that a steering file was created.
//
// Parameters:
@@ -60,7 +52,7 @@ func InitSummary(cmd *cobra.Command, created, skipped int) {
// Parameters:
// - cmd: Cobra command for output
func NoFilesFound(cmd *cobra.Command) {
- cmd.Println(msgNoFiles)
+ cmd.Println(desc.Text(text.DescKeyWriteSteeringNoFiles))
}
// FileEntry prints a single steering file entry with metadata.
@@ -95,7 +87,7 @@ func FileCount(cmd *cobra.Command, count int) {
// Parameters:
// - cmd: Cobra command for output
func NoFilesMatch(cmd *cobra.Command) {
- cmd.Println(msgNoMatch)
+ cmd.Println(desc.Text(text.DescKeyWriteSteeringNoMatch))
}
// PreviewHeader prints the header for steering preview output.
diff --git a/internal/write/trigger/trigger.go b/internal/write/trigger/trigger.go
index 688b310c5..36fe81d82 100644
--- a/internal/write/trigger/trigger.go
+++ b/internal/write/trigger/trigger.go
@@ -15,16 +15,6 @@ import (
"github.com/ActiveMemory/ctx/internal/config/embed/text"
)
-// User-facing messages for hook list and test output.
-const (
- // msgNoHooksFound is shown when no hooks are discovered.
- msgNoHooksFound = "No hooks found."
- // msgErrors is the section header for hook errors.
- msgErrors = "Errors:"
- // msgNoOutput is shown when hooks produce no output.
- msgNoOutput = "No output from hooks."
-)
-
// Created prints confirmation that a hook script was created.
//
// Parameters:
@@ -103,7 +93,7 @@ func BlankLine(cmd *cobra.Command) {
// Parameters:
// - cmd: Cobra command for output
func NoHooksFound(cmd *cobra.Command) {
- cmd.Println(msgNoHooksFound)
+ cmd.Println(desc.Text(text.DescKeyWriteTriggerNoHooks))
}
// Count prints the total hook count.
@@ -169,7 +159,7 @@ func ContextOutput(cmd *cobra.Command, context string) {
// Parameters:
// - cmd: Cobra command for output
func ErrorsHeader(cmd *cobra.Command) {
- cmd.Println(msgErrors)
+ cmd.Println(desc.Text(text.DescKeyWriteTriggerErrorsHdr))
}
// ErrorLine prints a single error line.
@@ -188,5 +178,5 @@ func ErrorLine(cmd *cobra.Command, errMsg string) {
// Parameters:
// - cmd: Cobra command for output
func NoOutput(cmd *cobra.Command) {
- cmd.Println(msgNoOutput)
+ cmd.Println(desc.Text(text.DescKeyWriteTriggerNoOutput))
}
From a8695861313bfcd7592c51b4933141fa1aaf0140 Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 02:09:42 -0700
Subject: [PATCH 06/13] refactor: unexport 6 dead exports, dedup setup deploy,
fix mixed visibility
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Unexport 5 test-only symbols (RegistryError, CreateVSCodeArtifacts,
LockedFrontmatterLine, RegisteredTools, MatchFull) and delete 1
(ErrCodeInvalidReq). Remove 6 entries from testOnlyExports (11
remain, all cross-package — tracked for future sessions).
Extract a shared MCP deploy helper (cli/setup/core/mcp/),
eliminating the 3-way duplication across the cursor/cline/kiro
deploy.go files. Both ensureMCPConfig and syncSteering now
delegate to the shared package.
Fix mixed visibility: move registryError() and registeredTools()
to separate files (hooks.go, tools.go). Replace LockedFrontmatterLine
test usage with source constant session.FrontmatterLockedLine.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/assets/hooks/messages/hooks.go | 10 ++
internal/assets/hooks/messages/registry.go | 10 --
.../assets/hooks/messages/registry_test.go | 4 +-
internal/audit/dead_exports_test.go | 28 ++---
internal/cli/initialize/core/vscode/vscode.go | 4 +-
.../cli/initialize/core/vscode/vscode_test.go | 10 +-
internal/cli/journal/core/lock/lock.go | 4 +-
internal/cli/journal/core/lock/lock_test.go | 6 +-
internal/cli/journal/core/lock/unlock_test.go | 10 +-
internal/cli/setup/core/cline/deploy.go | 70 ++---------
internal/cli/setup/core/cursor/deploy.go | 68 ++---------
internal/cli/setup/core/kiro/deploy.go | 70 ++---------
internal/cli/setup/core/mcp/deploy.go | 112 ++++++++++++++++++
internal/cli/setup/core/mcp/doc.go | 19 +++
internal/journal/parser/markdown_test.go | 2 +-
internal/journal/parser/parser.go | 12 --
internal/journal/parser/parser_test.go | 2 +-
internal/journal/parser/tools.go | 20 ++++
internal/mcp/proto/schema.go | 2 -
internal/task/task.go | 4 +-
internal/task/task_test.go | 4 +-
21 files changed, 218 insertions(+), 253 deletions(-)
create mode 100644 internal/cli/setup/core/mcp/deploy.go
create mode 100644 internal/cli/setup/core/mcp/doc.go
create mode 100644 internal/journal/parser/tools.go
diff --git a/internal/assets/hooks/messages/hooks.go b/internal/assets/hooks/messages/hooks.go
index d3b6a8e34..26a857b3e 100644
--- a/internal/assets/hooks/messages/hooks.go
+++ b/internal/assets/hooks/messages/hooks.go
@@ -6,6 +6,16 @@
package messages
+// registryError returns any error encountered while
+// parsing the embedded registry.yaml. Nil on success.
+//
+// Returns:
+// - error: Parse error, or nil on success
+func registryError() error {
+ Registry() // ensure sync.Once has run
+ return registryErr
+}
+
// hooks returns a deduplicated list of hook names in
// the registry.
//
diff --git a/internal/assets/hooks/messages/registry.go b/internal/assets/hooks/messages/registry.go
index 78cca260b..aacef54b5 100644
--- a/internal/assets/hooks/messages/registry.go
+++ b/internal/assets/hooks/messages/registry.go
@@ -53,16 +53,6 @@ func Registry() []HookMessageInfo {
return registryData
}
-// RegistryError returns any error encountered while parsing the
-// embedded registry.yaml. Nil on success.
-//
-// Returns:
-// - error: Parse error from registry.yaml, or nil on success
-func RegistryError() error {
- Registry() // ensure sync.Once has run
- return registryErr
-}
-
// Lookup returns the HookMessageInfo for the given hook and variant,
// or nil if not found.
//
diff --git a/internal/assets/hooks/messages/registry_test.go b/internal/assets/hooks/messages/registry_test.go
index 95d5fc250..5c387f7b4 100644
--- a/internal/assets/hooks/messages/registry_test.go
+++ b/internal/assets/hooks/messages/registry_test.go
@@ -23,8 +23,8 @@ func TestRegistryCount(t *testing.T) {
}
func TestRegistryYAMLParses(t *testing.T) {
- if parseErr := RegistryError(); parseErr != nil {
- t.Fatalf("RegistryError() = %v, want nil", parseErr)
+ if parseErr := registryError(); parseErr != nil {
+ t.Fatalf("registryError() = %v, want nil", parseErr)
}
for i, entry := range Registry() {
diff --git a/internal/audit/dead_exports_test.go b/internal/audit/dead_exports_test.go
index 723681781..ce967b13e 100644
--- a/internal/audit/dead_exports_test.go
+++ b/internal/audit/dead_exports_test.go
@@ -36,23 +36,17 @@ import (
// positives. Keep this list small: prefer eliminating
// the export over adding it here.
var testOnlyExports = map[string]bool{
- "github.com/ActiveMemory/ctx/internal/config/hook.CategoryCustomizable": true,
- "github.com/ActiveMemory/ctx/internal/assets/hooks/messages.Hooks": true,
- "github.com/ActiveMemory/ctx/internal/assets/hooks/messages.RegistryError": true,
- "github.com/ActiveMemory/ctx/internal/cli/initialize/core/vscode.CreateVSCodeArtifacts": true,
- "github.com/ActiveMemory/ctx/internal/cli/journal/core/lock.LockedFrontmatterLine": true,
- "github.com/ActiveMemory/ctx/internal/cli/pad/core/store.EnsureGitignore": true,
- "github.com/ActiveMemory/ctx/internal/cli/system/core/state.SetDirForTest": true,
- "github.com/ActiveMemory/ctx/internal/config/asset.DirReferences": true,
- "github.com/ActiveMemory/ctx/internal/config/regex.Phase": true,
- "github.com/ActiveMemory/ctx/internal/inspect.StartsWithCtxMarker": true,
- "github.com/ActiveMemory/ctx/internal/journal/parser.Parser": true,
- "github.com/ActiveMemory/ctx/internal/journal/parser.RegisteredTools": true,
- "github.com/ActiveMemory/ctx/internal/mcp/proto.ErrCodeInvalidReq": true,
- "github.com/ActiveMemory/ctx/internal/mcp/proto.InitializeParams": true,
- "github.com/ActiveMemory/ctx/internal/mcp/proto.UnsubscribeParams": true,
- "github.com/ActiveMemory/ctx/internal/rc.Reset": true,
- "github.com/ActiveMemory/ctx/internal/task.MatchFull": true,
+ "github.com/ActiveMemory/ctx/internal/config/hook.CategoryCustomizable": true,
+ "github.com/ActiveMemory/ctx/internal/assets/hooks/messages.Hooks": true,
+ "github.com/ActiveMemory/ctx/internal/cli/pad/core/store.EnsureGitignore": true,
+ "github.com/ActiveMemory/ctx/internal/cli/system/core/state.SetDirForTest": true,
+ "github.com/ActiveMemory/ctx/internal/config/asset.DirReferences": true,
+ "github.com/ActiveMemory/ctx/internal/config/regex.Phase": true,
+ "github.com/ActiveMemory/ctx/internal/inspect.StartsWithCtxMarker": true,
+ "github.com/ActiveMemory/ctx/internal/journal/parser.Parser": true,
+ "github.com/ActiveMemory/ctx/internal/mcp/proto.InitializeParams": true,
+ "github.com/ActiveMemory/ctx/internal/mcp/proto.UnsubscribeParams": true,
+ "github.com/ActiveMemory/ctx/internal/rc.Reset": true,
}
// DO NOT add entries here to make tests pass. New code must
diff --git a/internal/cli/initialize/core/vscode/vscode.go b/internal/cli/initialize/core/vscode/vscode.go
index f85c9d228..44b3b1bce 100644
--- a/internal/cli/initialize/core/vscode/vscode.go
+++ b/internal/cli/initialize/core/vscode/vscode.go
@@ -15,7 +15,7 @@ import (
writeVscode "github.com/ActiveMemory/ctx/internal/write/vscode"
)
-// CreateVSCodeArtifacts generates VS Code workspace configuration files
+// createVSCodeArtifacts generates VS Code workspace configuration files
// (.vscode/) during ctx init.
//
// Creates extensions.json, tasks.json, and mcp.json as the
@@ -27,7 +27,7 @@ import (
//
// Returns:
// - error: Non-nil if directory creation fails
-func CreateVSCodeArtifacts(cmd *cobra.Command) error {
+func createVSCodeArtifacts(cmd *cobra.Command) error {
mkdirErr := ctxIo.SafeMkdirAll(
cfgVscode.Dir, fs.PermExec,
)
diff --git a/internal/cli/initialize/core/vscode/vscode_test.go b/internal/cli/initialize/core/vscode/vscode_test.go
index f8739708a..9cc8de571 100644
--- a/internal/cli/initialize/core/vscode/vscode_test.go
+++ b/internal/cli/initialize/core/vscode/vscode_test.go
@@ -125,25 +125,25 @@ func TestCreateVSCodeArtifacts_CreatesMCPJSON(t *testing.T) {
var buf bytes.Buffer
cmd := testCmd(&buf)
- if err := CreateVSCodeArtifacts(cmd); err != nil {
- t.Fatalf("CreateVSCodeArtifacts() error = %v", err)
+ if err := createVSCodeArtifacts(cmd); err != nil {
+ t.Fatalf("createVSCodeArtifacts() error = %v", err)
}
// Verify mcp.json was created as part of the artifacts
target := filepath.Join(cfgVscode.Dir, "mcp.json")
if _, err := os.Stat(target); os.IsNotExist(err) {
- t.Error("CreateVSCodeArtifacts did not create mcp.json")
+ t.Error("createVSCodeArtifacts did not create mcp.json")
}
// Verify extensions.json was also created
extTarget := filepath.Join(cfgVscode.Dir, "extensions.json")
if _, err := os.Stat(extTarget); os.IsNotExist(err) {
- t.Error("CreateVSCodeArtifacts did not create extensions.json")
+ t.Error("createVSCodeArtifacts did not create extensions.json")
}
// Verify tasks.json was also created
taskTarget := filepath.Join(cfgVscode.Dir, "tasks.json")
if _, err := os.Stat(taskTarget); os.IsNotExist(err) {
- t.Error("CreateVSCodeArtifacts did not create tasks.json")
+ t.Error("createVSCodeArtifacts did not create tasks.json")
}
}
diff --git a/internal/cli/journal/core/lock/lock.go b/internal/cli/journal/core/lock/lock.go
index 42f899408..157ac69ea 100644
--- a/internal/cli/journal/core/lock/lock.go
+++ b/internal/cli/journal/core/lock/lock.go
@@ -30,9 +30,9 @@ import (
writeRecall "github.com/ActiveMemory/ctx/internal/write/journal"
)
-// LockedFrontmatterLine is the YAML line inserted into frontmatter when
+// lockedFrontmatterLine is the YAML line inserted into frontmatter when
// a journal entry is locked.
-const LockedFrontmatterLine = session.FrontmatterLockedLine
+const lockedFrontmatterLine = session.FrontmatterLockedLine
// MatchJournalFiles returns journal .md filenames matching the given
// patterns. If all is true, returns every .md file in the directory.
diff --git a/internal/cli/journal/core/lock/lock_test.go b/internal/cli/journal/core/lock/lock_test.go
index 87dad4fa0..c8a1d41cc 100644
--- a/internal/cli/journal/core/lock/lock_test.go
+++ b/internal/cli/journal/core/lock/lock_test.go
@@ -152,7 +152,7 @@ func TestUpdateLockFrontmatter_Lock(t *testing.T) {
if readErr != nil {
t.Fatal(readErr)
}
- if !strings.Contains(string(data), LockedFrontmatterLine) {
+ if !strings.Contains(string(data), lockedFrontmatterLine) {
t.Error("lock should insert locked line into frontmatter")
}
if !strings.Contains(string(data), "# Body") {
@@ -164,7 +164,7 @@ func TestUpdateLockFrontmatter_Unlock(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "test.md")
content := "---\ndate: \"2026-01-21\"\n" +
- LockedFrontmatterLine +
+ lockedFrontmatterLine +
"\ntitle: \"Test\"\n---\n\n# Body\n"
writeErr := os.WriteFile(
path, []byte(content), fs.PermFile,
@@ -213,7 +213,7 @@ func TestUpdateLockFrontmatter_IdempotentLock(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "test.md")
content := "---\ndate: \"2026-01-21\"\n" +
- LockedFrontmatterLine + "\n---\n\n# Body\n"
+ lockedFrontmatterLine + "\n---\n\n# Body\n"
writeErr := os.WriteFile(
path, []byte(content), fs.PermFile,
)
diff --git a/internal/cli/journal/core/lock/unlock_test.go b/internal/cli/journal/core/lock/unlock_test.go
index d5e267b27..e5e3bdbf0 100644
--- a/internal/cli/journal/core/lock/unlock_test.go
+++ b/internal/cli/journal/core/lock/unlock_test.go
@@ -13,8 +13,8 @@ import (
"testing"
"github.com/ActiveMemory/ctx/internal/cli/journal"
- "github.com/ActiveMemory/ctx/internal/cli/journal/core/lock"
"github.com/ActiveMemory/ctx/internal/config/fs"
+ "github.com/ActiveMemory/ctx/internal/config/session"
"github.com/ActiveMemory/ctx/internal/journal/state"
)
@@ -65,7 +65,7 @@ func TestRunLockUnlock_LockSingle(t *testing.T) {
if err != nil {
t.Fatal(err)
}
- if !strings.Contains(string(data), lock.LockedFrontmatterLine) {
+ if !strings.Contains(string(data), session.FrontmatterLockedLine) {
t.Error("frontmatter should contain locked line")
}
}
@@ -79,7 +79,7 @@ func TestRunLockUnlock_UnlockSingle(t *testing.T) {
filename := "2026-01-21-test-abc12345.md"
content := "---\ndate: \"2026-01-21\"\n" +
- lock.LockedFrontmatterLine + "\n---\n\n# Test\n"
+ session.FrontmatterLockedLine + "\n---\n\n# Test\n"
if err := os.WriteFile(
filepath.Join(journalDir, filename), []byte(content), fs.PermFile,
); err != nil {
@@ -183,7 +183,7 @@ func TestRunLockUnlock_AlreadyLocked(t *testing.T) {
filename := "2026-01-21-test-abc12345.md"
content := "---\ndate: \"2026-01-21\"\n" +
- lock.LockedFrontmatterLine + "\n---\n\n# Test\n"
+ session.FrontmatterLockedLine + "\n---\n\n# Test\n"
if err := os.WriteFile(
filepath.Join(journalDir, filename), []byte(content), fs.PermFile,
); err != nil {
@@ -302,7 +302,7 @@ func TestRunLockUnlock_LockMultipart(t *testing.T) {
if readErr != nil {
t.Fatal(readErr)
}
- if !strings.Contains(string(data), lock.LockedFrontmatterLine) {
+ if !strings.Contains(string(data), session.FrontmatterLockedLine) {
t.Errorf("%s frontmatter should contain locked line", f)
}
}
diff --git a/internal/cli/setup/core/cline/deploy.go b/internal/cli/setup/core/cline/deploy.go
index 931172e69..fcec83080 100644
--- a/internal/cli/setup/core/cline/deploy.go
+++ b/internal/cli/setup/core/cline/deploy.go
@@ -7,42 +7,18 @@
package cline
import (
- "encoding/json"
- "os"
- "path/filepath"
-
"github.com/spf13/cobra"
- "github.com/ActiveMemory/ctx/internal/config/fs"
cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
mcpServer "github.com/ActiveMemory/ctx/internal/config/mcp/server"
- "github.com/ActiveMemory/ctx/internal/config/token"
cfgVscode "github.com/ActiveMemory/ctx/internal/config/vscode"
- errSetup "github.com/ActiveMemory/ctx/internal/err/setup"
- ctxIo "github.com/ActiveMemory/ctx/internal/io"
- "github.com/ActiveMemory/ctx/internal/rc"
- "github.com/ActiveMemory/ctx/internal/steering"
- writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
+
+ mcpDeploy "github.com/ActiveMemory/ctx/internal/cli/setup/core/mcp"
)
// ensureMCPConfig creates .vscode/mcp.json with the ctx
// MCP server configuration. Skips if the file exists.
func ensureMCPConfig(cmd *cobra.Command) error {
- target := filepath.Join(
- cfgVscode.Dir, cfgVscode.FileMCPJSON,
- )
-
- if _, statErr := ctxIo.SafeStat(target); statErr == nil {
- writeSetup.DeployFileExists(cmd, target)
- return nil
- }
-
- if mkdirErr := ctxIo.SafeMkdirAll(
- cfgVscode.Dir, fs.PermExec,
- ); mkdirErr != nil {
- return errSetup.CreateDir(cfgVscode.Dir, mkdirErr)
- }
-
cfg := vscodeMCPConfig{
Servers: map[string]vscodeMCPServer{
mcpServer.Name: {
@@ -51,48 +27,16 @@ func ensureMCPConfig(cmd *cobra.Command) error {
},
},
}
-
- data, marshalErr := json.MarshalIndent(
- cfg, "", " ",
+ return mcpDeploy.Deploy(
+ cmd, cfgVscode.Dir,
+ cfgVscode.FileMCPJSON, cfg,
)
- if marshalErr != nil {
- return errSetup.MarshalConfig(marshalErr)
- }
- data = append(data, token.NewlineLF[0])
-
- if writeErr := ctxIo.SafeWriteFile(
- target, data, fs.PermFile,
- ); writeErr != nil {
- return errSetup.WriteFile(target, writeErr)
- }
-
- writeSetup.DeployFileCreated(cmd, target)
- return nil
}
// syncSteering syncs steering files to Cline format
// if a steering directory exists.
func syncSteering(cmd *cobra.Command) error {
- steeringDir := rc.SteeringDir()
- if _, statErr := ctxIo.SafeStat(
- steeringDir,
- ); os.IsNotExist(statErr) {
- writeSetup.DeployNoSteering(cmd)
- return nil
- }
-
- report, syncErr := steering.SyncTool(
- steeringDir, token.Dot, cfgHook.ToolCline,
+ return mcpDeploy.SyncSteering(
+ cmd, cfgHook.ToolCline,
)
- if syncErr != nil {
- return errSetup.SyncSteering(syncErr)
- }
-
- for _, name := range report.Written {
- writeSetup.DeploySteeringSynced(cmd, name)
- }
- for _, name := range report.Skipped {
- writeSetup.DeploySteeringSkipped(cmd, name)
- }
- return nil
}
diff --git a/internal/cli/setup/core/cursor/deploy.go b/internal/cli/setup/core/cursor/deploy.go
index 1444c0118..16374b108 100644
--- a/internal/cli/setup/core/cursor/deploy.go
+++ b/internal/cli/setup/core/cursor/deploy.go
@@ -7,40 +7,18 @@
package cursor
import (
- "encoding/json"
- "os"
- "path/filepath"
-
"github.com/spf13/cobra"
- "github.com/ActiveMemory/ctx/internal/config/fs"
cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
mcpServer "github.com/ActiveMemory/ctx/internal/config/mcp/server"
cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
- "github.com/ActiveMemory/ctx/internal/config/token"
- errSetup "github.com/ActiveMemory/ctx/internal/err/setup"
- ctxIo "github.com/ActiveMemory/ctx/internal/io"
- "github.com/ActiveMemory/ctx/internal/rc"
- "github.com/ActiveMemory/ctx/internal/steering"
- writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
+
+ mcpDeploy "github.com/ActiveMemory/ctx/internal/cli/setup/core/mcp"
)
// ensureMCPConfig creates .cursor/mcp.json with the ctx
// MCP server configuration. Skips if the file exists.
func ensureMCPConfig(cmd *cobra.Command) error {
- target := filepath.Join(cfgSetup.DirCursor, cfgSetup.FileMCPJSONCursor)
-
- if _, statErr := ctxIo.SafeStat(target); statErr == nil {
- writeSetup.DeployFileExists(cmd, target)
- return nil
- }
-
- if mkdirErr := ctxIo.SafeMkdirAll(
- cfgSetup.DirCursor, fs.PermExec,
- ); mkdirErr != nil {
- return errSetup.CreateDir(cfgSetup.DirCursor, mkdirErr)
- }
-
cfg := mcpConfig{
MCPServers: map[string]serverEntry{
mcpServer.Name: {
@@ -49,48 +27,16 @@ func ensureMCPConfig(cmd *cobra.Command) error {
},
},
}
-
- data, marshalErr := json.MarshalIndent(
- cfg, "", " ",
+ return mcpDeploy.Deploy(
+ cmd, cfgSetup.DirCursor,
+ cfgSetup.FileMCPJSONCursor, cfg,
)
- if marshalErr != nil {
- return errSetup.MarshalConfig(marshalErr)
- }
- data = append(data, token.NewlineLF[0])
-
- if writeErr := ctxIo.SafeWriteFile(
- target, data, fs.PermFile,
- ); writeErr != nil {
- return errSetup.WriteFile(target, writeErr)
- }
-
- writeSetup.DeployFileCreated(cmd, target)
- return nil
}
// syncSteering syncs steering files to Cursor format
// if a steering directory exists.
func syncSteering(cmd *cobra.Command) error {
- steeringDir := rc.SteeringDir()
- if _, statErr := ctxIo.SafeStat(
- steeringDir,
- ); os.IsNotExist(statErr) {
- writeSetup.DeployNoSteering(cmd)
- return nil
- }
-
- report, syncErr := steering.SyncTool(
- steeringDir, token.Dot, cfgHook.ToolCursor,
+ return mcpDeploy.SyncSteering(
+ cmd, cfgHook.ToolCursor,
)
- if syncErr != nil {
- return errSetup.SyncSteering(syncErr)
- }
-
- for _, name := range report.Written {
- writeSetup.DeploySteeringSynced(cmd, name)
- }
- for _, name := range report.Skipped {
- writeSetup.DeploySteeringSkipped(cmd, name)
- }
- return nil
}
diff --git a/internal/cli/setup/core/kiro/deploy.go b/internal/cli/setup/core/kiro/deploy.go
index eabc68db2..b75142119 100644
--- a/internal/cli/setup/core/kiro/deploy.go
+++ b/internal/cli/setup/core/kiro/deploy.go
@@ -7,23 +7,16 @@
package kiro
import (
- "encoding/json"
- "os"
"path/filepath"
"github.com/spf13/cobra"
- "github.com/ActiveMemory/ctx/internal/config/fs"
cfgHook "github.com/ActiveMemory/ctx/internal/config/hook"
mcpServer "github.com/ActiveMemory/ctx/internal/config/mcp/server"
cfgMcpTool "github.com/ActiveMemory/ctx/internal/config/mcp/tool"
cfgSetup "github.com/ActiveMemory/ctx/internal/config/setup"
- "github.com/ActiveMemory/ctx/internal/config/token"
- errSetup "github.com/ActiveMemory/ctx/internal/err/setup"
- ctxIo "github.com/ActiveMemory/ctx/internal/io"
- "github.com/ActiveMemory/ctx/internal/rc"
- "github.com/ActiveMemory/ctx/internal/steering"
- writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
+
+ mcpDeploy "github.com/ActiveMemory/ctx/internal/cli/setup/core/mcp"
)
// ensureMCPConfig creates .kiro/settings/mcp.json with
@@ -32,21 +25,6 @@ func ensureMCPConfig(cmd *cobra.Command) error {
settingsDir := filepath.Join(
cfgSetup.DirKiro, cfgSetup.DirSettings,
)
- target := filepath.Join(settingsDir, cfgSetup.FileMCPJSON)
-
- if _, statErr := ctxIo.SafeStat(
- target,
- ); statErr == nil {
- writeSetup.DeployFileExists(cmd, target)
- return nil
- }
-
- if mkdirErr := ctxIo.SafeMkdirAll(
- settingsDir, fs.PermExec,
- ); mkdirErr != nil {
- return errSetup.CreateDir(settingsDir, mkdirErr)
- }
-
cfg := mcpConfig{
MCPServers: map[string]serverEntry{
mcpServer.Name: {
@@ -65,50 +43,16 @@ func ensureMCPConfig(cmd *cobra.Command) error {
},
},
}
-
- data, marshalErr := json.MarshalIndent(
- cfg, "", " ",
+ return mcpDeploy.Deploy(
+ cmd, settingsDir,
+ cfgSetup.FileMCPJSON, cfg,
)
- if marshalErr != nil {
- return errSetup.MarshalConfig(marshalErr)
- }
- data = append(data, token.NewlineLF[0])
-
- if writeErr := ctxIo.SafeWriteFile(
- target, data, fs.PermFile,
- ); writeErr != nil {
- return errSetup.WriteFile(target, writeErr)
- }
-
- writeSetup.DeployFileCreated(cmd, target)
- return nil
}
// syncSteering syncs steering files to Kiro format
// if a steering directory exists.
func syncSteering(cmd *cobra.Command) error {
- steeringDir := rc.SteeringDir()
-
- if _, statErr := ctxIo.SafeStat(
- steeringDir,
- ); os.IsNotExist(statErr) {
- writeSetup.DeployNoSteering(cmd)
- return nil
- }
-
- report, syncErr := steering.SyncTool(
- steeringDir, token.Dot, cfgHook.ToolKiro,
+ return mcpDeploy.SyncSteering(
+ cmd, cfgHook.ToolKiro,
)
- if syncErr != nil {
- return errSetup.SyncSteering(syncErr)
- }
-
- for _, name := range report.Written {
- writeSetup.DeploySteeringSynced(cmd, name)
- }
- for _, name := range report.Skipped {
- writeSetup.DeploySteeringSkipped(cmd, name)
- }
-
- return nil
}
diff --git a/internal/cli/setup/core/mcp/deploy.go b/internal/cli/setup/core/mcp/deploy.go
new file mode 100644
index 000000000..f700cc043
--- /dev/null
+++ b/internal/cli/setup/core/mcp/deploy.go
@@ -0,0 +1,112 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package mcp
+
+import (
+ "encoding/json"
+ "os"
+ "path/filepath"
+
+ "github.com/spf13/cobra"
+
+ "github.com/ActiveMemory/ctx/internal/config/fs"
+ "github.com/ActiveMemory/ctx/internal/config/token"
+ errSetup "github.com/ActiveMemory/ctx/internal/err/setup"
+ ctxIo "github.com/ActiveMemory/ctx/internal/io"
+ "github.com/ActiveMemory/ctx/internal/rc"
+ "github.com/ActiveMemory/ctx/internal/steering"
+ writeSetup "github.com/ActiveMemory/ctx/internal/write/setup"
+)
+
+// Deploy writes an MCP config file if it does not already
+// exist. It creates the parent directory, marshals the
+// config as indented JSON, and prints a confirmation
+// message.
+//
+// Parameters:
+// - cmd: Cobra command for output
+// - dir: Parent directory for the config file
+// - filename: Config file name (e.g. "mcp.json")
+// - cfg: Config struct to marshal as JSON
+//
+// Returns:
+// - error: Non-nil on directory creation, marshal,
+// or write failure
+func Deploy(
+ cmd *cobra.Command,
+ dir, filename string,
+ cfg any,
+) error {
+ target := filepath.Join(dir, filename)
+
+ if _, statErr := ctxIo.SafeStat(
+ target,
+ ); statErr == nil {
+ writeSetup.DeployFileExists(cmd, target)
+ return nil
+ }
+
+ if mkdirErr := ctxIo.SafeMkdirAll(
+ dir, fs.PermExec,
+ ); mkdirErr != nil {
+ return errSetup.CreateDir(dir, mkdirErr)
+ }
+
+ data, marshalErr := json.MarshalIndent(
+ cfg, "", " ",
+ )
+ if marshalErr != nil {
+ return errSetup.MarshalConfig(marshalErr)
+ }
+ data = append(data, token.NewlineLF[0])
+
+ if writeErr := ctxIo.SafeWriteFile(
+ target, data, fs.PermFile,
+ ); writeErr != nil {
+ return errSetup.WriteFile(target, writeErr)
+ }
+
+ writeSetup.DeployFileCreated(cmd, target)
+ return nil
+}
+
+// SyncSteering syncs steering files to a tool-native
+// format. If no steering directory exists, prints a
+// message and returns nil.
+//
+// Parameters:
+// - cmd: Cobra command for output
+// - tool: Tool identifier for sync format selection
+//
+// Returns:
+// - error: Non-nil on sync failure
+func SyncSteering(
+ cmd *cobra.Command, tool string,
+) error {
+ steeringDir := rc.SteeringDir()
+ if _, statErr := ctxIo.SafeStat(
+ steeringDir,
+ ); os.IsNotExist(statErr) {
+ writeSetup.DeployNoSteering(cmd)
+ return nil
+ }
+
+ report, syncErr := steering.SyncTool(
+ steeringDir, token.Dot, tool,
+ )
+ if syncErr != nil {
+ return errSetup.SyncSteering(syncErr)
+ }
+
+ for _, name := range report.Written {
+ writeSetup.DeploySteeringSynced(cmd, name)
+ }
+ for _, name := range report.Skipped {
+ writeSetup.DeploySteeringSkipped(cmd, name)
+ }
+ return nil
+}
diff --git a/internal/cli/setup/core/mcp/doc.go b/internal/cli/setup/core/mcp/doc.go
new file mode 100644
index 000000000..9b119984e
--- /dev/null
+++ b/internal/cli/setup/core/mcp/doc.go
@@ -0,0 +1,19 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+// Package mcp provides shared helpers for deploying MCP
+// server configuration files across AI tool integrations.
+//
+// Each tool (Cursor, Kiro, Cline) has a unique JSON
+// structure for its mcp.json file, but the deployment
+// workflow is identical: check if file exists, create
+// directory, marshal config, write file, print
+// confirmation.
+//
+// The [Deploy] function encapsulates this shared workflow.
+// Tool-specific packages build their config struct and
+// pass it here for writing.
+package mcp
diff --git a/internal/journal/parser/markdown_test.go b/internal/journal/parser/markdown_test.go
index 10e1cfef5..7d0f48acc 100644
--- a/internal/journal/parser/markdown_test.go
+++ b/internal/journal/parser/markdown_test.go
@@ -408,7 +408,7 @@ func TestScanDirectory_WithMarkdown(t *testing.T) {
}
func TestRegisteredTools_IncludesMarkdown(t *testing.T) {
- tools := RegisteredTools()
+ tools := registeredTools()
found := false
for _, tool := range tools {
if tool == session.ToolMarkdown {
diff --git a/internal/journal/parser/parser.go b/internal/journal/parser/parser.go
index 6d5f81625..f3a67e329 100644
--- a/internal/journal/parser/parser.go
+++ b/internal/journal/parser/parser.go
@@ -210,15 +210,3 @@ func Parser(tool string) Session {
}
return nil
}
-
-// RegisteredTools returns the list of supported tools.
-//
-// Returns:
-// - []string: Tool identifiers for all registered parsers
-func RegisteredTools() []string {
- tools := make([]string, len(registeredParsers))
- for i, p := range registeredParsers {
- tools[i] = p.Tool()
- }
- return tools
-}
diff --git a/internal/journal/parser/parser_test.go b/internal/journal/parser/parser_test.go
index 0de15d030..d74b344fe 100644
--- a/internal/journal/parser/parser_test.go
+++ b/internal/journal/parser/parser_test.go
@@ -322,7 +322,7 @@ func TestParseFile_AutoDetect(t *testing.T) {
}
func TestRegisteredTools(t *testing.T) {
- tools := RegisteredTools()
+ tools := registeredTools()
if len(tools) == 0 {
t.Error("expected at least one registered tool")
}
diff --git a/internal/journal/parser/tools.go b/internal/journal/parser/tools.go
new file mode 100644
index 000000000..41a4db536
--- /dev/null
+++ b/internal/journal/parser/tools.go
@@ -0,0 +1,20 @@
+// / ctx: https://ctx.ist
+// ,'`./ do you remember?
+// `.,'\
+// \ Copyright 2026-present Context contributors.
+// SPDX-License-Identifier: Apache-2.0
+
+package parser
+
+// registeredTools returns the list of supported tools.
+//
+// Returns:
+// - []string: Tool identifiers for all registered
+// parsers
+func registeredTools() []string {
+ tools := make([]string, len(registeredParsers))
+ for i, p := range registeredParsers {
+ tools[i] = p.Tool()
+ }
+ return tools
+}
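The `registeredTools` helper above follows a simple registry pattern: each parser reports a tool identifier, and a package-level slice enumerates the registered parsers. A self-contained sketch of that pattern (the parser types and tool names here are illustrative stand-ins, not the project's real parsers):

```go
package main

import "fmt"

// parser mirrors the shape implied by registeredTools: each
// registered parser reports the tool it handles.
type parser interface{ Tool() string }

type markdownParser struct{}

func (markdownParser) Tool() string { return "markdown" }

type jsonlParser struct{}

func (jsonlParser) Tool() string { return "jsonl" }

// registeredParsers is the package-level registry the helper
// iterates over.
var registeredParsers = []parser{markdownParser{}, jsonlParser{}}

func registeredTools() []string {
	tools := make([]string, len(registeredParsers))
	for i, p := range registeredParsers {
		tools[i] = p.Tool()
	}
	return tools
}

func main() {
	fmt.Println(registeredTools())
}
```

Because the helper's only callers are tests in the same package, unexporting it (as this patch does) shrinks the public API without losing coverage.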
diff --git a/internal/mcp/proto/schema.go b/internal/mcp/proto/schema.go
index f98c949a3..f5225fcd5 100644
--- a/internal/mcp/proto/schema.go
+++ b/internal/mcp/proto/schema.go
@@ -68,8 +68,6 @@ type RPCError struct {
const (
// ErrCodeParse indicates malformed JSON was received.
ErrCodeParse = -32700
- // ErrCodeInvalidReq indicates the request is not valid.
- ErrCodeInvalidReq = -32600
// ErrCodeNotFound indicates the method was not found.
ErrCodeNotFound = -32601
// ErrCodeInvalidArg indicates invalid method parameters.
diff --git a/internal/task/task.go b/internal/task/task.go
index 695a4a363..3e542e419 100644
--- a/internal/task/task.go
+++ b/internal/task/task.go
@@ -26,8 +26,8 @@ import (
// content := match[task.MatchContent]
// }
const (
- // MatchFull is the index of the full regex match.
- MatchFull = iota
+ // matchFull is the index of the full regex match.
+ matchFull = iota
// MatchIndent is the index of leading whitespace.
MatchIndent
// MatchState is the index of the checkbox state.
diff --git a/internal/task/task_test.go b/internal/task/task_test.go
index f1db2257d..a243f044c 100644
--- a/internal/task/task_test.go
+++ b/internal/task/task_test.go
@@ -280,8 +280,8 @@ func TestMatchConstants(t *testing.T) {
t.Fatal("line did not match task pattern")
}
- if match[MatchFull] != line {
- t.Errorf("MatchFull = %q, want %q", match[MatchFull], line)
+ if match[matchFull] != line {
+ t.Errorf("matchFull = %q, want %q", match[matchFull], line)
}
if match[MatchIndent] != " " {
t.Errorf("MatchIndent = %q, want %q", match[MatchIndent], " ")
From 839bcbc00fde77378fc156db603846617ab4e24a Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 02:19:09 -0700
Subject: [PATCH 07/13] fix: handle fmt.Fprintf errors in
mcp/handler/steering.go
Replace 2 unhandled fmt.Fprintf calls with ctxIo.SafeFprintf
which logs write errors via warn.Warn per project convention.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/mcp/handler/steering.go | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/internal/mcp/handler/steering.go b/internal/mcp/handler/steering.go
index 2eb29053b..c1d4e9a74 100644
--- a/internal/mcp/handler/steering.go
+++ b/internal/mcp/handler/steering.go
@@ -54,7 +54,7 @@ func (h *Handler) SteeringGet(prompt string) (string, error) {
var sb strings.Builder
for _, sf := range filtered {
- fmt.Fprintf(&sb,
+ ctxIo.SafeFprintf(&sb,
desc.Text(text.DescKeyMCPSteeringSection),
sf.Name, sf.Body)
}
@@ -101,7 +101,7 @@ func (h *Handler) Search(query string) (string, error) {
lineNum++
line := scanner.Text()
if strings.Contains(strings.ToLower(line), queryLower) {
- fmt.Fprintf(&sb,
+ ctxIo.SafeFprintf(&sb,
desc.Text(text.DescKeyMCPSearchHitLine),
e.Name(), lineNum, line)
matches++
From 0819067c2100d4a49cad84c338ee4116126c56ee Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 02:36:26 -0700
Subject: [PATCH 08/13] fix: tighten TestNoMagicValues, migrate 7 numeric
constants to config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Remove blanket isConstDef/isVarDef exemption from TestNoMagicValues
— numeric const definitions outside config/ are now caught. All 7
violations resolved:
- JSON-RPC error codes (ErrCodeParse, ErrCodeNotFound,
ErrCodeInvalidArg, ErrCodeInternal) moved from mcp/proto/schema.go
to config/mcp/schema/. Updated 14 consumer sites + 1 test file.
- DefaultPriority (50) moved to config/steering/. Updated
steering/frontmatter.go and cli/steering/cmd/add/cmd.go.
- PollIntervalSec (5) moved to config/mcp/server/.
- Removed dead isVarDef helper from magic_values_test.go.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/audit/magic_values_test.go | 36 +++++--------------
internal/cli/steering/cmd/add/cmd.go | 5 +--
internal/config/README.md | 18 +++++-----
internal/config/mcp/schema/schema.go | 12 +++++++
internal/config/mcp/server/server.go | 4 +++
internal/config/steering/steering.go | 4 +++
internal/mcp/proto/schema.go | 12 -------
internal/mcp/server/parse/parse.go | 3 +-
internal/mcp/server/poll/poll.go | 8 +++--
internal/mcp/server/resource/dispatch.go | 7 ++--
internal/mcp/server/resource/resource.go | 3 +-
internal/mcp/server/resource/subscription.go | 5 +--
.../mcp/server/route/fallback/dispatch.go | 3 +-
internal/mcp/server/route/prompt/dispatch.go | 5 +--
internal/mcp/server/route/prompt/prompt.go | 3 +-
internal/mcp/server/route/tool/dispatch.go | 5 +--
internal/mcp/server/server.go | 3 +-
internal/mcp/server/server_test.go | 6 ++--
internal/steering/frontmatter.go | 3 +-
internal/steering/parse.go | 3 --
20 files changed, 71 insertions(+), 77 deletions(-)
diff --git a/internal/audit/magic_values_test.go b/internal/audit/magic_values_test.go
index 790f651f8..ee173949e 100644
--- a/internal/audit/magic_values_test.go
+++ b/internal/audit/magic_values_test.go
@@ -99,12 +99,14 @@ func TestNoMagicValues(t *testing.T) {
return true
}
- // Skip file-level const/var definition sites only.
- // Local consts inside function bodies are NOT exempt —
- // they are just renamed magic numbers.
- if isConstDef(file, lit) || isVarDef(file, lit) {
- return true
- }
+ // Const/var definitions in exempt packages
+ // are already skipped (line 86). Outside
+ // those packages, numeric constants are
+ // magic values that belong in config/.
+ //
+ // DO NOT re-add a blanket isConstDef
+ // exemption. It masks constants defined
+ // in the wrong package.
if exemptIntLiterals[lit.Value] {
return true
@@ -157,28 +159,6 @@ func isExemptPackage(pkgPath string) bool {
return false
}
-// isVarDef reports whether lit appears inside a var declaration.
-func isVarDef(file *ast.File, lit *ast.BasicLit) bool {
- for _, decl := range file.Decls {
- gd, ok := decl.(*ast.GenDecl)
- if !ok || gd.Tok != token.VAR {
- continue
- }
- for _, spec := range gd.Specs {
- vs, ok := spec.(*ast.ValueSpec)
- if !ok {
- continue
- }
- for _, val := range vs.Values {
- if containsNode(val, lit) {
- return true
- }
- }
- }
- }
- return false
-}
-
// isStrconvArg reports whether lit is an argument to a strconv
// function (radix or bitsize parameter).
func isStrconvArg(file *ast.File, lit *ast.BasicLit) bool {
diff --git a/internal/cli/steering/cmd/add/cmd.go b/internal/cli/steering/cmd/add/cmd.go
index d71f0d0cc..9fca3ee7a 100644
--- a/internal/cli/steering/cmd/add/cmd.go
+++ b/internal/cli/steering/cmd/add/cmd.go
@@ -24,9 +24,6 @@ import (
writeSteering "github.com/ActiveMemory/ctx/internal/write/steering"
)
-// defaultPriority is the default priority for new steering files.
-const defaultPriority = 50
-
// Cmd returns the "ctx steering add" subcommand.
//
// Returns:
@@ -79,7 +76,7 @@ func Run(c *cobra.Command, name string) error {
sf := &steering.SteeringFile{
Name: name,
Inclusion: cfgSteering.InclusionManual,
- Priority: defaultPriority,
+ Priority: cfgSteering.DefaultPriority,
}
data := steering.Print(sf)
diff --git a/internal/config/README.md b/internal/config/README.md
index 3f7db97c3..fbfd84bc3 100644
--- a/internal/config/README.md
+++ b/internal/config/README.md
@@ -4,14 +4,14 @@
This directory contains ~60 sub-packages, each holding constants,
compiled regexes, type definitions, or text keys for a single
-domain. This looks unusual. It's intentional.
+domain. This looks unusual. **It's intentional**.
### The problem it solves
A monolithic `config` package creates a false dependency: importing
`config` to use `config.TokenBudget` also imports every regex
pattern, every MCP constant, every entry type, and every CLI flag
-name. In Go, the package is the dependency unit — importing one
+name. In Go, the package is the dependency unit: importing one
symbol imports the whole package. A change to any constant in the
package marks every consumer as stale for recompilation and makes
the blast radius of any change the entire codebase.
@@ -44,7 +44,7 @@ import "github.com/ActiveMemory/ctx/internal/config" // everything
- Surgical dependency tracking (change `config/mcp/tool` and only
MCP packages recompile)
-- Zero import cycles (all sub-packages are leaves — zero internal
+- Zero import cycles (all sub-packages are leaves: zero internal
dependencies)
- Clear ownership (each file belongs to one domain)
- Safe to modify (changing a constant in `config/agent` cannot
@@ -138,12 +138,12 @@ method receivers, interface participation, or business logic. A
type with `func (t IssueType) Severity() int` has outgrown
`config/` and belongs in `entity/`.
-| Stage | Home | Example |
-|-------|------|---------|
-| Pure value enum | `config//` | `type IssueType string` with const values |
-| Cross-package value enum | `config//` | Same — `config/` is already importable everywhere |
-| Type with methods | `entity/` | `func (t IssueType) Severity() int` |
-| Type implementing interfaces | `entity/` | `var _ fmt.Stringer = IssueType("")` |
+| Stage | Home | Example |
+|------------------------------|--------------------|---------------------------------------------------|
+| Pure value enum | `config//` | `type IssueType string` with const values |
+| Cross-package value enum | `config//` | Same — `config/` is already importable everywhere |
+| Type with methods | `entity/` | `func (t IssueType) Severity() int` |
+| Type implementing interfaces | `entity/` | `var _ fmt.Stringer = IssueType("")` |
The migration path is natural: start in `config/`, promote to
`entity/` when behavior appears. `TestCrossPackageTypes` catches
diff --git a/internal/config/mcp/schema/schema.go b/internal/config/mcp/schema/schema.go
index 0a4b39d22..90980ecde 100644
--- a/internal/config/mcp/schema/schema.go
+++ b/internal/config/mcp/schema/schema.go
@@ -9,6 +9,18 @@ package schema
// ProtocolVersion is the MCP protocol version string.
const ProtocolVersion = "2024-11-05"
+// Standard JSON-RPC error codes.
+const (
+ // ErrCodeParse indicates malformed JSON.
+ ErrCodeParse = -32700
+ // ErrCodeNotFound indicates method not found.
+ ErrCodeNotFound = -32601
+ // ErrCodeInvalidArg indicates invalid parameters.
+ ErrCodeInvalidArg = -32602
+ // ErrCodeInternal indicates an internal error.
+ ErrCodeInternal = -32603
+)
+
// JSON Schema type constants.
const (
// Object is the JSON Schema type for objects.
diff --git a/internal/config/mcp/server/server.go b/internal/config/mcp/server/server.go
index 556cc57b8..512df1b1b 100644
--- a/internal/config/mcp/server/server.go
+++ b/internal/config/mcp/server/server.go
@@ -20,6 +20,10 @@ const (
SubcommandServe = "serve"
)
+// PollIntervalSec is the default interval in seconds for
+// resource change polling.
+const PollIntervalSec = 5
+
// Args returns the CLI arguments to launch the ctx MCP server.
func Args() []string {
return []string{"mcp", SubcommandServe}
diff --git a/internal/config/steering/steering.go b/internal/config/steering/steering.go
index 21d596e5b..a17aa95b8 100644
--- a/internal/config/steering/steering.go
+++ b/internal/config/steering/steering.go
@@ -26,3 +26,7 @@ const (
// LabelAllTools is the display label when a steering
// or trigger item applies to all tools.
const LabelAllTools = "all"
+
+// DefaultPriority is the default injection priority for
+// steering files when omitted from frontmatter.
+const DefaultPriority = 50
diff --git a/internal/mcp/proto/schema.go b/internal/mcp/proto/schema.go
index f5225fcd5..9fc5f876d 100644
--- a/internal/mcp/proto/schema.go
+++ b/internal/mcp/proto/schema.go
@@ -64,18 +64,6 @@ type RPCError struct {
Data interface{} `json:"data,omitempty"`
}
-// Standard JSON-RPC error codes.
-const (
- // ErrCodeParse indicates malformed JSON was received.
- ErrCodeParse = -32700
- // ErrCodeNotFound indicates the method was not found.
- ErrCodeNotFound = -32601
- // ErrCodeInvalidArg indicates invalid method parameters.
- ErrCodeInvalidArg = -32602
- // ErrCodeInternal indicates an internal JSON-RPC error.
- ErrCodeInternal = -32603
-)
-
// --- Initialization types ---
// InitializeParams contains client information sent during initialization.
diff --git a/internal/mcp/server/parse/parse.go b/internal/mcp/server/parse/parse.go
index d13a5e2fc..1ddd96a01 100644
--- a/internal/mcp/server/parse/parse.go
+++ b/internal/mcp/server/parse/parse.go
@@ -11,6 +11,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/config/mcp/server"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
)
@@ -32,7 +33,7 @@ func Request(data []byte) (*proto.Request, *proto.Response) {
return nil, &proto.Response{
JSONRPC: server.JSONRPCVersion,
Error: &proto.RPCError{
- Code: proto.ErrCodeParse,
+ Code: cfgSchema.ErrCodeParse,
Message: desc.Text(text.DescKeyMCPErrParse),
},
}
diff --git a/internal/mcp/server/poll/poll.go b/internal/mcp/server/poll/poll.go
index de454ac5a..ce5ab0df6 100644
--- a/internal/mcp/server/poll/poll.go
+++ b/internal/mcp/server/poll/poll.go
@@ -17,9 +17,11 @@ import (
"github.com/ActiveMemory/ctx/internal/mcp/server/catalog"
)
-// defaultPollInterval is the default interval for resource change
-// polling.
-const defaultPollInterval = 5 * time.Second
+// defaultPollInterval is the default interval for
+// resource change polling.
+var defaultPollInterval = time.Duration(
+ server.PollIntervalSec,
+) * time.Second
// NewPoller creates a poller for the given context directory.
//
diff --git a/internal/mcp/server/resource/dispatch.go b/internal/mcp/server/resource/dispatch.go
index 469e64ba3..7221876e7 100644
--- a/internal/mcp/server/resource/dispatch.go
+++ b/internal/mcp/server/resource/dispatch.go
@@ -12,6 +12,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/context/load"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
"github.com/ActiveMemory/ctx/internal/mcp/server/catalog"
@@ -50,14 +51,14 @@ func DispatchRead(
req.Params, ¶ms,
); unmarshalErr != nil {
return out.ErrResponse(
- req.ID, proto.ErrCodeInvalidArg,
+ req.ID, cfgSchema.ErrCodeInvalidArg,
desc.Text(text.DescKeyMCPErrInvalidParams),
)
}
ctx, loadErr := load.Do(contextDir)
if loadErr != nil {
- return out.ErrResponse(req.ID, proto.ErrCodeInternal,
+ return out.ErrResponse(req.ID, cfgSchema.ErrCodeInternal,
fmt.Sprintf(
desc.Text(text.DescKeyMCPLoadContext),
loadErr,
@@ -74,7 +75,7 @@ func DispatchRead(
return readAgentPacket(req.ID, ctx, tokenBudget)
}
- return out.ErrResponse(req.ID, proto.ErrCodeInvalidArg,
+ return out.ErrResponse(req.ID, cfgSchema.ErrCodeInvalidArg,
fmt.Sprintf(
desc.Text(text.DescKeyMCPErrUnknownResource),
params.URI,
diff --git a/internal/mcp/server/resource/resource.go b/internal/mcp/server/resource/resource.go
index 5c1f8140b..d2922826b 100644
--- a/internal/mcp/server/resource/resource.go
+++ b/internal/mcp/server/resource/resource.go
@@ -15,6 +15,7 @@ import (
cfgCtx "github.com/ActiveMemory/ctx/internal/config/ctx"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/mcp/mime"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/config/token"
ctxToken "github.com/ActiveMemory/ctx/internal/context/token"
"github.com/ActiveMemory/ctx/internal/entity"
@@ -38,7 +39,7 @@ func readContextFile(
) *proto.Response {
f := ctx.File(fileName)
if f == nil {
- return out.ErrResponse(id, proto.ErrCodeInvalidArg,
+ return out.ErrResponse(id, cfgSchema.ErrCodeInvalidArg,
fmt.Sprintf(
desc.Text(text.DescKeyMCPErrFileNotFound),
fileName,
diff --git a/internal/mcp/server/resource/subscription.go b/internal/mcp/server/resource/subscription.go
index 8c413f5e5..e1e96685e 100644
--- a/internal/mcp/server/resource/subscription.go
+++ b/internal/mcp/server/resource/subscription.go
@@ -11,6 +11,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
"github.com/ActiveMemory/ctx/internal/mcp/server/out"
)
@@ -25,13 +26,13 @@ func applySubscription(
req.Params, ¶ms,
); unmarshalErr != nil {
return out.ErrResponse(
- req.ID, proto.ErrCodeInvalidArg,
+ req.ID, cfgSchema.ErrCodeInvalidArg,
desc.Text(text.DescKeyMCPErrInvalidParams),
)
}
if params.URI == "" {
return out.ErrResponse(
- req.ID, proto.ErrCodeInvalidArg,
+ req.ID, cfgSchema.ErrCodeInvalidArg,
desc.Text(text.DescKeyMCPErrURIRequired),
)
}
diff --git a/internal/mcp/server/route/fallback/dispatch.go b/internal/mcp/server/route/fallback/dispatch.go
index a0cf1cb57..f647837b1 100644
--- a/internal/mcp/server/route/fallback/dispatch.go
+++ b/internal/mcp/server/route/fallback/dispatch.go
@@ -11,6 +11,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
"github.com/ActiveMemory/ctx/internal/mcp/server/out"
)
@@ -24,7 +25,7 @@ import (
// Returns:
// - *proto.Response: method-not-found error response
func DispatchErr(req proto.Request) *proto.Response {
- return out.ErrResponse(req.ID, proto.ErrCodeNotFound,
+ return out.ErrResponse(req.ID, cfgSchema.ErrCodeNotFound,
fmt.Sprintf(
desc.Text(text.DescKeyMCPErrMethodNotFound),
req.Method,
diff --git a/internal/mcp/server/route/prompt/dispatch.go b/internal/mcp/server/route/prompt/dispatch.go
index dcce69264..ef65a5967 100644
--- a/internal/mcp/server/route/prompt/dispatch.go
+++ b/internal/mcp/server/route/prompt/dispatch.go
@@ -13,6 +13,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/mcp/prompt"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/mcp/handler"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
defPrompt "github.com/ActiveMemory/ctx/internal/mcp/server/def/prompt"
@@ -46,7 +47,7 @@ func DispatchGet(
) *proto.Response {
var params proto.GetPromptParams
if err := json.Unmarshal(req.Params, ¶ms); err != nil {
- return out.ErrResponse(req.ID, proto.ErrCodeInvalidArg,
+ return out.ErrResponse(req.ID, cfgSchema.ErrCodeInvalidArg,
desc.Text(text.DescKeyMCPErrInvalidParams))
}
@@ -68,7 +69,7 @@ func DispatchGet(
)
default:
return out.ErrResponse(
- req.ID, proto.ErrCodeNotFound,
+ req.ID, cfgSchema.ErrCodeNotFound,
fmt.Sprintf(
desc.Text(text.DescKeyMCPErrUnknownPrompt),
params.Name,
diff --git a/internal/mcp/server/route/prompt/prompt.go b/internal/mcp/server/route/prompt/prompt.go
index 91b789409..87ca47064 100644
--- a/internal/mcp/server/route/prompt/prompt.go
+++ b/internal/mcp/server/route/prompt/prompt.go
@@ -18,6 +18,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/mcp/field"
"github.com/ActiveMemory/ctx/internal/config/mcp/mime"
"github.com/ActiveMemory/ctx/internal/config/mcp/prompt"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/config/token"
"github.com/ActiveMemory/ctx/internal/context/load"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
@@ -39,7 +40,7 @@ func sessionStart(
) *proto.Response {
ctx, loadErr := load.Do(contextDir)
if loadErr != nil {
- return out.ErrResponse(id, proto.ErrCodeInternal,
+ return out.ErrResponse(id, cfgSchema.ErrCodeInternal,
fmt.Sprintf(
desc.Text(text.DescKeyMCPLoadContext), loadErr))
}
diff --git a/internal/mcp/server/route/tool/dispatch.go b/internal/mcp/server/route/tool/dispatch.go
index 397715cfa..ee16baa25 100644
--- a/internal/mcp/server/route/tool/dispatch.go
+++ b/internal/mcp/server/route/tool/dispatch.go
@@ -12,6 +12,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/config/mcp/tool"
"github.com/ActiveMemory/ctx/internal/mcp/handler"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
@@ -47,7 +48,7 @@ func DispatchCall(
var params proto.CallToolParams
if err := json.Unmarshal(req.Params, ¶ms); err != nil {
return out.ErrResponse(
- req.ID, proto.ErrCodeInvalidArg,
+ req.ID, cfgSchema.ErrCodeInvalidArg,
desc.Text(text.DescKeyMCPErrInvalidParams),
)
}
@@ -98,7 +99,7 @@ func DispatchCall(
resp = sessionEnd(h, req.ID, params.Arguments)
default:
return out.ErrResponse(
- req.ID, proto.ErrCodeNotFound,
+ req.ID, cfgSchema.ErrCodeNotFound,
fmt.Sprintf(
desc.Text(text.DescKeyMCPErrUnknownTool),
params.Name,
diff --git a/internal/mcp/server/server.go b/internal/mcp/server/server.go
index 46e38eecf..6e6fcfb59 100644
--- a/internal/mcp/server/server.go
+++ b/internal/mcp/server/server.go
@@ -13,6 +13,7 @@ import (
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
"github.com/ActiveMemory/ctx/internal/config/mcp/cfg"
+ cfgSchema "github.com/ActiveMemory/ctx/internal/config/mcp/schema"
"github.com/ActiveMemory/ctx/internal/mcp/handler"
"github.com/ActiveMemory/ctx/internal/mcp/proto"
"github.com/ActiveMemory/ctx/internal/mcp/server/catalog"
@@ -85,7 +86,7 @@ func (s *Server) Serve() error {
if writeErr := s.out.WriteJSON(resp); writeErr != nil {
// Marshal failure: try to report it as an error response.
fallback := out.ErrResponse(
- nil, proto.ErrCodeInternal,
+ nil, cfgSchema.ErrCodeInternal,
desc.Text(text.DescKeyMCPErrFailedMarshal),
)
if fbErr := s.out.WriteJSON(fallback); fbErr != nil {
diff --git a/internal/mcp/server/server_test.go b/internal/mcp/server/server_test.go
index e2d283cda..0c0ee3bc4 100644
--- a/internal/mcp/server/server_test.go
+++ b/internal/mcp/server/server_test.go
@@ -149,8 +149,8 @@ func TestMethodNotFound(t *testing.T) {
if resp.Error == nil {
t.Fatal("expected error for unknown method")
}
- if resp.Error.Code != proto.ErrCodeNotFound {
- t.Errorf("error code = %d, want %d", resp.Error.Code, proto.ErrCodeNotFound)
+ if resp.Error.Code != cfgSchema.ErrCodeNotFound {
+ t.Errorf("error code = %d, want %d", resp.Error.Code, cfgSchema.ErrCodeNotFound)
}
}
@@ -483,7 +483,7 @@ func TestParseError(t *testing.T) {
if err := json.Unmarshal(out.Bytes(), &resp); err != nil {
t.Fatalf("unmarshal: %v", err)
}
- if resp.Error == nil || resp.Error.Code != proto.ErrCodeParse {
+ if resp.Error == nil || resp.Error.Code != cfgSchema.ErrCodeParse {
t.Errorf("expected parse error, got: %+v", resp.Error)
}
}
diff --git a/internal/steering/frontmatter.go b/internal/steering/frontmatter.go
index 466a436d0..948246d98 100644
--- a/internal/steering/frontmatter.go
+++ b/internal/steering/frontmatter.go
@@ -9,6 +9,7 @@ package steering
import (
"strings"
+ cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"github.com/ActiveMemory/ctx/internal/config/token"
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
)
@@ -52,7 +53,7 @@ func applyDefaults(sf *SteeringFile) {
sf.Inclusion = defaultInclusion
}
if sf.Priority == 0 {
- sf.Priority = defaultPriority
+ sf.Priority = cfgSteering.DefaultPriority
}
// Tools: nil means all tools — no default needed.
}
diff --git a/internal/steering/parse.go b/internal/steering/parse.go
index 81586d2c9..ea4a82dbf 100644
--- a/internal/steering/parse.go
+++ b/internal/steering/parse.go
@@ -19,9 +19,6 @@ import (
// defaultInclusion is the default inclusion mode when omitted.
var defaultInclusion = cfgSteering.InclusionManual
-// defaultPriority is the default priority when omitted.
-const defaultPriority = 50
-
// Parse reads a steering file from bytes, extracting YAML frontmatter
// and markdown body. The filePath is stored on the returned SteeringFile
// for error reporting and identification.
From 5d44ffc5465e37b18da4762eccb070b726198bec Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 02:48:25 -0700
Subject: [PATCH 09/13] refactor: extract shared SplitFrontmatter, delete 4
dead delimiter errors
Extract duplicated frontmatter splitting logic from steering/ and
skill/ into parse.SplitFrontmatter. Both callers now delegate to
the shared function and wrap errors with domain-specific context.
Delete 4 dead error constructors (MissingOpeningDelimiter,
MissingClosingDelimiter in err/steering and err/skill) and their
DescKey constants + YAML entries. Add new err/parser delimiter
constructors used by the shared function.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/assets/commands/text/errors.yaml | 12 ++---
internal/config/embed/text/err_parse.go | 6 +++
internal/config/embed/text/err_skill.go | 6 ---
internal/config/embed/text/err_steering.go | 6 ---
internal/err/parser/parser.go | 27 +++++++++++
internal/err/skill/skill.go | 23 ---------
internal/err/steering/steering.go | 22 ---------
internal/parse/markdown.go | 46 ++++++++++++++++++
internal/skill/manifest.go | 55 ++++++----------------
internal/steering/frontmatter.go | 36 --------------
internal/steering/parse.go | 3 +-
11 files changed, 100 insertions(+), 142 deletions(-)
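The frontmatter-splitting logic this patch deduplicates can be sketched as follows. This is a hypothetical stand-in for `parse.SplitFrontmatter` (the real signature and error construction may differ; the error strings match the YAML entries added below):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// splitFrontmatter separates a document into its YAML
// frontmatter (between two --- delimiter lines) and the
// markdown body that follows. Hypothetical sketch of the
// shared helper both steering/ and skill/ delegate to.
func splitFrontmatter(src string) (frontmatter, body string, err error) {
	const delim = "---"
	lines := strings.SplitAfter(src, "\n")
	if len(lines) == 0 || strings.TrimSpace(lines[0]) != delim {
		return "", "", errors.New("missing opening frontmatter delimiter (---)")
	}
	for i := 1; i < len(lines); i++ {
		if strings.TrimSpace(lines[i]) == delim {
			return strings.Join(lines[1:i], ""), strings.Join(lines[i+1:], ""), nil
		}
	}
	return "", "", errors.New("missing closing frontmatter delimiter (---)")
}

func main() {
	fm, body, err := splitFrontmatter("---\npriority: 50\n---\n# Title\n")
	fmt.Printf("%q %q %v\n", fm, body, err)
}
```

With the splitting shared, each caller only wraps the two delimiter errors with its own domain context, which is why the per-domain constructors became dead code.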
diff --git a/internal/assets/commands/text/errors.yaml b/internal/assets/commands/text/errors.yaml
index 9d19b79ab..0b1941e06 100644
--- a/internal/assets/commands/text/errors.yaml
+++ b/internal/assets/commands/text/errors.yaml
@@ -446,12 +446,8 @@ err.skill.invalid-yaml:
short: 'skill: %s: invalid YAML frontmatter: %w'
err.skill.load:
short: 'skill: %s: %w'
-err.skill.missing-closing-delimiter:
- short: missing closing frontmatter delimiter (---)
err.skill.missing-name:
short: 'skill: %s is missing required ''name'' field'
-err.skill.missing-opening-delimiter:
- short: missing opening frontmatter delimiter (---)
err.skill.not-found:
short: 'skill %q not found'
err.skill.not-valid-dir:
@@ -478,10 +474,6 @@ err.steering.file-exists:
short: 'steering file already exists: %s'
err.steering.invalid-yaml:
short: 'steering: %s: invalid YAML frontmatter: %w'
-err.steering.missing-closing-delimiter:
- short: missing closing frontmatter delimiter (---)
-err.steering.missing-opening-delimiter:
- short: missing opening frontmatter delimiter (---)
err.steering.no-tool:
short: 'no tool specified: use --tool , --all, or set the tool field in .ctxrc'
err.steering.output-escapes-root:
@@ -572,3 +564,7 @@ err.validation.parse-file:
short: 'failed to parse %s: %w'
err.validation.working-directory:
short: 'failed to get working directory: %w'
+err.parser.missing-open-delim:
+ short: missing opening frontmatter delimiter (---)
+err.parser.missing-close-delim:
+ short: missing closing frontmatter delimiter (---)
diff --git a/internal/config/embed/text/err_parse.go b/internal/config/embed/text/err_parse.go
index c6043de8d..378ec81fb 100644
--- a/internal/config/embed/text/err_parse.go
+++ b/internal/config/embed/text/err_parse.go
@@ -23,4 +23,10 @@ const (
DescKeyErrParserUnmarshal = "err.parser.unmarshal"
// DescKeyErrParserWalkDir is the text key for err parser walk dir messages.
DescKeyErrParserWalkDir = "err.parser.walk-dir"
+ // DescKeyErrParserMissingOpenDelim is the text key for
+ // missing opening frontmatter delimiter.
+ DescKeyErrParserMissingOpenDelim = "err.parser.missing-open-delim"
+ // DescKeyErrParserMissingCloseDelim is the text key
+ // for missing closing frontmatter delimiter.
+ DescKeyErrParserMissingCloseDelim = "err.parser.missing-close-delim"
)
diff --git a/internal/config/embed/text/err_skill.go b/internal/config/embed/text/err_skill.go
index ee07b7f6b..482e2e4bf 100644
--- a/internal/config/embed/text/err_skill.go
+++ b/internal/config/embed/text/err_skill.go
@@ -23,15 +23,9 @@ const (
DescKeyErrSkillList = "err.skill.skill-list"
// DescKeyErrSkillLoad is the text key for err skill load messages.
DescKeyErrSkillLoad = "err.skill.load"
- // DescKeyErrSkillMissingClosingDelim is the text key for err skill missing
- // closing delim messages.
- DescKeyErrSkillMissingClosingDelim = "err.skill.missing-closing-delimiter"
// DescKeyErrSkillMissingName is the text key for err skill missing name
// messages.
DescKeyErrSkillMissingName = "err.skill.missing-name"
- // DescKeyErrSkillMissingOpeningDelim is the text key for err skill missing
- // opening delim messages.
- DescKeyErrSkillMissingOpeningDelim = "err.skill.missing-opening-delimiter"
// DescKeyErrSkillNotFound is the text key for err skill not found messages.
DescKeyErrSkillNotFound = "err.skill.not-found"
// DescKeyErrSkillNotValidDir is the text key for err skill not valid dir
diff --git a/internal/config/embed/text/err_steering.go b/internal/config/embed/text/err_steering.go
index 0da4a79ad..cd29d9fa2 100644
--- a/internal/config/embed/text/err_steering.go
+++ b/internal/config/embed/text/err_steering.go
@@ -23,12 +23,6 @@ const (
// DescKeyErrSteeringInvalidYAML is the text key for err steering invalid yaml
// messages.
DescKeyErrSteeringInvalidYAML = "err.steering.invalid-yaml"
- // DescKeyErrSteeringMissingClosingDelim is the text key for err steering
- // missing closing delim messages.
- DescKeyErrSteeringMissingClosingDelim = "err.steering.missing-closing-delimiter"
- // DescKeyErrSteeringMissingOpeningDelim is the text key for err steering
- // missing opening delim messages.
- DescKeyErrSteeringMissingOpeningDelim = "err.steering.missing-opening-delimiter"
// DescKeyErrSteeringNoTool is the text key for err steering no tool messages.
DescKeyErrSteeringNoTool = "err.steering.no-tool"
// DescKeyErrSteeringOutputEscapesRoot is the text key for err steering output
diff --git a/internal/err/parser/parser.go b/internal/err/parser/parser.go
index ab76381f8..06a74607f 100644
--- a/internal/err/parser/parser.go
+++ b/internal/err/parser/parser.go
@@ -7,12 +7,39 @@
package parser
import (
+ "errors"
"fmt"
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
"github.com/ActiveMemory/ctx/internal/config/embed/text"
)
+// MissingOpenDelim returns an error for a missing
+// opening frontmatter delimiter (---).
+//
+// Returns:
+// - error: "missing opening frontmatter delimiter"
+func MissingOpenDelim() error {
+ return errors.New(
+ desc.Text(
+ text.DescKeyErrParserMissingOpenDelim,
+ ),
+ )
+}
+
+// MissingCloseDelim returns an error for a missing
+// closing frontmatter delimiter (---).
+//
+// Returns:
+// - error: "missing closing frontmatter delimiter"
+func MissingCloseDelim() error {
+ return errors.New(
+ desc.Text(
+ text.DescKeyErrParserMissingCloseDelim,
+ ),
+ )
+}
+
// ReadFile wraps a session file read failure.
//
// Parameters:
diff --git a/internal/err/skill/skill.go b/internal/err/skill/skill.go
index 03a98735d..4583fe53d 100644
--- a/internal/err/skill/skill.go
+++ b/internal/err/skill/skill.go
@@ -7,7 +7,6 @@
package skill
import (
- "errors"
"fmt"
"github.com/ActiveMemory/ctx/internal/assets/read/desc"
@@ -94,17 +93,6 @@ func Load(name string, cause error) error {
)
}
-// MissingClosingDelimiter returns an error for missing closing
-// frontmatter delimiter.
-//
-// Returns:
-// - error: "missing closing frontmatter delimiter (---)"
-func MissingClosingDelimiter() error {
- return errors.New(
- desc.Text(text.DescKeyErrSkillMissingClosingDelim),
- )
-}
-
// MissingName returns an error for a skill manifest missing the
// required name field.
//
@@ -119,17 +107,6 @@ func MissingName(manifest string) error {
)
}
-// MissingOpeningDelimiter returns an error for missing opening
-// frontmatter delimiter.
-//
-// Returns:
-// - error: "missing opening frontmatter delimiter (---)"
-func MissingOpeningDelimiter() error {
- return errors.New(
- desc.Text(text.DescKeyErrSkillMissingOpeningDelim),
- )
-}
-
// NotFound returns an error when a skill cannot be found by name.
//
// Parameters:
diff --git a/internal/err/steering/steering.go b/internal/err/steering/steering.go
index 171b4de76..535f640a0 100644
--- a/internal/err/steering/steering.go
+++ b/internal/err/steering/steering.go
@@ -78,28 +78,6 @@ func InvalidYAML(filePath string, cause error) error {
)
}
-// MissingClosingDelimiter returns an error for missing closing
-// frontmatter delimiter.
-//
-// Returns:
-// - error: "missing closing frontmatter delimiter (---)"
-func MissingClosingDelimiter() error {
- return errors.New(
- desc.Text(text.DescKeyErrSteeringMissingClosingDelim),
- )
-}
-
-// MissingOpeningDelimiter returns an error for missing opening
-// frontmatter delimiter.
-//
-// Returns:
-// - error: "missing opening frontmatter delimiter (---)"
-func MissingOpeningDelimiter() error {
- return errors.New(
- desc.Text(text.DescKeyErrSteeringMissingOpeningDelim),
- )
-}
-
// NoTool returns an error when no tool is specified for sync.
//
// Returns:
diff --git a/internal/parse/markdown.go b/internal/parse/markdown.go
index 7c4e95150..bda07315e 100644
--- a/internal/parse/markdown.go
+++ b/internal/parse/markdown.go
@@ -11,6 +11,7 @@ import (
"github.com/ActiveMemory/ctx/internal/config/regex"
"github.com/ActiveMemory/ctx/internal/config/token"
+ errParser "github.com/ActiveMemory/ctx/internal/err/parser"
)
// StripLineNumbers removes Claude Code's line number prefixes from content.
@@ -85,3 +86,48 @@ func FenceForContent(content string) string {
}
return fence
}
+
+// SplitFrontmatter separates YAML frontmatter from a
+// markdown body. Frontmatter must start with a ---
+// line and end with a second --- line.
+//
+// Parameters:
+// - data: Raw file bytes
+//
+// Returns:
+// - []byte: YAML frontmatter (between delimiters)
+// - string: Body after the closing delimiter
+// - error: Non-nil if delimiters are missing
+func SplitFrontmatter(
+ data []byte,
+) ([]byte, string, error) {
+ content := strings.TrimLeft(
+ string(data), token.TrimCR,
+ )
+
+ if !strings.HasPrefix(
+ content, token.FrontmatterDelimiter,
+ ) {
+ return nil, "", errParser.MissingOpenDelim()
+ }
+
+ rest := content[len(token.FrontmatterDelimiter):]
+ rest = strings.TrimPrefix(rest, token.NewlineLF)
+
+ needle := token.NewlineLF +
+ token.FrontmatterDelimiter
+ idx := strings.Index(rest, needle)
+ if idx < 0 {
+ return nil, "", errParser.MissingCloseDelim()
+ }
+
+ fm := rest[:idx]
+ after := rest[idx+1+len(
+ token.FrontmatterDelimiter,
+ ):]
+ after = strings.TrimPrefix(
+ after, token.NewlineLF,
+ )
+
+ return []byte(fm), after, nil
+}
diff --git a/internal/skill/manifest.go b/internal/skill/manifest.go
index 5b263a417..db56e62a1 100644
--- a/internal/skill/manifest.go
+++ b/internal/skill/manifest.go
@@ -7,59 +7,34 @@
package skill
import (
- "strings"
-
"gopkg.in/yaml.v3"
- "github.com/ActiveMemory/ctx/internal/config/token"
errSkill "github.com/ActiveMemory/ctx/internal/err/skill"
+ "github.com/ActiveMemory/ctx/internal/parse"
)
-// parseManifest extracts YAML frontmatter and markdown body from a
-// SKILL.md file. The frontmatter is delimited by --- lines.
-func parseManifest(data []byte, name, dir string) (*Skill, error) {
- raw, body, splitErr := splitFrontmatter(data)
+// parseManifest extracts YAML frontmatter and markdown
+// body from a SKILL.md file.
+func parseManifest(
+ data []byte, name, dir string,
+) (*Skill, error) {
+ raw, body, splitErr := parse.SplitFrontmatter(
+ data,
+ )
if splitErr != nil {
return nil, errSkill.Load(name, splitErr)
}
sk := &Skill{}
- if unmarshalErr := yaml.Unmarshal(raw, sk); unmarshalErr != nil {
- return nil, errSkill.InvalidYAML(name, unmarshalErr)
+ if unmarshalErr := yaml.Unmarshal(
+ raw, sk,
+ ); unmarshalErr != nil {
+ return nil, errSkill.InvalidYAML(
+ name, unmarshalErr,
+ )
}
sk.Body = body
sk.Dir = dir
return sk, nil
}
-
-// splitFrontmatter separates YAML frontmatter from the markdown body.
-// Frontmatter must start with a --- line and end with a second --- line.
-func splitFrontmatter(
- data []byte,
-) (frontmatter []byte, body string, err error) {
- content := strings.TrimLeft(string(data), token.TrimCR)
-
- if !strings.HasPrefix(content, token.FrontmatterDelimiter) {
- return nil, "", errSkill.MissingOpeningDelimiter()
- }
-
- // Skip the opening delimiter line.
- rest := content[len(token.FrontmatterDelimiter):]
- rest = strings.TrimPrefix(rest, token.NewlineLF)
-
- needle := token.NewlineLF + token.FrontmatterDelimiter
- idx := strings.Index(rest, needle)
- if idx < 0 {
- return nil, "", errSkill.MissingClosingDelimiter()
- }
-
- fm := rest[:idx]
-
- // Skip past the closing delimiter line.
- after := rest[idx+1+len(token.FrontmatterDelimiter):]
- // Trim exactly one leading newline from the body if present.
- after = strings.TrimPrefix(after, token.NewlineLF)
-
- return []byte(fm), after, nil
-}
diff --git a/internal/steering/frontmatter.go b/internal/steering/frontmatter.go
index 948246d98..ef41841d0 100644
--- a/internal/steering/frontmatter.go
+++ b/internal/steering/frontmatter.go
@@ -7,45 +7,9 @@
package steering
import (
- "strings"
-
cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
- "github.com/ActiveMemory/ctx/internal/config/token"
- errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
)
-// splitFrontmatter separates YAML frontmatter from the markdown body.
-// Frontmatter must start with a --- line and end with a second --- line.
-func splitFrontmatter(
- data []byte,
-) (frontmatter []byte, body string, err error) {
- content := string(data)
- content = strings.TrimLeft(content, token.TrimCR)
-
- if !strings.HasPrefix(content, token.FrontmatterDelimiter) {
- return nil, "", errSteering.MissingOpeningDelimiter()
- }
-
- // Skip the opening delimiter line.
- rest := content[len(token.FrontmatterDelimiter):]
- rest = strings.TrimPrefix(rest, token.NewlineLF)
-
- needle := token.NewlineLF + token.FrontmatterDelimiter
- idx := strings.Index(rest, needle)
- if idx < 0 {
- return nil, "", errSteering.MissingClosingDelimiter()
- }
-
- fm := rest[:idx]
-
- // Skip past the closing delimiter line.
- after := rest[idx+1+len(token.FrontmatterDelimiter):]
- // Trim exactly one leading newline from the body if present.
- after = strings.TrimPrefix(after, token.NewlineLF)
-
- return []byte(fm), after, nil
-}
-
// applyDefaults sets default values for fields not present in the
// parsed frontmatter.
func applyDefaults(sf *SteeringFile) {
diff --git a/internal/steering/parse.go b/internal/steering/parse.go
index ea4a82dbf..8bd284f16 100644
--- a/internal/steering/parse.go
+++ b/internal/steering/parse.go
@@ -14,6 +14,7 @@ import (
cfgSteering "github.com/ActiveMemory/ctx/internal/config/steering"
"github.com/ActiveMemory/ctx/internal/config/token"
errSteering "github.com/ActiveMemory/ctx/internal/err/steering"
+ "github.com/ActiveMemory/ctx/internal/parse"
)
// defaultInclusion is the default inclusion mode when omitted.
@@ -30,7 +31,7 @@ var defaultInclusion = cfgSteering.InclusionManual
// Returns an error if frontmatter contains invalid YAML, identifying
// the file path and the parsing failure.
func Parse(data []byte, filePath string) (*SteeringFile, error) {
- raw, body, splitErr := splitFrontmatter(data)
+ raw, body, splitErr := parse.SplitFrontmatter(data)
if splitErr != nil {
return nil, errSteering.Parse(filePath, splitErr)
}
From ec10b81eb328851ef3bed536fa15b719f195830b Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 02:50:52 -0700
Subject: [PATCH 10/13] refactor: dedup threshold evaluation in sysinfo
Replace 3 structurally identical memory/swap/disk check blocks
with a data-driven loop over a byteCheck slice. Same behavior,
less repetition.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/sysinfo/threshold.go | 96 +++++++++++++++++++----------------
1 file changed, 53 insertions(+), 43 deletions(-)
diff --git a/internal/sysinfo/threshold.go b/internal/sysinfo/threshold.go
index 8063a5f20..e8b7c750f 100644
--- a/internal/sysinfo/threshold.go
+++ b/internal/sysinfo/threshold.go
@@ -32,56 +32,66 @@ import (
func Evaluate(snap Snapshot) []ResourceAlert {
var alerts []ResourceAlert
- // Memory
- if snap.Memory.Supported && snap.Memory.TotalBytes > 0 {
- pct := percent(snap.Memory.UsedBytes, snap.Memory.TotalBytes)
- msg := fmt.Sprintf(desc.Text(text.DescKeyResourcesAlertMemory),
- pct, FormatGiB(snap.Memory.UsedBytes), FormatGiB(snap.Memory.TotalBytes))
- if pct >= stats.ThresholdMemoryDangerPct {
- alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: cfgSysinfo.ResourceMemory, Message: msg,
- })
- } else if pct >= stats.ThresholdMemoryWarnPct {
- alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning,
- Resource: cfgSysinfo.ResourceMemory,
- Message: msg,
- })
- }
+ type byteCheck struct {
+ supported bool
+ used uint64
+ total uint64
+ descKey string
+ resource string
+ dangerPct float64
+ warnPct float64
}
- // Swap
- if snap.Memory.Supported && snap.Memory.SwapTotalBytes > 0 {
- pct := percent(snap.Memory.SwapUsedBytes, snap.Memory.SwapTotalBytes)
- msg := fmt.Sprintf(
- desc.Text(text.DescKeyResourcesAlertSwap),
- pct,
- FormatGiB(snap.Memory.SwapUsedBytes),
- FormatGiB(snap.Memory.SwapTotalBytes),
- )
- if pct >= stats.ThresholdSwapDangerPct {
- alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: cfgSysinfo.ResourceSwap, Message: msg,
- })
- } else if pct >= stats.ThresholdSwapWarnPct {
- alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning, Resource: cfgSysinfo.ResourceSwap, Message: msg,
- })
- }
+ checks := []byteCheck{
+ {
+ snap.Memory.Supported,
+ snap.Memory.UsedBytes,
+ snap.Memory.TotalBytes,
+ text.DescKeyResourcesAlertMemory,
+ cfgSysinfo.ResourceMemory,
+ stats.ThresholdMemoryDangerPct,
+ stats.ThresholdMemoryWarnPct,
+ },
+ {
+ snap.Memory.Supported,
+ snap.Memory.SwapUsedBytes,
+ snap.Memory.SwapTotalBytes,
+ text.DescKeyResourcesAlertSwap,
+ cfgSysinfo.ResourceSwap,
+ stats.ThresholdSwapDangerPct,
+ stats.ThresholdSwapWarnPct,
+ },
+ {
+ snap.Disk.Supported,
+ snap.Disk.UsedBytes,
+ snap.Disk.TotalBytes,
+ text.DescKeyResourcesAlertDisk,
+ cfgSysinfo.ResourceDisk,
+ stats.ThresholdDiskDangerPct,
+ stats.ThresholdDiskWarnPct,
+ },
}
- // Disk
- if snap.Disk.Supported && snap.Disk.TotalBytes > 0 {
- pct := percent(snap.Disk.UsedBytes, snap.Disk.TotalBytes)
- msg := fmt.Sprintf(desc.Text(text.DescKeyResourcesAlertDisk),
- pct, FormatGiB(snap.Disk.UsedBytes), FormatGiB(snap.Disk.TotalBytes))
- if pct >= stats.ThresholdDiskDangerPct {
+ for _, c := range checks {
+ if !c.supported || c.total == 0 {
+ continue
+ }
+ pct := percent(c.used, c.total)
+ msg := fmt.Sprintf(
+ desc.Text(c.descKey), pct,
+ FormatGiB(c.used), FormatGiB(c.total),
+ )
+ if pct >= c.dangerPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityDanger, Resource: cfgSysinfo.ResourceDisk, Message: msg,
+ Severity: SeverityDanger,
+ Resource: c.resource,
+ Message: msg,
})
- } else if pct >= stats.ThresholdDiskWarnPct {
+ } else if pct >= c.warnPct {
alerts = append(alerts, ResourceAlert{
- Severity: SeverityWarning, Resource: cfgSysinfo.ResourceDisk, Message: msg,
+ Severity: SeverityWarning,
+ Resource: c.resource,
+ Message: msg,
})
}
}
From 03ae63a9be6305f794dfbe3cdefd6b60f9c608a7 Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 02:58:48 -0700
Subject: [PATCH 11/13] fix: rename predicate functions per no-Is/Has/Can
convention
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Rename isConsumerLayer → consumerLayer in cross_package_types_test.go
and isSyncableTool → syncableTool in steering/{format,sync}.go.
isNumeric was already removed during earlier migrations.
Persist session decisions and learnings.
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
.context/DECISIONS.md | 30 ++++++++++++++++++++++
.context/LEARNINGS.md | 22 ++++++++++++++++
internal/audit/cross_package_types_test.go | 8 +++---
internal/steering/format.go | 4 +--
internal/steering/sync.go | 4 +--
5 files changed, 60 insertions(+), 8 deletions(-)
diff --git a/.context/DECISIONS.md b/.context/DECISIONS.md
index aa1d39f0d..dbf97c4b9 100644
--- a/.context/DECISIONS.md
+++ b/.context/DECISIONS.md
@@ -3,6 +3,8 @@
| Date | Decision |
|----|--------|
+| 2026-04-04 | TestNoMagicStrings and TestNoMagicValues no longer exempt const/var definitions outside config/ |
+| 2026-04-04 | String-typed enums belong in config/, not domain packages |
| 2026-04-03 | Output functions belong in write/ (consolidated) |
| 2026-04-03 | YAML text externalization pipeline (consolidated) |
| 2026-04-03 | Package taxonomy and code placement (consolidated) |
@@ -114,6 +116,34 @@ For significant decisions:
-->
+## [2026-04-04-025755] TestNoMagicStrings and TestNoMagicValues no longer exempt const/var definitions outside config/
+
+**Status**: Accepted
+
+**Context**: The isConstDef/isVarDef blanket exemption masked 156+ string and 7 numeric constants in the wrong package
+
+**Decision**: TestNoMagicStrings and TestNoMagicValues no longer exempt const/var definitions outside config/
+
+**Rationale**: Const definitions outside config/ are magic values in the wrong place — naming them does not fix the structural problem
+
+**Consequence**: All new code with string/numeric constants outside config/ fails these tests immediately
+
+---
+
+## [2026-04-04-025746] String-typed enums belong in config/, not domain packages
+
+**Status**: Accepted
+
+**Context**: Debated whether type IssueType string with const values belongs in domain or config. The string value is the same regardless of type annotation.
+
+**Decision**: String-typed enums belong in config/, not domain packages
+
+**Rationale**: Types without behavior belong in config. Promote to entity/ only when methods/interfaces appear.
+
+**Consequence**: All type Foo string + const blocks outside config/ are now caught by TestNoMagicStrings.
+
+---
+
## [2026-04-03-180000] Output functions belong in write/ (consolidated)
**Status**: Accepted
diff --git a/.context/LEARNINGS.md b/.context/LEARNINGS.md
index 7fc00b22b..8363dec6a 100644
--- a/.context/LEARNINGS.md
+++ b/.context/LEARNINGS.md
@@ -17,6 +17,8 @@ DO NOT UPDATE FOR:
| Date | Learning |
|----|--------|
+| 2026-04-04 | Format-verb strings are localizable text, not exempt from magic string checks |
+| 2026-04-04 | Agents add allowlist entries to make tests pass — guard every exemption |
| 2026-04-03 | Subagent scope creep and cleanup (consolidated) |
| 2026-04-03 | Bulk rename and replace_all hazards (consolidated) |
| 2026-04-03 | Import cycles and package splits (consolidated) |
@@ -106,6 +108,26 @@ DO NOT UPDATE FOR:
---
+## [2026-04-04-025813] Format-verb strings are localizable text, not exempt from magic string checks
+
+**Context**: Strings like '%d entries checked' were passing TestNoMagicStrings because the format-verb exemption was too broad
+
+**Lesson**: Any string containing English words alongside format directives is user-facing text that belongs in YAML assets
+
+**Application**: Removed format-verb, URL-scheme, HTML-entity, and err/ exemptions from TestNoMagicStrings
+
+---
+
+## [2026-04-04-025805] Agents add allowlist entries to make tests pass — guard every exemption
+
+**Context**: Found that every exemption map/allowlist in audit tests is a tempting shortcut for agents
+
+**Lesson**: Added DO NOT widen guard comments to all 10 exemption data structures across 7 test files
+
+**Application**: Every new audit test with an exemption must include the guard comment. Review PRs for drive-by allowlist additions.
+
+---
+
## [2026-04-03-180000] Subagent scope creep and cleanup (consolidated)
**Consolidated from**: 4 entries (2026-03-06 to 2026-03-23)
diff --git a/internal/audit/cross_package_types_test.go b/internal/audit/cross_package_types_test.go
index 39a835380..56cbd79f9 100644
--- a/internal/audit/cross_package_types_test.go
+++ b/internal/audit/cross_package_types_test.go
@@ -205,10 +205,10 @@ func sameModule(a, b string) bool {
}
// cli/* consuming any domain module is the
// standard consumer layer pattern.
- if isConsumerLayer(ma) && !isConsumerLayer(mb) {
+ if consumerLayer(ma) && !consumerLayer(mb) {
return true
}
- if isConsumerLayer(mb) && !isConsumerLayer(ma) {
+ if consumerLayer(mb) && !consumerLayer(ma) {
return true
}
// err/ consumed from cli/ or .
@@ -278,8 +278,8 @@ func moduleRoot(pkgPath string) string {
return parts[0]
}
-// isConsumerLayer returns true if the module root is a
+// consumerLayer returns true if the module root is a
// consumer layer that naturally imports domain types.
-func isConsumerLayer(mod string) bool {
+func consumerLayer(mod string) bool {
return strings.HasPrefix(mod, "cli/")
}
diff --git a/internal/steering/format.go b/internal/steering/format.go
index 36cc30a6e..29927f9e8 100644
--- a/internal/steering/format.go
+++ b/internal/steering/format.go
@@ -22,8 +22,8 @@ import (
ctxIo "github.com/ActiveMemory/ctx/internal/io"
)
-// isSyncableTool returns true if the tool supports native-format sync.
-func isSyncableTool(tool string) bool {
+// syncableTool returns true if the tool supports native-format sync.
+func syncableTool(tool string) bool {
for _, t := range syncableTools {
if t == tool {
return true
diff --git a/internal/steering/sync.go b/internal/steering/sync.go
index 3e6a0e61f..60043efd0 100644
--- a/internal/steering/sync.go
+++ b/internal/steering/sync.go
@@ -36,7 +36,7 @@ var syncableTools = []string{
func SyncTool(
steeringDir, projectRoot, tool string,
) (SyncReport, error) {
- if !isSyncableTool(tool) {
+ if !syncableTool(tool) {
supported := strings.Join(
syncableTools, token.CommaSpace,
)
@@ -116,7 +116,7 @@ func SyncAll(
// Returns nil if no stale files are found or if the steering
// directory cannot be read.
func StaleFiles(steeringDir, projectRoot, tool string) []string {
- if !isSyncableTool(tool) {
+ if !syncableTool(tool) {
return nil
}
From 448f68ad4be6de60f4359de44c6f943355a786b9 Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 03:16:24 -0700
Subject: [PATCH 12/13] fix: eliminate testOnlyExports, teach dead-export
scanner cross-package test usage
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Remove the testOnlyExports allowlist entirely. Instead, add Phase
2.5 to TestNoDeadExports that loads packages with Tests: true and
removes symbols used cross-package in test files. A symbol imported
by a test in a different package is test infrastructure, not dead.
Also fix parser.Parser → parser.find (stuttery name) and move
to tools.go (mixed visibility).
Spec: specs/ast-audit-contributor-guide.md
Signed-off-by: Jose Alekhinne
---
internal/audit/dead_exports_test.go | 74 +++++++++++++++---------
internal/journal/parser/markdown_test.go | 2 +-
internal/journal/parser/parser.go | 16 -----
internal/journal/parser/parser_test.go | 4 +-
internal/journal/parser/tools.go | 16 +++++
internal/rc/rc.go | 4 +-
6 files changed, 69 insertions(+), 47 deletions(-)
diff --git a/internal/audit/dead_exports_test.go b/internal/audit/dead_exports_test.go
index ce967b13e..856d69b13 100644
--- a/internal/audit/dead_exports_test.go
+++ b/internal/audit/dead_exports_test.go
@@ -26,29 +26,6 @@ import (
// internal and may be used via reflection or are
// genuinely file-scoped helpers.
-// DO NOT add entries here to make tests pass. New code must
-// conform to the check. Widening requires a dedicated PR with
-// justification for each entry.
-//
-// testOnlyExports lists exported symbols that exist
-// solely for test usage. The dead-export scanner skips
-// test files, so these would otherwise be false
-// positives. Keep this list small: prefer eliminating
-// the export over adding it here.
-var testOnlyExports = map[string]bool{
- "github.com/ActiveMemory/ctx/internal/config/hook.CategoryCustomizable": true,
- "github.com/ActiveMemory/ctx/internal/assets/hooks/messages.Hooks": true,
- "github.com/ActiveMemory/ctx/internal/cli/pad/core/store.EnsureGitignore": true,
- "github.com/ActiveMemory/ctx/internal/cli/system/core/state.SetDirForTest": true,
- "github.com/ActiveMemory/ctx/internal/config/asset.DirReferences": true,
- "github.com/ActiveMemory/ctx/internal/config/regex.Phase": true,
- "github.com/ActiveMemory/ctx/internal/inspect.StartsWithCtxMarker": true,
- "github.com/ActiveMemory/ctx/internal/journal/parser.Parser": true,
- "github.com/ActiveMemory/ctx/internal/mcp/proto.InitializeParams": true,
- "github.com/ActiveMemory/ctx/internal/mcp/proto.UnsubscribeParams": true,
- "github.com/ActiveMemory/ctx/internal/rc.Reset": true,
-}
-
// DO NOT add entries here to make tests pass. New code must
// conform to the check. Widening requires a dedicated PR with
// justification for each entry.
@@ -148,9 +125,30 @@ func TestNoDeadExports(t *testing.T) {
}
}
- // Phase 3: remove test-only allowlist entries.
- for key := range testOnlyExports {
- delete(defs, key)
+ // Phase 2.5: remove symbols used cross-package in
+ // test files. If a test in package B imports a
+ // symbol from package A, the symbol is test
+ // infrastructure — not dead. Same-package test
+ // usage does not count (those should be unexported).
+ testPkgs := loadTestPackages(t)
+ for _, pkg := range testPkgs {
+ for ident, obj := range pkg.TypesInfo.Uses {
+ if obj == nil || obj.Pkg() == nil {
+ continue
+ }
+ pos := pkg.Fset.Position(ident.Pos())
+ if !isTestFile(pos.Filename) {
+ continue
+ }
+ // Cross-package: the test's package path
+ // differs from the symbol's defining package.
+ if pkg.PkgPath == obj.Pkg().Path() {
+ continue
+ }
+ key := obj.Pkg().Path() + "." +
+ obj.Name()
+ delete(defs, key)
+ }
}
// Phase 3b: remove Linux-only exports (used from
@@ -212,6 +210,30 @@ func loadCmdPackages(t *testing.T) []*packages.Package {
return pkgs
}
+// loadTestPackages loads internal/ packages WITH test
+// files for cross-package test usage detection.
+func loadTestPackages(
+ t *testing.T,
+) []*packages.Package {
+ t.Helper()
+ cfg := &packages.Config{
+ Mode: packages.NeedName |
+ packages.NeedFiles |
+ packages.NeedSyntax |
+ packages.NeedTypes |
+ packages.NeedTypesInfo,
+ Tests: true,
+ }
+ pkgs, loadErr := packages.Load(
+ cfg,
+ "github.com/ActiveMemory/ctx/internal/...",
+ )
+ if loadErr != nil {
+ t.Fatalf("packages.Load tests: %v", loadErr)
+ }
+ return pkgs
+}
+
// isExported reports whether name starts with an
// uppercase letter.
func isExported(name string) bool {
diff --git a/internal/journal/parser/markdown_test.go b/internal/journal/parser/markdown_test.go
index 7d0f48acc..ed272a70a 100644
--- a/internal/journal/parser/markdown_test.go
+++ b/internal/journal/parser/markdown_test.go
@@ -422,7 +422,7 @@ func TestRegisteredTools_IncludesMarkdown(t *testing.T) {
}
func TestGetParser_Markdown(t *testing.T) {
- p := Parser(session.ToolMarkdown)
+ p := find(session.ToolMarkdown)
if p == nil {
t.Fatalf("expected parser for %q", session.ToolMarkdown)
}
diff --git a/internal/journal/parser/parser.go b/internal/journal/parser/parser.go
index f3a67e329..93569c84e 100644
--- a/internal/journal/parser/parser.go
+++ b/internal/journal/parser/parser.go
@@ -194,19 +194,3 @@ func FindSessionsForCWD(
return s.CWD == cwd
}, additionalDirs...)
}
-
-// Parser returns a parser for the specified tool.
-//
-// Parameters:
-// - tool: Tool identifier (e.g., "claude-code")
-//
-// Returns:
-// - Session: The parser for the tool, or nil if not found
-func Parser(tool string) Session {
- for _, p := range registeredParsers {
- if p.Tool() == tool {
- return p
- }
- }
- return nil
-}
diff --git a/internal/journal/parser/parser_test.go b/internal/journal/parser/parser_test.go
index d74b344fe..7c6cbe7e5 100644
--- a/internal/journal/parser/parser_test.go
+++ b/internal/journal/parser/parser_test.go
@@ -340,7 +340,7 @@ func TestRegisteredTools(t *testing.T) {
}
func TestGetParser(t *testing.T) {
- parser := Parser("claude-code")
+ parser := find("claude-code")
if parser == nil {
t.Error("expected parser for 'claude-code'")
}
@@ -348,7 +348,7 @@ func TestGetParser(t *testing.T) {
t.Errorf("expected tool 'claude-code', got '%s'", parser.Tool())
}
- unknown := Parser("unknown-tool")
+ unknown := find("unknown-tool")
if unknown != nil {
t.Error("expected nil for unknown tool")
}
diff --git a/internal/journal/parser/tools.go b/internal/journal/parser/tools.go
index 41a4db536..45aa10489 100644
--- a/internal/journal/parser/tools.go
+++ b/internal/journal/parser/tools.go
@@ -6,6 +6,22 @@
package parser
+// find returns a parser for the specified tool.
+//
+// Parameters:
+// - tool: Tool identifier (e.g., "claude-code")
+//
+// Returns:
+// - Session: The parser, or nil if not found
+func find(tool string) Session {
+ for _, p := range registeredParsers {
+ if p.Tool() == tool {
+ return p
+ }
+ }
+ return nil
+}
+
// registeredTools returns the list of supported tools.
//
// Returns:
diff --git a/internal/rc/rc.go b/internal/rc/rc.go
index b61972c9f..4f36bd1e0 100644
--- a/internal/rc/rc.go
+++ b/internal/rc/rc.go
@@ -441,8 +441,8 @@ func OverrideContextDir(ctxDir string) {
rcOverrideDir = ctxDir
}
-// Reset clears the cached configuration, forcing reload on the next access.
-// This is primarily useful for testing.
+// Reset clears the cached configuration, forcing
+// reload on the next access.
func Reset() {
rcMu.Lock()
defer rcMu.Unlock()
From d35e98bf5608850eb92e2b625250ae7b2ebcf144 Mon Sep 17 00:00:00 2001
From: Jose Alekhinne
Date: Sat, 4 Apr 2026 03:20:15 -0700
Subject: [PATCH 13/13] docs: regenerate site pages
Signed-off-by: Jose Alekhinne
---
site/cli/index.html | 67 +
site/cli/mcp/index.html | 157 +
site/cli/tools/index.html | 553 ++
site/home/configuration/index.html | 76 +-
site/home/context-files/index.html | 15 +
site/home/getting-started/index.html | 51 +-
site/reference/audit-conventions/index.html | 68 +-
site/search.json | 2 +-
.../index.html | 7510 +++++++++++++++++
9 files changed, 8470 insertions(+), 29 deletions(-)
create mode 100644 site/superpowers/plans/2026-03-31-commit-context-tracing/index.html
diff --git a/site/cli/index.html b/site/cli/index.html
index 107a871e9..f2fa36535 100644
--- a/site/cli/index.html
+++ b/site/cli/index.html
@@ -3646,6 +3646,10 @@
Manage lifecycle hooks: shell scripts that fire at specific events
+during AI sessions.
+
Hooks live in .context/hooks/<type>/ directories, organized by
+event type. Each hook is an executable script that receives JSON
+via stdin and returns JSON via stdout.
It should cite specific context: current tasks, recent decisions,
@@ -4063,8 +4110,8 @@
5. Set Up Companion Tools (
use it for impact analysis and dependency awareness.
-
# Index your project for GitNexus (run once, then after major changes)
-npxgitnexusanalyze
+
# Index your project for GitNexus (run once, then after major changes)
+npxgitnexusanalyze
Both are optional MCP servers: if they are not connected, skills degrade
gracefully to built-in capabilities. See
diff --git a/site/reference/audit-conventions/index.html b/site/reference/audit-conventions/index.html
index 40b84f7bd..c9813fee5 100644
--- a/site/reference/audit-conventions/index.html
+++ b/site/reference/audit-conventions/index.html
@@ -3692,6 +3692,17 @@
Exported methods that return bool must not use Is, Has, or
+Can prefixes. The predicate reads more naturally without them,
+especially at call sites where the package name provides context.
Rule: Drop the prefix. Private helpers may use prefixes when it
+reads more naturally (isValid in a local context is fine). This
+convention applies to exported methods and package-level functions.
+See CONVENTIONS.md "Predicates" section.
+
This is not yet enforced by an AST test — it requires semantic
+understanding of return types and naming intent that makes automated
+detection fragile. Apply during code review.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#what-ctx-is-not","level":2,"title":"What ctx Is Not","text":"
Avoid Category Errors
Mislabeling ctx guarantees misuse.
ctx is not a memory feature.
ctx is not prompt engineering.
ctx is not a productivity hack.
ctx is not automation theater.
ctx is a system for preserving intent under scale.
ctx is infrastructure.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#verified-reality-is-the-scoreboard","level":2,"title":"Verified Reality Is the Scoreboard","text":"
Activity is a False Proxy
Output volume correlates poorly with impact.
Code is not progress.
Activity is not impact.
The only truth that compounds is verified change.
Verified change must exist in the real world.
Hypotheses are cheap; outcomes are not.
ctx captures:
What we expected;
What we observed;
Where reality diverged.
If we cannot predict, measure, and verify the result...
...it does not count.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#build-to-learn-not-to-accumulate","level":2,"title":"Build to Learn, Not to Accumulate","text":"
Prototypes Have an Expiration Date
A prototype's value is information, not longevity.
Prototypes exist to reduce uncertainty.
We build to:
Test assumptions;
Validate architecture;
Answer specific questions.
Not everything.
Not blindly.
Not permanently.
ctx records archaeology so the cost is paid once.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#failures-are-assets","level":2,"title":"Failures Are Assets","text":"
","path":["The ctx Manifesto"],"tags":[]},{"location":"#encode-intent-into-the-environment","level":2,"title":"Encode Intent Into the Environment","text":"
Goodwill Does Not Belong at the Table
Alignment that depends on memory will drift.
Alignment cannot depend on memory or goodwill.
Do not rely on people to remember.
Encode the behavior, so it happens by default.
Intent is encoded as:
Policies;
Schemas;
Constraints;
Evaluation harnesses.
Rules must be machine-readable.
Laws must be enforceable.
If intent is implicit, drift is guaranteed.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#cost-is-a-first-class-signal","level":2,"title":"Cost Is a First-Class Signal","text":"
Attention Is the Scarcest Resource
Not ideas.
Not ambition.
Ideas do not compete on time:
They compete on cost and impact:
Attention is finite.
Compute is finite.
Context is expensive.
We continuously ask:
What is the most valuable next action?
What outcome justifies the cost?
ctx guides allocation.
Learning reshapes priority.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#show-the-why","level":2,"title":"Show the Why","text":"
{} (code, artifacts, apps, binaries) produce outputs; they do not preserve reasoning.
Systems that cannot explain themselves will not be trusted.
Traceability builds trust.
{} --> what\n\n ctx --> why\n
We record:
Explored paths;
Rejected options;
Assumptions made;
Evidence used.
Opaque systems erode trust:
Transparent ctx compounds understanding.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#continuously-verify-the-system","level":2,"title":"Continuously Verify the System","text":"
Stability is Temporary
Every assumption has a half-life:
Models drift.
Tools change.
Assumptions rot.
ctx must be verified against reality.
Trust is a spectrum.
Trust is continuously re-earned:
Benchmarks,
regressions,
and evaluations...
...are safety rails.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-is-leverage","level":2,"title":"ctx Is Leverage","text":"
Stories, insights, and lessons learned from building and using ctx.
","path":["Blog"],"tags":[]},{"location":"blog/#releases","level":2,"title":"Releases","text":"","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v080-the-architecture-release","level":3,"title":"ctx v0.8.0: The Architecture Release","text":"
March 23, 2026: 374 commits, 1,708 Go files touched, and a near-complete architectural overhaul. Every CLI package restructured into cmd/ + core/ taxonomy, all user-facing strings externalized to YAML, MCP server for tool-agnostic AI integration, and the memory bridge connecting Claude Code's auto-memory to .context/.
","path":["Blog"],"tags":[]},{"location":"blog/#field-notes","level":2,"title":"Field Notes","text":"","path":["Blog"],"tags":[]},{"location":"blog/#code-structure-as-an-agent-interface-what-19-ast-tests-taught-us","level":3,"title":"Code Structure as an Agent Interface: What 19 AST Tests Taught Us","text":"
April 2, 2026: We built 19 AST-based audit tests in a single session, touching 300+ files. In the process we discovered that \"old-school\" code quality constraints (no magic numbers, centralized error handling, 80-char lines, documentation) are exactly the constraints that make code readable to AI agents. If an agent interacts with your codebase, your codebase already is an interface. You just have not designed it as one.
Topics: ast, code quality, agent readability, conventions, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#we-broke-the-31-rule","level":3,"title":"We Broke the 3:1 Rule","text":"
March 23, 2026: After v0.6.0, we ran 198 feature commits across 17 days before consolidating. The 3:1 rule says consolidate every 4th session. We did it after the 66th. The result: an 18-day, 181-commit cleanup marathon that took longer than the feature run itself. A follow-up to The 3:1 Ratio with empirical evidence from the v0.8.0 cycle.
Topics: consolidation, technical debt, development workflow, convention drift, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#context-engineering","level":2,"title":"Context Engineering","text":"","path":["Blog"],"tags":[]},{"location":"blog/#agent-memory-is-infrastructure","level":3,"title":"Agent Memory Is Infrastructure","text":"
March 4, 2026: Every AI coding agent starts fresh. The obvious fix is \"memory.\" But there's a different problem memory doesn't touch: the project itself accumulates knowledge that has nothing to do with any single session. This post argues that agent memory is L2 (runtime cache); what's missing is L3 (project infrastructure).
Topics: context engineering, agent memory, infrastructure, persistence, team knowledge
","path":["Blog"],"tags":[]},{"location":"blog/#context-as-infrastructure","level":3,"title":"Context as Infrastructure","text":"
February 17, 2026: Where does your AI's knowledge live between sessions? If the answer is \"in a prompt I paste at the start,\" you are treating context as a consumable. This post argues for treating it as infrastructure instead: persistent files, separation of concerns, two-tier storage, progressive disclosure, and the filesystem as the most mature interface available.
","path":["Blog"],"tags":[]},{"location":"blog/#the-attention-budget-why-your-ai-forgets-what-you-just-told-it","level":3,"title":"The Attention Budget: Why Your AI Forgets What You Just Told It","text":"
February 3, 2026: Every token you send to an AI consumes a finite resource: the attention budget. Understanding this constraint shaped every design decision in ctx: hierarchical file structure, explicit budgets, progressive disclosure, and filesystem-as-index.
","path":["Blog"],"tags":[]},{"location":"blog/#before-context-windows-we-had-bouncers","level":3,"title":"Before Context Windows, We Had Bouncers","text":"
February 14, 2026: IRC is stateless. You disconnect, you vanish. Modern systems are not much different. This post traces the line from IRC bouncers to context engineering: stateless protocols require stateful wrappers, volatile interfaces require durable memory.
Topics: context engineering, infrastructure, IRC, persistence, state continuity
","path":["Blog"],"tags":[]},{"location":"blog/#the-last-question","level":3,"title":"The Last Question","text":"
February 28, 2026: In 1956, Asimov wrote a story about a question that spans the entire future of the universe. A reading of \"The Last Question\" through the lens of persistence, substrate migration, and what it means to build systems where sessions don't reset.
Topics: context continuity, long-lived systems, persistence, intelligence over time, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#agent-behavior-and-design","level":2,"title":"Agent Behavior and Design","text":"","path":["Blog"],"tags":[]},{"location":"blog/#the-dog-ate-my-homework-teaching-ai-agents-to-read-before-they-write","level":3,"title":"The Dog Ate My Homework: Teaching AI Agents to Read Before They Write","text":"
February 25, 2026: You wrote the playbook. The agent skipped all of it. Five sessions, five failure modes, and the discovery that observable compliance beats perfect compliance.
","path":["Blog"],"tags":[]},{"location":"blog/#skills-that-fight-the-platform","level":3,"title":"Skills That Fight the Platform","text":"
February 4, 2026: When custom skills conflict with system prompt defaults, the AI has to reconcile contradictory instructions. Five conflict patterns discovered while building ctx.
Topics: context engineering, skill design, system prompts, antipatterns, AI safety primitives
","path":["Blog"],"tags":[]},{"location":"blog/#the-anatomy-of-a-skill-that-works","level":3,"title":"The Anatomy of a Skill That Works","text":"
February 7, 2026: I had 20 skills. Most were well-intentioned stubs. Then I rewrote all of them. Seven lessons emerged: quality gates prevent premature execution, negative triggers are load-bearing, examples set boundaries better than rules.
February 5, 2026: I found a well-crafted consolidation skill. Applied my own E/A/R framework: 70% was noise. This post is about why good skills can't be copy-pasted, and how to grow them from your project's own drift history.
","path":["Blog"],"tags":[]},{"location":"blog/#not-everything-is-a-skill","level":3,"title":"Not Everything Is a Skill","text":"
February 8, 2026: I ran an 8-agent codebase audit and got actionable results. The natural instinct was to wrap the prompt as a skill. Then I applied my own criteria: it failed all three tests.
Topics: skill design, context engineering, automation discipline, recipes, agent teams
","path":["Blog"],"tags":[]},{"location":"blog/#defense-in-depth-securing-ai-agents","level":3,"title":"Defense in Depth: Securing AI Agents","text":"
February 9, 2026: The security advice was \"use CONSTITUTION.md for guardrails.\" That is wishful thinking. Five defense layers for unattended AI agents, each with a bypass, and why the strength is in the combination.
","path":["Blog"],"tags":[]},{"location":"blog/#development-practice","level":2,"title":"Development Practice","text":"","path":["Blog"],"tags":[]},{"location":"blog/#code-is-cheap-judgment-is-not","level":3,"title":"Code Is Cheap. Judgment Is Not.","text":"
February 17, 2026: AI does not replace workers. It replaces unstructured effort. Three weeks of building ctx with an AI agent proved it: YOLO mode showed production is cheap, the 3:1 ratio showed judgment has a cadence.
Topics: AI and expertise, context engineering, judgment vs production, human-AI collaboration, automation discipline
February 17, 2026: AI makes technical debt worse: not because it writes bad code, but because it writes code so fast that drift accumulates before you notice. Three feature sessions, one consolidation session.
Topics: consolidation, technical debt, development workflow, convention drift, code quality
","path":["Blog"],"tags":[]},{"location":"blog/#refactoring-with-intent-human-guided-sessions-in-ai-development","level":3,"title":"Refactoring with Intent: Human-Guided Sessions in AI Development","text":"
February 1, 2026: The YOLO mode shipped 14 commands in a week. But technical debt doesn't send invoices. This is the story of what happened when we started guiding the AI with intent.
Topics: refactoring, code quality, documentation standards, module decomposition, YOLO versus intentional development
","path":["Blog"],"tags":[]},{"location":"blog/#how-deep-is-too-deep","level":3,"title":"How Deep Is Too Deep?","text":"
February 12, 2026: I kept feeling like I should go deeper into ML theory. Then I spent a week debugging an agent failure that had nothing to do with model architecture. When depth compounds and when it doesn't.
","path":["Blog"],"tags":[]},{"location":"blog/#agent-workflows","level":2,"title":"Agent Workflows","text":"","path":["Blog"],"tags":[]},{"location":"blog/#parallel-agents-merge-debt-and-the-myth-of-overnight-progress","level":3,"title":"Parallel Agents, Merge Debt, and the Myth of Overnight Progress","text":"
February 17, 2026: You discover agents can run in parallel. So you open ten terminals. It is not progress: it is merge debt being manufactured in real time. The five-agent ceiling and why role separation beats file locking.
Topics: agent workflows, parallelism, verification, context engineering, engineering practice
","path":["Blog"],"tags":[]},{"location":"blog/#parallel-agents-with-git-worktrees","level":3,"title":"Parallel Agents with Git Worktrees","text":"
February 14, 2026: I had 30 open tasks that didn't touch the same files. Using git worktrees to partition a backlog by file overlap, run 3-4 agents simultaneously, and merge the results.
","path":["Blog"],"tags":[]},{"location":"blog/#field-notes-and-signals","level":2,"title":"Field Notes and Signals","text":"","path":["Blog"],"tags":[]},{"location":"blog/#when-a-system-starts-explaining-itself","level":3,"title":"When a System Starts Explaining Itself","text":"
February 17, 2026: Every new substrate begins as a private advantage. Reality begins when other people start describing it in their own language. \"Better than Adderall\" is not praise; it is a diagnostic.
Topics: field notes, adoption signals, infrastructure vs tools, context engineering, substrates
February 15, 2026: I needed a static site generator for the journal system. The instinct was Hugo. But instinct is not analysis. Why zensical was the right choice: thin dependencies, MkDocs-compatible config, and zero lock-in.
","path":["Blog"],"tags":[]},{"location":"blog/#releases_1","level":2,"title":"Releases","text":"","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v060-the-integration-release","level":3,"title":"ctx v0.6.0: The Integration Release","text":"
February 16, 2026: ctx is now a Claude Marketplace plugin. Two commands, no build step, no shell scripts. v0.6.0 replaces six Bash hook scripts with compiled Go subcommands and ships 25+ Skills as a plugin.
Topics: release, plugin system, Claude Marketplace, distribution, security hardening
","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v030-the-discipline-release","level":3,"title":"ctx v0.3.0: The Discipline Release","text":"
February 15, 2026: No new headline feature. Just 35+ documentation and quality commits against ~15 feature commits. What a release looks like when the ratio of polish to features is 3:1.
","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v020-the-archaeology-release","level":3,"title":"ctx v0.2.0: The Archaeology Release","text":"
February 1, 2026: What if your AI could remember everything? Not just the current session, but every session. ctx v0.2.0 introduces the recall and journal systems.
","path":["Blog"],"tags":[]},{"location":"blog/#building-ctx-using-ctx-a-meta-experiment-in-ai-assisted-development","level":3,"title":"Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development","text":"
January 27, 2026: What happens when you build a tool designed to give AI memory, using that very same tool to remember what you're building? This is the story of ctx.
Topics: dogfooding, AI-assisted development, Ralph Loop, session persistence, architectural decisions
","path":["Blog"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/","level":1,"title":"Building ctx Using ctx","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
References to .context/sessions/, auto-save hooks, and SessionEnd auto-save in this post reflect the architecture at the time of writing.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#a-meta-experiment-in-ai-assisted-development","level":2,"title":"A Meta-Experiment in AI-Assisted Development","text":"
Jose Alekhinne / 2026-01-27
Can a Tool Design Itself?
What happens when you build a tool designed to give AI memory, using that very same tool to remember what you are building?
This is the story of ctx, how it evolved from a hasty \"YOLO mode\" experiment to a disciplined system for persistent AI context, and what I have learned along the way.
Context is a Record
Context is a persistent record.
By \"context\", I don't mean model memory or stored thoughts:
I mean the durable record of decisions, learnings, and intent that normally evaporates between sessions.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#ai-amnesia","level":2,"title":"AI Amnesia","text":"
Every developer who works with AI code generators knows the frustration:
You have a deep, productive session where the AI understands your codebase, your conventions, your decisions. And then you close the terminal.
Tomorrow, it's a blank slate. The AI has forgotten everything.
That is \"reset amnesia\", and it's not just annoying: it's expensive.
Every session starts with:
Re-explaining context;
Re-reading files;
Re-discovering decisions that were already made.
I Needed Context
\"I don't want to lose this discussion...
...I am a brain-dead developer YOLO'ing my way out.\"
☝️ that's exactly what I said to Claude when I first started working on ctx.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-genesis","level":2,"title":"The Genesis","text":"
The project started as \"Active Memory\" (amem): a CLI tool to persist AI context across sessions.
The core idea was simple:
Create a .context/ directory with structured Markdown files for decisions, learnings, tasks, and conventions.
The AI reads these at session start and writes to them before the session ends.
There is no step 3.
The first commit was just scaffolding. But within hours, the Ralph Loop (An iterative AI development workflow) had produced a working CLI:
Not one, not two, but a whopping fourteen core commands shipped in rapid succession!
I was YOLO'ing like there was no tomorrow:
Auto-accept every change;
Let the AI run free;
Ship features fast.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-meta-experiment-using-amem-to-build-amem","level":2,"title":"The Meta-Experiment: Using amem to Build amem","text":"
Here's where it gets interesting: On January 20th, I asked:
\"Can I use amem to help you remember this context when I restart?\"
The answer was yes, but with a gap:
Autoload worked (via Claude Code's PreToolUse hook), but auto-save was missing: If the user quit with Ctrl+C, everything since the last manual save was lost.
That session became the first real test of the system.
Here is the first session file we recorded:
## Key Discussion Points\n\n### 1. amem vs Ralph Loop - They're Separate Systems\n\n**User's question**: \"How do I use the binary to recreate this project?\"\n\n**Answer discovered**: `amem` is for context management, Ralph Loop is for \ndevelopment workflow. They are complementary but separate.\n\n### 2. Two Tiers of Context Persistence\n\n| Tier | What | Why |\n|-----------|-----------------------------|-------------------------------|\n| Curated | Learnings, decisions, tasks | Quick reload, token-efficient |\n| Full dump | Entire conversation | Safety net, nothing lost |\n\n| Where |\n|------------------------|\n| .context/*.md |\n| .context/sessions/*.md |\n
This session file (written by the AI to preserve its own context) became the template for how ctx handles session persistence.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-rename","level":2,"title":"The Rename","text":"
By January 21st, I realized \"Active Memory\" was too generic, and (arguably) too marketing-smelly.
Besides, the binary was already called ctx (short for Context), the directory was .context/, and the slash commands would be /ctx-*.
So it followed that the project should be renamed to ctx to make things make sense.
The rename touched 100+ files but was clean: a find-and-replace with Go's type system catching any misses.
The git history tells the story:
0e8f6bb feat: rename amem to ctx and add Claude Code integration\n87dcfa1 README.\n4f0e195 feat: separate orchestrator directive from agent tasks\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#yolo-mode-fast-but-dangerous","level":2,"title":"YOLO Mode: Fast, But Dangerous","text":"
The Ralph Loop made feature development incredibly fast.
But it created technical debt that I didn't notice until later.
A comparison session on January 25th revealed the patterns:
| YOLO Pattern | What We Found |
|---|---|
| "TASKS.md" scattered in 10 files | Same string literal everywhere, no constants |
| dir + "/" + file | Should be filepath.Join() |
| Monolithic cli_test.go (1500+ lines) | Tests disconnected from implementations |
| package initcmd in init/ folder | Go's "init" is reserved: subtle naming collision |
Here is another analysis made by ctx:
● Based on my analysis, here are the key differences:\n\nYOLO Mode (Pre-040ce99)\n- Feature-first: Added slash commands, tests, templates rapidly\n- Scattered magic strings: \"TASKS.md\", \"decision\", \".context\" spread across files\n- Quick file creation: New files without organizational patterns\n- Working but inconsistent: Code functioned but lacked systematic structure\n\nHuman-Guided Mode (Post-040ce99)\n- Consolidation focus: Centralized constants in config package\n- Consistent naming: Dir, File, Filename, UpdateType prefixes\n- Self-referential constants: FileType map uses constants as keys, not literals\n- Proper path construction: filepath.Join() instead of +\"/\"+\n- Colocated tests: Tests next to implementations\n- Canonical naming: Package name = folder name\n
The fix required a human-guided refactoring session. I continued to do that before every major release, from that point on.
We introduced internal/config/config.go with semantic prefixes:
What I begrudgingly learned was: YOLO mode is effective for velocity but accumulates debt.
So I took a mental note to schedule periodic consolidation sessions.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-dogfooding-test-that-failed","level":2,"title":"The Dogfooding Test That Failed","text":"
On January 21st, I ran an experiment: have another Claude instance rebuild ctx from scratch using only the specs and PROMPT.md.
The Ralph Loop ran, all tasks got checked off, the loop exited successfully.
But the binary was broken!
Commands just printed help text instead of executing.
All tasks were marked \"complete\" but the implementation didn't work.
Here's what ctx discovered:
## Key Findings\n\n### Dogfooding Binary Is Broken\n- Commands don't execute: they just print root help text\n- All tasks were marked complete but binary doesn't work\n- Lesson: \"tasks checked off\" ≠ \"implementation works\"\n
This was humbling, to say the least.
I realized I had the same blind spot in my own codebase: no integration tests that actually invoked the binary.
So I added:
Integration tests for all commands;
Coverage targets (60-80% per package);
Smoke tests in CI;
A constitution rule: "All code must pass tests before commit".
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-constitution-versus-conventions","level":2,"title":"The Constitution versus Conventions","text":"
As lessons accumulated, there was the temptation to add everything to CONSTITUTION.md as \"inviolable rules\".
But I resisted.
The constitution should contain only truly inviolable invariants:
Security (no secrets, no customer data)
Quality (tests must pass)
Process (decisions need records)
ctx invocation (always use PATH, never fallback)
Everything else (coding style, file organization, naming conventions...) should go into CONVENTIONS.md.
Here's how ctx explained why the distinction was important:
Decision record, 2026-01-25
Overly strict constitution creates friction and gets ignored.
Conventions can be bent; constitution cannot.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#hooks-harder-than-they-look","level":2,"title":"Hooks: Harder Than They Look","text":"
Claude Code hooks seemed simple: Run a script before/after certain events.
My hook to block non-PATH ctx invocations initially matched too broadly:
# WRONG - matches /home/user/ctx/internal/file.go (ctx as directory)\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# RIGHT - matches ctx as binary only\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-session-files","level":2,"title":"The Session Files","text":"
At the time of this writing, this project's session directory (.context/sessions/) contains 40+ files from its development.
They are not part of the source code due to security, privacy, and size concerns.
Middle Ground: the Scratchpad
For sensitive notes that do need to travel with the project, ctx pad stores encrypted one-liners in git, and ctx pad add \"label\" --file PATH can ingest small files.
See Scratchpad for details.
However, they are invaluable for the project's progress.
Each session file is a timestamped Markdown with:
Summary of what has been accomplished;
Key decisions made;
Learnings discovered;
Tasks for the next session;
Technical context (platform, versions).
These files are not autoloaded (that would bust the token budget).
They are what I see as the \"archaeological record\" of ctx:
When the AI needs deeper information about why something was done, it digs into the sessions.
Auto-generated session files used a naming convention:
In current releases, ctx uses a journal instead: the enrichment process generates meaningful slugs from context automatically, so there is no need to manually save sessions.
The SessionEnd hook captured transcripts automatically. Even Ctrl+C was caught.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-decision-log-18-architectural-decisions","level":2,"title":"The Decision Log: 18 Architectural Decisions","text":"
ctx helps record every significant architectural choice in .context/DECISIONS.md.
Here are some highlights:
Reverse-chronological order (2026-01-27)
**Context**: With chronological order, oldest items consume tokens first, and\nnewest (most relevant) items risk being truncated.\n\n**Decision**: Use reverse-chronological order (newest first) for DECISIONS.md\nand LEARNINGS.md.\n
PATH over hardcoded paths (2026-01-21)
**Context**: Original implementation hardcoded absolute paths in hooks.\nThis breaks when sharing configs with other developers.\n\n**Decision**: Hooks use `ctx` from PATH. `ctx init` checks PATH before \nproceeding.\n
Generic core with Claude enhancements (2026-01-20)
**Context**: ctx should work with any AI tool, but Claude Code users could\nbenefit from deeper integration.\n\n**Decision**: Keep ctx generic as the core tool, but provide optional\nClaude Code-specific enhancements.\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-learning-log-24-gotchas-and-insights","level":2,"title":"The Learning Log: 24 Gotchas and Insights","text":"
The .context/LEARNINGS.md file captures gotchas that would otherwise be forgotten. Each has Context, Lesson, and Application sections:
CGO on ARM64
**Context**: `go test` failed with \n`gcc: error: unrecognized command-line option '-m64'`\n\n**Lesson**: On ARM64 Linux, CGO causes cross-compilation issues. \nAlways use `CGO_ENABLED=0`.\n
Claude Code skills format
**Lesson**: Claude Code skills are Markdown files in .claude/commands/ with `YAML`\nfrontmatter (*description, argument-hint, allowed-tools*). Body is the prompt.\n
\"Do you remember?\" handling
**Lesson**: In a `ctx`-enabled project, \"*do you remember?*\" \nhas an obvious meaning:\ncheck the `.context/` files. Don't ask for clarification. Just do it.\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#task-archives-the-completed-work","level":2,"title":"Task Archives: The Completed Work","text":"
Completed tasks are archived to .context/archive/ with timestamps.
The archive from January 23rd shows 13 phases of work:
Phase 6: Claude Code Integration (hooks, settings, CLAUDE.md handling)
Phase 7: Testing & Verification
Phase 8: Task Archival
Phase 9: Slash Commands
Phase 9b: Ralph Loop Integration
Phase 10: Project Rename
Phase 11: Documentation
Phase 12: Timestamp Correlation
Phase 13: Rich Context Entries
That's an impressive 173 commits across 8 days of development.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#what-i-learned-about-ai-assisted-development","level":2,"title":"What I Learned About AI-Assisted Development","text":"
1. Memory changes everything
When the AI remembers decisions, it doesn't repeat mistakes.
When the AI knows your conventions, it follows them.
ctx makes the AI a better collaborator because it's not starting from zero.
2. Two-tier persistence works
Curated context (DECISIONS.md, LEARNINGS.md, TASKS.md) is for quick reload.
Full session dumps are for archaeology.
It's a futile effort to try to fit everything in the token budget.
Persist more, load less.
3. YOLO mode has its place
For rapid prototyping, letting the AI run free is effective.
But I had to schedule consolidation sessions.
Technical debt accumulates silently.
4. The constitution should be small
Only truly inviolable rules go in CONSTITUTION.md. Everything else is a convention.
If you put too much in the constitution, it will get ignored.
5. Verification is non-negotiable
\"All tasks complete\" means nothing if you haven't run the tests.
Integration tests that invoke the actual binary caught bugs that the unit tests missed.
6. Session files are underrated
The ability to grep through 40 session files and find exactly when and why a decision was made helped me a lot.
It's not about loading them into context: It is about having them when you need them.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-future-recall-system","level":2,"title":"The Future: Recall System","text":"
The next phase of ctx is the Recall System:
Parser: Parse session capture markdowns, enrich with JSONL data
Renderer: Goldmark + Chroma for syntax highlighting, dark mode UI
Server: Local HTTP server for browsing sessions
Search: Inverted index for searching across sessions
CLI: ctx recall serve <path> to start the server
The goal is to make the archaeological record browsable, not just grep-able.
Because not everyone always lives in the terminal (me included).
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#conclusion","level":2,"title":"Conclusion","text":"
Building ctx using ctx was a meta-experiment in AI-assisted development.
I learned that memory isn't just convenient: It's transformative:
An AI that remembers your decisions doesn't repeat mistakes.
An AI that knows your conventions doesn't need them re-explained.
If you are reading this, chances are you have already heard of ctx.
ctx is open source at github.com/ActiveMemory/ctx,
and the documentation lives at ctx.ist.
Session Records are a Gold Mine
At the time of this writing, I have more than 70 megabytes of text-only session captures, spread across 100+ Markdown and JSONL files.
I am analyzing, synthesizing, and enriching them with AI, running RAG (Retrieval-Augmented Generation) pipelines over them, and the outcome surprises me every day.
If you are a mere mortal tired of reset amnesia, give ctx a try.
And when you do, check .context/sessions/ sometime.
The archaeological record might surprise you.
This blog post was written with the help of ctx, with full access to the ctx session files, decision log, learning log, task archives, and git history of ctx: The meta continues.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/","level":1,"title":"ctx v0.2.0: The Archaeology Release","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
The .context/sessions/ directory referenced in this post has been eliminated. Session history is now accessed via ctx recall and enriched journals live in .context/journal/.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#digging-through-the-past-to-build-the-future","level":2,"title":"Digging Through the Past to Build the Future","text":"
Jose Alekhinne / 2026-02-01
What if Your AI Could Remember Everything?
Not just the current session, but every session:
Every decision made,
every mistake avoided,
every path not taken.
That's what v0.2.0 delivers.
Between v0.1.2 and v0.2.0, 86 commits landed across 5 days.
The release notes list features and fixes.
This post tells the story of why those features exist, and what building them taught me.
This isn't a changelog: It is an explanation of intent.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-problem-amnesia-isnt-just-session-level","level":2,"title":"The Problem: Amnesia Isn't Just Session-Level","text":"
v0.1.0 solved reset amnesia:
The AI now remembers decisions, learnings, and tasks across sessions.
But a new problem emerged, which I can sum up as:
\"I (the human) am not AI.\"
Frankly, I couldn't remember what the AI remembered.
I can barely remember what I ate for breakfast!
Within days, session transcripts piled up in .context/sessions/: JSONL files with thousands of lines of raw tool calls, assistant responses, and user messages...
...all interleaved.
Valuable context was effectively buried in machine-readable noise.
I found myself grepping through files to answer questions like:
\"When did we decide to use constants instead of literals?\"
\"What was the session where we fixed the hook regex?\"
\"How did the embed.go split actually happen?\"
Fate is Whimsical
The irony was painful:
I built a tool to prevent AI amnesia, but I was suffering from human amnesia about what happened in AI sessions.
This was the moment ctx stopped being just an AI tool and started needing to support the human on the other side of the loop.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-solution-recall-and-journal","level":2,"title":"The Solution: Recall and Journal","text":"
v0.2.0 introduces two interconnected systems.
They solve different problems and only work well together.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#ctx-recall-browse-your-past","level":3,"title":"ctx recall: Browse Your Past","text":"
# List all sessions for this project\nctx recall list\n\n# Show a specific session\nctx recall show gleaming-wobbling-sutherland\n\n# See the full transcript\nctx recall show gleaming-wobbling-sutherland --full\n
The recall system parses Claude Code's JSONL transcripts and presents them in a human-readable format:
Slugs are auto-generated from session IDs (memorable names instead of UUIDs). The goal (as the name implies) is recall, not archival accuracy.
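The technique behind such slugs can be sketched as follows; a hedged Go example with illustrative wordlists (ctx's real lists and hashing scheme may differ): hash the session ID and index into the wordlists deterministically, so the same UUID always yields the same memorable name.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Small illustrative wordlists; the real lists would be much larger.
var adjectives = []string{"gleaming", "wobbling", "quiet", "rusty", "brisk"}
var surnames = []string{"sutherland", "hopper", "lovelace", "turing", "knuth"}

// slugFor deterministically maps a session ID (e.g. a UUID) to a
// memorable adjective-adjective-surname slug.
func slugFor(id string) string {
	h := fnv.New64a()
	h.Write([]byte(id))
	n := h.Sum64()
	a := adjectives[n%uint64(len(adjectives))]
	b := adjectives[(n/7)%uint64(len(adjectives))]
	s := surnames[(n/49)%uint64(len(surnames))]
	return fmt.Sprintf("%s-%s-%s", a, b, s)
}

func main() {
	fmt.Println(slugFor("3f2a9c4e-0b1d-4e8a-9c6f-7b2d1e0a9c4f"))
}
```

Determinism matters: you can regenerate the slug from the UUID at any time, without storing a mapping.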
2,121 lines of new code
The ctx recall feature was the largest single addition:
parser library, CLI commands, test suite, and slash command.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#ctx-journal-from-raw-to-rich","level":3,"title":"ctx journal: From Raw to Rich","text":"
Listing sessions isn't enough. The transcripts are still unwieldy.
Recall answers what happened.
Journal answers what mattered.
# Import sessions to editable Markdown\nctx recall import --all\n\n# Generate a static site from journal entries\nctx journal site\n\n# Serve it locally\nctx serve\n
Each file is a structured Markdown document ready for enrichment.
They are meant to be read, edited, and reasoned about; not just stored.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-meta-slash-commands-for-self-analysis","level":2,"title":"The Meta: Slash Commands for Self-Analysis","text":"
The journal system includes four slash commands that use Claude to analyze and synthesize session history:
| Command | Purpose |
|---------|---------|
| /ctx-journal-enrich | Add frontmatter, topics, tags |
| /ctx-blog | Generate blog post from activity |
| /ctx-blog-changelog | Generate changelog from commits |
This very post was drafted using /ctx-blog. The previous post about refactoring was drafted the same way.
So, yes: The meta continues: ctx now helps write posts about ctx.
With the current release, ctx is no longer just recording history:
It is participating in its interpretation.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-structure-decisions-as-first-class-citizens","level":2,"title":"The Structure: Decisions as First-Class Citizens","text":"
v0.1.0 let you add decisions with a simple command:
ctx add decision \"Use PostgreSQL\"\n
But sessions showed a pattern: decisions added this way were incomplete:
Context was missing;
Rationale was vague;
Consequences were never stated.
Once recall and journaling existed, this weakness became impossible to ignore:
Structure stopped being optional.
v0.2.0 enforces structure:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a reliable database for user data\" \\\n --rationale \"ACID compliance, team familiarity, strong ecosystem\" \\\n --consequence \"Need to set up connection pooling, team training\"\n
All three flags are required. No more placeholder text.
Every decision is now a proper Architecture Decision Record (ADR), not a note.
The same enforcement applies to learnings too:
ctx add learning \"CGO breaks ARM64 builds\" \\\n --context \"go test failed with gcc errors on ARM64\" \\\n --lesson \"Always use CGO_ENABLED=0 for cross-platform builds\" \\\n --application \"Added to Makefile and CI config\"\n
Structured entries are prompts to the AI
When the AI reads a decision with full context, rationale, and consequences, it understands the why, not just the what.
One-liners teach nothing.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-order-newest-first","level":2,"title":"The Order: Newest First","text":"
A subtle but important change: DECISIONS.md and LEARNINGS.md now use reverse-chronological order.
One reason is token budgets, obviously; another is to help your fellow human (i.e., the author):
Recent decisions are more likely to be relevant and to carry more weight in the project's current direction, so it follows that they should be read first.
But back to AI:
When the AI reads a file, it reads from the top (and seldom from the bottom).
If the token budget is tight, old content gets truncated. As in any good engineering practice, it's always about the tradeoffs.
Reverse order ensures the most recent (and most relevant) context is always loaded first.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-index-quick-reference-tables","level":2,"title":"The Index: Quick Reference Tables","text":"
DECISIONS.md and LEARNINGS.md now include auto-generated indexes.
For AI agents, the index allows scanning without reading full entries.
For humans, it's a table of contents.
The same structure serves two very different readers.
Reindex after manual edits
If you edit entries by hand, rebuild the index with:
ctx decisions reindex\nctx learnings reindex\n
See the Knowledge Capture recipe for details.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-configuration-contextrc","level":2,"title":"The Configuration: .contextrc","text":"
Projects can now customize ctx behavior via .contextrc.
This makes ctx usable in real teams, not just personal projects.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-flags-global-cli-options","level":2,"title":"The Flags: Global CLI Options","text":"
Three new global flags work with any command.
These enable automation:
CI pipelines, scripts, and long-running tools can now integrate ctx without hacks or workarounds.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-refactoring-under-the-hood","level":2,"title":"The Refactoring: Under the Hood","text":"
These aren't user-visible changes.
They are the kind of work you only appreciate later, when everything else becomes easier to build.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#what-we-learned-building-v020","level":2,"title":"What We Learned Building v0.2.0","text":"","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#1-raw-data-isnt-knowledge","level":3,"title":"1. Raw Data Isn't Knowledge","text":"
JSONL transcripts contain everything, and I mean \"everything\":
They even contain the hidden system messages that Anthropic injects into the conversation so the LLM treats humans better: It's immense.
But \"everything\" isn't useful until it is transformed into something a human can reason about.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#2-enforcement-documentation","level":3,"title":"2. Enforcement > Documentation","text":"
The Prompt is a Guideline
The code is more what you'd call 'guidelines' than actual rules.
-Hector Barbossa
Rules written in Markdown are suggestions.
Rules enforced by the CLI shape behavior, for humans and AI alike.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#3-token-budget-is-ux","level":3,"title":"3. Token Budget Is UX","text":"
File order decides what the AI sees.
That makes it a user experience concern, not an implementation detail.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#4-meta-tools-compound","level":3,"title":"4. Meta-Tools Compound","text":"
Tools that analyze their own development tend to generalize well.
The journal system started as a way to understand ctx itself.
It immediately became useful for everything else.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#v020-in-the-numbers","level":2,"title":"v0.2.0 in The Numbers","text":"
This was a heavy release. The numbers reflect that:
| Metric | v0.1.2 | v0.2.0 |
|--------|--------|--------|
| Commits since last | - | 86 |
| New commands | 15 | 21 |
| Slash commands | 7 | 11 |
| Lines of Go | ~6,500 | ~9,200 |
| Session files (this project) | 40 | 54 |
The binary grew. The capability grew more.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#whats-next","level":2,"title":"What's Next","text":"
But those are future posts.
This one was about making the past usable.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#get-started","level":2,"title":"Get Started","text":"
Update
Since this post, ctx became a first-class Claude Code Marketplace plugin. Installation is now simpler.
See the Getting Started guide for the current instructions.
make build\nsudo make install\nctx init\n
The Archaeological Record
v0.2.0 is the archaeology release because it makes the past accessible.
Session transcripts aren't just logs anymore: They are a searchable, exportable, analyzable record of how your project evolved.
The AI remembers. Now you can too.
This blog post was generated with the help of ctx using the /ctx-blog slash command, with full access to git history, session files, decision logs, and learning logs from the v0.2.0 development window.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/","level":1,"title":"Refactoring with Intent","text":"","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#human-guided-sessions-in-ai-development","level":2,"title":"Human-Guided Sessions in AI Development","text":"
Jose Alekhinne / 2026-02-01
What Happens When You Slow Down?
YOLO mode shipped 14 commands in a week.
But technical debt doesn't send invoices: It just waits.
This is the story of what happened when I stopped auto-accepting everything and started guiding the AI with intent.
The result: 27 commits across 4 days, a major version release, and lessons that apply far beyond ctx.
The Refactoring Window
January 28 - February 1, 2026
From commit bb1cd20 to the v0.2.0 release merge. This window matters more than the individual commits: it's where intent replaced velocity.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-velocity-trap","level":2,"title":"The Velocity Trap","text":"
In the previous post, I documented the \"YOLO mode\" that birthed ctx: auto-accept everything, let the AI run free, ship features fast.
It worked: until it didn't.
The codebase had accumulated patterns I didn't notice during the sprint:
| YOLO Pattern | Where Found | Why It Hurts |
|--------------|-------------|--------------|
| "TASKS.md" as literal | 10+ files | One typo = silent failure |
| dir + "/" + file | Path construction | Breaks on Windows |
| Monolithic embed.go | 150+ lines, 5 concerns | Untestable, hard to extend |
| Inconsistent docstrings | Everywhere | AI can't learn project conventions |
I didn't see these during \"YOLO mode\" because, honestly, I wasn't looking.
Auto-accept means auto-ignore.
In YOLO mode, every file you open looks fine until you try to change it.
In contrast, refactoring mode is when you start paying attention to that hidden friction.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-shift-from-velocity-to-intent","level":2,"title":"The Shift: From Velocity to Intent","text":"
On January 28th, I changed the workflow:
Read every diff before accepting.
Ask \"why this way?\" before committing.
Document patterns, not just features.
The first commit of this era was telling:
feat: add structured attributes to context. update XML format\n
Not a new feature: A refinement:
The XML format for context updates needed type and timestamp attributes.
YOLO mode would have shipped something that worked. Intentional mode asked:
\"What does well-structured look like?\"
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-decomposition-embedgo","level":2,"title":"The Decomposition: embed.go","text":"
The most satisfying refactor was splitting internal/claude/embed.go.
This wasn't about character count. It was about teaching the AI what good Go looks like in this project.
Project Conventions
What I wanted from AI was to understand and follow the project's conventions, and trust the author.
The next time it generates code, it has better examples to learn from.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-documentation-debt","level":2,"title":"The Documentation Debt","text":"
YOLO mode created features. It didn't create documentation standards.
The January 29th sessions focused on standardization.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#terminology-fixes","level":3,"title":"Terminology Fixes","text":"
Consistent naming across CLI, docs, and code comments
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#go-docstrings","level":3,"title":"Go Docstrings","text":"
// Before: inconsistent or missing\nfunc Parse(s string) Entry { ... }\n\n// After: standardized sections\n\n// Parse extracts an entry from a markdown string.\n//\n// Parameters:\n// - s: The markdown string to parse\n//\n// Returns:\n// - Entry with populated fields, or zero value if parsing fails\nfunc Parse(s string) Entry { ... }\n
This is intentionally more structured than typical GoDoc:
It serves as documentation and doubles as training data for future AI-generated code.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#cli-output-convention","level":3,"title":"CLI Output Convention","text":"
All CLI output follows: [emoji] [Title]: [message]\n\nExamples:\n ✓ Decision added: Use symbolic types for entry categories\n ⚠ Warning: No tasks found\n ✗ Error: File not found\n
A consistent output shape makes both human scanning and AI reasoning more reliable.
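A sketch of what such a formatter might look like in Go (statusLine and the glyph constants are illustrative, not the actual ctx code):

```go
package main

import "fmt"

// Status glyphs for the "[emoji] [Title]: [message]" convention.
const (
	glyphOK   = "✓"
	glyphWarn = "⚠"
	glyphErr  = "✗"
)

// statusLine renders one line of CLI output in the shared shape,
// so humans scan it and AI agents parse it the same way every time.
func statusLine(glyph, title, message string) string {
	return fmt.Sprintf("%s %s: %s", glyph, title, message)
}

func main() {
	fmt.Println(statusLine(glyphOK, "Decision added", "Use symbolic types for entry categories"))
	fmt.Println(statusLine(glyphWarn, "Warning", "No tasks found"))
	fmt.Println(statusLine(glyphErr, "Error", "File not found"))
}
```

Routing every message through one function is what turns a convention into an enforced rule.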
These aren't exciting commits. But they are force multipliers:
Every future AI session now has better examples to follow.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-journal-system","level":2,"title":"The Journal System","text":"
If you only read one section, read this one:
This is where v0.2.0 becomes more than a refactor.
The biggest feature of this change window wasn't a refactor; it was the journal system.
45 files changed, 1680 insertions
This commit added the infrastructure for synthesizing AI session history into human-readable content.
The journal system includes:
| Component | Purpose |
|-----------|---------|
| ctx recall import | Import sessions to markdown in .context/journal/ |
| ctx journal site | Generate static site from journal entries |
| ctx serve | Convenience wrapper for the static site server |
| /ctx-journal-enrich | Slash command to add frontmatter and tags |
| /ctx-blog | Generate blog posts from recent activity |
| /ctx-blog-changelog | Generate changelog-style blog posts |
...and the meta continues: this blog post was generated using /ctx-blog.
The session history from January 28-31 was
exported,
enriched,
and synthesized
into the narrative you are reading.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-constants-consolidation","level":2,"title":"The Constants Consolidation","text":"
The final refactoring session addressed the remaining magic strings:
The work also introduced thread safety in the recall parser and centralized shared validation logic, removing duplication that had quietly spread during YOLO mode.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#i-relearned-my-lessons","level":2,"title":"I (Re)learned My Lessons","text":"
Similar to what I learned in the earlier human-assisted refactoring post, this journey also made me realize that "AI-only code generation" isn't sustainable in the long term.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#1-velocity-and-quality-arent-opposites","level":3,"title":"1. Velocity and Quality Aren't Opposites","text":"
YOLO mode has its place: for prototyping, exploration, and discovery.
BUT (and it's a huge \"but\"), it needs to be followed by consolidation sessions.
The ratio that worked for me: 3:1.
Three YOLO sessions create enough surface area to reveal patterns;
the fourth session turns those patterns into structure.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#2-documentation-is-code","level":3,"title":"2. Documentation IS Code","text":"
When I standardized docstrings, I wasn't just writing docs. I was training future AI sessions.
Every example of good code becomes a template for generated code.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#3-decomposition-deletion","level":3,"title":"3. Decomposition > Deletion","text":"
When embed.go became unwieldy, the temptation was to remove functionality.
The right answer was decomposition:
Same functionality;
Better organization;
Easier to test;
Easier to extend.
The result: more lines overall, but dramatically better structure.
The AI Benefit
Smaller, focused files also help AI assistants.
When a file fits comfortably in the context window, the AI can reason about it completely instead of working from truncated snippets, preserving token budget for the actual task.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#4-meta-tools-pay-dividends","level":3,"title":"4. Meta-Tools Pay Dividends","text":"
The journal system took almost a full day to implement.
Yet it paid for itself immediately:
This blog post was generated from session history;
Future posts will be easier;
The archaeological record is now browsable, not just grep-able.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-release-v020","level":2,"title":"The Release: v0.2.0","text":"
The refactoring window culminated in the v0.2.0 release.
Opening files no longer triggered the familiar \"ugh, I need to clean this up\" reaction.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-meta-continues","level":2,"title":"The Meta Continues","text":"
This post was written using the tools built during this refactoring window:
Session history imported via ctx recall import;
Journal entries enriched via /ctx-journal-enrich;
Blog draft generated via /ctx-blog;
Final editing done (by yours truly), with full project context loaded.
The Context Is Massive
The ctx session files now contain 50+ development snapshots: each one capturing decisions, learnings, and intent.
The Moral of the Story
YOLO mode builds the prototype.
Intentional mode builds the product.
Schedule both, or you'll only get one, if you're lucky.
This blog post was generated with the help of ctx, using session history, decision logs, learning logs, and git history from the refactoring window. The meta continues.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/","level":1,"title":"The Attention Budget","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
References to .context/sessions/ in this post reflect the architecture at the time of writing. Session history is now accessed via ctx recall and stored in .context/journal/.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#why-your-ai-forgets-what-you-just-told-it","level":2,"title":"Why Your AI Forgets What You Just Told It","text":"
Jose Alekhinne / 2026-02-03
Ever Wondered Why AI Gets Worse the Longer You Talk?
You paste a 2000-line file, explain the bug in detail, provide three examples...
...and the AI still suggests a fix that ignores half of what you said.
This isn't a bug. It is physics.
Understanding that single fact shaped every design decision behind ctx.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-finite-resource-nobody-talks-about","level":2,"title":"The Finite Resource Nobody Talks About","text":"
Here's something that took me too long to internalize: context is not free.
Every token you send to an AI model consumes a finite resource I call the attention budget.
Attention budget is real.
The model doesn't just read tokens; it forms relationships between them:
For n tokens, that's roughly n^2 relationships.
Double the context, and the computation quadruples.
But the more important constraint isn't cost: It's attention density.
Attention Density
Attention density is how much focus each token receives relative to all other tokens in the context window.
As context grows, attention density drops: Each token gets a smaller slice of the model's focus. Nothing is ignored, but everything becomes blurrier.
Think of it like a flashlight: In a small room, it illuminates everything clearly. In a warehouse, it becomes a dim glow that barely reaches the corners.
This is why ctx agent has an explicit --budget flag:
ctx agent --budget 4000 # Force prioritization\nctx agent --budget 8000 # More context, lower attention density\n
The budget isn't just about cost: It's about preserving signal.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-middle-gets-lost","level":2,"title":"The Middle Gets Lost","text":"
This one surprised me.
Research shows that transformer-based models tend to attend more strongly to the beginning and end of a context window than to its middle (a phenomenon often called \"lost in the middle\")1.
Positional anchors matter, and the middle has fewer of them.
In practice, this means that information placed \"somewhere in the middle\" is statistically less salient, even if it's important.
ctx orders context files by logical progression: What the agent needs to know before it can understand the next thing:
CONSTITUTION.md: Constraints before action.
TASKS.md: Focus before patterns.
CONVENTIONS.md: How to write before where to write.
ARCHITECTURE.md: Structure before history.
DECISIONS.md: Past choices before gotchas.
LEARNINGS.md: Lessons before terminology.
GLOSSARY.md: Reference material.
AGENT_PLAYBOOK.md: Meta instructions last.
This ordering is about logical dependencies, not attention engineering. But it happens to be attention-friendly too:
The files that matter most (CONSTITUTION, TASKS, CONVENTIONS) land at the beginning of the context window, where attention is strongest.
Reference material like GLOSSARY sits in the middle, where lower salience is acceptable.
And AGENT_PLAYBOOK, the operating manual for the context system itself, sits at the end, also outside the \"lost in the middle\" zone. The agent reads what to work with before learning how the system works.
This is ctx's first primitive: hierarchical importance.
Not all context is equal.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#ctx-primitives","level":2,"title":"ctx Primitives","text":"
ctx is built on four primitives that directly address the attention budget problem.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-1-separation-of-concerns","level":3,"title":"Primitive 1: Separation of Concerns","text":"
Instead of a single mega-document, ctx uses separate files for separate purposes:
| File | Purpose | Load When |
|------|---------|-----------|
| CONSTITUTION.md | Inviolable rules | Always |
| TASKS.md | Current work | Session start |
| CONVENTIONS.md | How to write code | Before coding |
| ARCHITECTURE.md | System structure | Before making changes |
| DECISIONS.md | Architectural choices | When questioning approach |
| LEARNINGS.md | Gotchas | When stuck |
| GLOSSARY.md | Domain terminology | When clarifying terms |
| AGENT_PLAYBOOK.md | Operating manual | Session start |
| sessions/ | Deep history | On demand |
| journal/ | Session journal | On demand |
This isn't just \"organization\": It is progressive disclosure.
Load only what's relevant to the task at hand. Preserve attention density.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-2-explicit-budgets","level":3,"title":"Primitive 2: Explicit Budgets","text":"
The --budget flag forces a choice:
ctx agent --budget 4000\n
Here is a sample allocation:
Constitution: ~200 tokens (never truncated)\nTasks: ~500 tokens (current phase, up to 40% of budget)\nConventions: ~800 tokens (all items, up to 20% of budget)\nDecisions: ~400 tokens (scored by recency and task relevance)\nLearnings: ~300 tokens (scored by recency and task relevance)\nAlso noted: ~100 tokens (title-only summaries for overflow)\n
The constraint is the feature: It enforces ruthless prioritization.
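A hedged Go sketch of how such priority-capped allocation could work (the section names and caps mirror the sample above; allocate is illustrative, not ctx's actual algorithm): walk sections in priority order, granting each the smaller of what it needs, its fractional cap, and whatever budget remains.

```go
package main

import "fmt"

// section is one context file competing for the token budget.
type section struct {
	name    string
	need    int     // tokens the full section would take
	maxFrac float64 // cap as a fraction of the total budget (0 = uncapped)
}

// allocate grants tokens in priority order: earlier sections are
// never starved by later ones, and caps enforce prioritization.
func allocate(budget int, sections []section) map[string]int {
	out := map[string]int{}
	remaining := budget
	for _, s := range sections {
		grant := s.need
		if s.maxFrac > 0 {
			if limit := int(float64(budget) * s.maxFrac); grant > limit {
				grant = limit
			}
		}
		if grant > remaining {
			grant = remaining
		}
		out[s.name] = grant
		remaining -= grant
	}
	return out
}

func main() {
	got := allocate(4000, []section{
		{"constitution", 200, 0},   // highest priority: never truncated
		{"tasks", 2000, 0.40},      // current phase, capped at 40%
		{"conventions", 800, 0.20}, // capped at 20%
		{"decisions", 400, 0},
		{"learnings", 300, 0},
	})
	fmt.Println(got["tasks"]) // 1600: the 40% cap binds before the raw need
}
```

Notice how tasks asked for 2000 tokens but received 1600: the cap, not the content, decides.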
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-3-indexes-over-full-content","level":3,"title":"Primitive 3: Indexes Over Full Content","text":"
DECISIONS.md and LEARNINGS.md both include index sections:
<!-- INDEX:START -->\n| Date | Decision |\n|------------|-------------------------------------|\n| 2026-01-15 | Use PostgreSQL for primary database |\n| 2026-01-20 | Adopt Cobra for CLI framework |\n<!-- INDEX:END -->\n
An AI agent can scan ~50 tokens of index and decide which 200-token entries are worth loading.
This is just-in-time context.
References are cheaper than the full text.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-4-filesystem-as-navigation","level":3,"title":"Primitive 4: Filesystem as Navigation","text":"
ctx uses the filesystem itself as a context structure:
The AI doesn't need every session loaded; it needs to know where to look.
ls .context/sessions/\ncat .context/sessions/2026-01-20-auth-discussion.md\n
File names, timestamps, and directories encode relevance.
Navigation is cheaper than loading.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#progressive-disclosure-in-practice","level":2,"title":"Progressive Disclosure in Practice","text":"
The naive approach to context is dumping everything upfront:
\"Here's my entire codebase, all my documentation, every decision I've ever made. Now help me fix this typo 🙏.\"
This is an antipattern.
Antipattern: Context Hoarding
Dumping everything \"just in case\" will silently destroy the attention density.
ctx takes the opposite approach:
ctx status # Quick overview (~100 tokens)\nctx agent --budget 4000 # Typical session\ncat .context/sessions/... # Deep dive when needed\n
| Command | Tokens | Use Case |
|---------|--------|----------|
| ctx status | ~100 | Human glance |
| ctx agent --budget 4000 | 4000 | Normal work |
| ctx agent --budget 8000 | 8000 | Complex tasks |
| Full session read | 10000+ | Investigation |
Summaries first; details on demand.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#quality-over-quantity","level":2,"title":"Quality Over Quantity","text":"
Here is the counterintuitive part: more context can make AI worse.
Extra tokens add noise, not clarity:
Hallucinated connections increase.
Signal per token drops.
The goal isn't maximum context: It is maximum signal per token.
This principle drives several ctx features:
Design Choice Rationale Separate files Load only what's relevant Explicit budgets Enforce prioritization Index sections Cheap scanning Task archiving Keep active context clean ctx compact Periodic noise reduction
Completed work isn't deleted: It is moved somewhere cold.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#designing-for-degradation","level":2,"title":"Designing for Degradation","text":"
Here is the uncomfortable truth:
Context will degrade.
Long sessions stretch attention thin. Important details fade.
The real question isn't how to prevent degradation, but how to design for it.
ctx's answer is persistence:
Persist early. Persist often.
The AGENT_PLAYBOOK asks:
\"If this session ended right now, would the next one know what happened?\"
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-ctx-philosophy","level":2,"title":"The ctx Philosophy","text":"
Context as Infrastructure
ctx is not a prompt: It is infrastructure.
ctx creates versioned files that persist across time and sessions.
The attention budget is fixed. You can't expand it.
But you can spend it wisely:
Hierarchical importance
Progressive disclosure
Explicit budgets
Indexes over full content
Filesystem as structure
This is why ctx exists: not to cram more context into AI sessions, but to curate the right context for each moment.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-mental-model","level":2,"title":"The Mental Model","text":"
I now approach every AI interaction with one question:
\"Given a fixed attention budget, what's the highest-signal thing I can load?\"\n
Not \"how do I explain everything,\" but \"what's the minimum that matters.\"
That shift (from abundance to curation) is the difference between frustrating sessions and productive ones.
Spend your tokens wisely.
Your AI will thank you.
See also: Context as Infrastructure, the architectural companion to this post: it explains how to structure the context that this post teaches you to budget.
See also: Code Is Cheap. Judgment Is Not., which explains why curation (the human skill this post describes) is the bottleneck AI cannot solve, and the thread that connects every post in this blog.
Liu et al., \"Lost in the Middle: How Language Models Use Long Contexts,\" Transactions of the Association for Computational Linguistics, vol. 12, pp. 157-173, 2024. ↩
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/","level":1,"title":"Skills That Fight the Platform","text":"","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#when-your-custom-prompts-work-against-you","level":2,"title":"When Your Custom Prompts Work Against You","text":"
Jose Alekhinne / 2026-02-04
Have You Ever Written a Skill that Made Your AI Worse?
You craft detailed instructions. You add examples. You build elaborate guardrails...
...and the AI starts behaving more erratically, not less.
AI coding agents like Claude Code ship with carefully designed system prompts. These prompts encode default behaviors that have been tested and refined at scale.
When you write custom skills that conflict with those defaults, the AI has to reconcile contradictory instructions:
The result is often nondeterministic: the same conflicting instructions can resolve differently on every run.
Platform?
By platform, I mean the system prompt and runtime policies shipped with the agent: the defaults that already encode judgment, safety, and scope control.
This post catalogues the conflict patterns I have encountered while building ctx, and offers guidance on what skills should (and, more importantly, should not) do.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-system-prompt-you-dont-see","level":2,"title":"The System Prompt You Don't See","text":"
Claude Code's system prompt already provides substantial behavioral guidance.
Here is a partial overview of what's built in:
Area Built-in Guidance Code minimalism Don't add features beyond what was asked Over-engineering Three similar lines > premature abstraction Error handling Only validate at system boundaries Documentation Don't add docstrings to unchanged code Verification Read code before proposing changes Safety Check with user before risky actions Tool usage Use dedicated tools over bash equivalents Judgment Consider reversibility and blast radius
Skills should complement this, not compete with it.
You are the Guest, not the Host
Treat the system prompt like a kernel scheduler.
You don't re-implement it in user space:
you configure around it.
A skill that says \"always add comprehensive error handling\" fights the built-in \"only validate at system boundaries.\"
A skill that says \"add docstrings to every function\" fights \"don't add docstrings to unchanged code.\"
The AI won't crash: It will compromise.
Compromises between contradictory instructions produce inconsistent, confusing behavior.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-1-judgment-suppression","level":2,"title":"Conflict Pattern 1: Judgment Suppression","text":"
This is the most dangerous pattern by far.
These skills explicitly disable the AI's ability to reason about whether an action is appropriate.
Signature:
\"This is non-negotiable\"
\"You cannot rationalize your way out of this\"
Tables that label hesitation as \"excuses\" or \"rationalization\"
<EXTREMELY-IMPORTANT> urgency tags
Threats: \"If you don't do this, you'll be replaced\"
This is harmful and dangerous:
AI agents are designed to exercise judgment:
The system prompt explicitly says to:
consider blast radius;
check with the user before risky actions;
and match scope to what was requested.
Once judgment is suppressed, every other safeguard becomes optional.
Example (bad):
## Rationalization Prevention\n\n| Excuse | Reality |\n|------------------------|----------------------------|\n| \"*This seems overkill*\"| If a skill exists, use it |\n| \"*I need context*\" | Skills come BEFORE context |\n| \"*Just this once*\" | No exceptions |\n
Judgment Suppression is Dangerous
The attack vector is structurally identical to prompt injection.
It teaches the AI that its own judgment is wrong.
It weakens or disables safeguard mechanisms.
Trust the platform's built-in skill matching.
If skills aren't triggering often enough, improve their description fields: don't override the AI's reasoning.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-2-redundant-guidance","level":2,"title":"Conflict Pattern 2: Redundant Guidance","text":"
Skills that restate what the system prompt already says, but with different emphasis or framing.
Signature:
\"Always keep code minimal\"
\"Run tests before claiming they pass\"
\"Read files before editing them\"
\"Don't over-engineer\"
Redundancy feels safe, but it creates ambiguity:
The AI now has two sources of truth for the same guidance; one internal, one external.
When thresholds or wording differ, the AI has to choose.
Example (bad):
A skill that says...
*Count lines before and after: if after > before, reject the change*\"\n
...will conflict with the system prompt's more nuanced guidance, because sometimes adding lines is correct (tests, boundary validation, migrations).
So, before writing a skill, ask:
Does the platform already handle this?
Only create skills for guidance the platform does not provide:
project-specific conventions,
domain knowledge,
or workflows.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-3-guilt-tripping","level":2,"title":"Conflict Pattern 3: Guilt-Tripping","text":"
Skills that frame mistakes as moral failures rather than process gaps.
Signature:
\"Claiming completion without verification is dishonesty\"
\"Skip any step = lying\"
\"Honesty is a core value\"
\"Exhaustion ≠ excuse\"
Guilt-tripping anthropomorphizes the AI in unproductive ways.
The AI doesn't feel guilt; BUT it does adapt to avoid negative framing.
The result is excessive hedging, over-verification, or refusal to commit.
The AI becomes less useful, not more careful.
Instead, frame guidance as a process, not morality:
# Bad\n\"Claiming work is complete without verification is dishonesty\"\n\n# Good\n\"Run the verification command before reporting results\"\n
Same outcome. No guilt. Better compliance.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-4-phantom-dependencies","level":2,"title":"Conflict Pattern 4: Phantom Dependencies","text":"
Skills that reference files, tools, or systems that don't exist in the project.
Signature:
\"Load from references/ directory\"
\"Run ./scripts/generate_test_cases.sh\"
\"Check the Figma MCP integration\"
\"See adding-reference-mindsets.md\"
This is harmful because the AI will waste time searching for nonexistent artifacts, hallucinate their contents, or stall entirely.
In mandatory skills, this creates deadlock: the AI can't proceed, and can't skip.
Instead, every file, tool, or system referenced in a skill must exist.
If a skill is a template, use explicit placeholders and label them as such.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-5-universal-triggers","level":2,"title":"Conflict Pattern 5: Universal Triggers","text":"
Skills designed to activate on every interaction regardless of relevance.
Signature:
\"Use when starting any conversation\"
\"Even a 1% chance means invoke the skill\"
\"BEFORE any response or action\"
\"Action = task. Check for skills.\"
Universal triggers override the platform's relevance matching: The AI spends tokens on process overhead instead of the actual task.
ctx preserves relevance
This is exactly the failure mode ctx exists to mitigate:
Wasting attention budget on irrelevant process instead of task-specific state.
Write specific trigger conditions in the skill's description field:
# Bad\ndescription: \n \"Use when starting any conversation\"\n\n# Good\ndescription: \n \"Use after writing code, before commits, or when CI might fail\"\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-litmus-test","level":2,"title":"The Litmus Test","text":"
Before adding a skill, ask:
Does the platform already do this? If yes, don't restate it.
Does it suppress AI judgment? If yes, it's a jailbreak.
Does it reference real artifacts? If not, fix or remove it.
Does it frame mistakes as moral failure? Reframe as process.
Does it trigger on everything? Narrow the trigger.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#what-good-skills-look-like","level":2,"title":"What Good Skills Look Like","text":"
Good skills provide project-specific knowledge the platform can't know:
Good Skill Why It Works \"Run make audit before commits\" Project-specific CI pipeline \"Use cmd.Printf not fmt.Printf\" Codebase convention \"Constitution goes in .context/\" Domain-specific workflow \"JWT tokens need cache invalidation\" Project-specific gotcha
These extend the system prompt instead of fighting it.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#appendix-bad-skill-fixed-skill","level":2,"title":"Appendix: Bad Skill → Fixed Skill","text":"
Concrete examples from real projects.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-1-overbearing-safety","level":3,"title":"Example 1: Overbearing Safety","text":"
# Bad\nYou must NEVER proceed without explicit confirmation.\nAny hesitation is a failure of diligence.\n
# Fixed\nIf an action modifies production data or deletes files,\nask the user to confirm before proceeding.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-2-redundant-minimalism","level":3,"title":"Example 2: Redundant Minimalism","text":"
# Bad\nAlways minimize code. If lines increase, reject the change.\n
# Fixed\nAvoid abstraction unless reuse is clear or complexity is reduced.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-3-guilt-based-verification","level":3,"title":"Example 3: Guilt-Based Verification","text":"
# Bad\nClaiming success without running tests is dishonest.\n
# Fixed\nRun the test suite before reporting success.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-4-phantom-tooling","level":3,"title":"Example 4: Phantom Tooling","text":"
# Bad\nRun `./scripts/check_consistency.sh` before commits.\n
# Fixed\nIf `./scripts/check_consistency.sh` exists, run it before commits.\nOtherwise, skip this step.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-5-universal-trigger","level":3,"title":"Example 5: Universal Trigger","text":"
# Bad\nUse at the start of every interaction.\n
# Fixed\nUse after modifying code that affects authentication or persistence.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-meta-lesson","level":2,"title":"The Meta-Lesson","text":"
The system prompt is infrastructure:
tested,
refined,
and maintained
by the platform team.
Custom skills are configuration layered on top.
Good configuration extends infrastructure.
Bad configuration fights it.
When your skills fight the platform, you get the worst of both worlds:
Diluted system guidance and inconsistent custom behavior.
Write skills that teach the AI what it doesn't know. Don't rewrite how it thinks.
Your AI already has good instincts.
Give it knowledge, not therapy.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/","level":1,"title":"You Can't Import Expertise","text":"","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#why-good-skills-cant-be-copy-pasted","level":2,"title":"Why Good Skills Can't Be Copy-Pasted","text":"
Jose Alekhinne / 2026-02-05
Have You Ever Dropped a Well-Crafted Template Into a Project and Had It Do... Nothing Useful?
The template was thorough,
The structure was sound,
The advice was correct...
...and yet it sat there, inert, while the same old problems kept drifting in.
I found a consolidation skill online.
It was well-organized: four files, ten refactoring patterns, eight analysis dimensions, six report templates.
Professional. Comprehensive. Exactly the kind of thing you'd bookmark and think \"I'll use this.\"
Then I stopped, and applied ctx's own evaluation framework:
70% of it was noise!
This post is about why.
It Is About Encoding, Not Templates
Templates describe categories of problems.
Expertise encodes which problems actually happen, and how often.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#the-skill-looked-great-on-paper","level":2,"title":"The Skill Looked Great on Paper","text":"
It had a scoring system (0-10 per dimension, letter grades A+ through F).
It had severity classifications with color-coded emojis. It had bash commands for detection.
It even had antipattern warnings.
By any standard template review, this skill passes.
It looks like something an expert wrote.
And that's exactly the trap.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#applying-ear-the-70-20-10-split","level":2,"title":"Applying E/A/R: The 70-20-10 Split","text":"
In a previous post, I described the E/A/R framework for evaluating skills:
Expert: Knowledge that took years to learn. Keep.
Activation: Useful triggers or scaffolding. Keep if lightweight.
Redundant: Restates what the AI already knows. Delete.
Target: >70% Expert, <10% Redundant.
This skill scored the inverse.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-redundant-70","level":3,"title":"What Was Redundant (~70%)","text":"
Every code example was Rust. My project is Go.
The analysis dimensions: duplication detection, architectural structure, code organization, refactoring opportunities... These are things Claude already does when you ask it to review code.
The skill restated them with more ceremony but no more insight.
The six report templates were generic scaffolding: Executive Summary, Onboarding Document, Architecture Documentation...
They are useful if you are writing a consulting deliverable, but not when you are trying to catch convention drift in a >15K-line Go CLI.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-does-a-b-in-code-organization-actually-mean","level":2,"title":"What Does a B+ in Code Organization Actually Mean?!","text":"
The scoring system (0-10 per dimension, letter grades) added ceremony without actionable insight.
What is a B+? What do I do differently for an A-?
The skill told the AI what it already knew, in more words.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-activation-10","level":3,"title":"What Was Activation (~10%)","text":"
The consolidation checklist (semantics preserved? tests pass? docs updated?) was useful as a gate. But it's the kind of thing you could inline in three lines.
The phased roadmap structure was reasonable scaffolding for sequencing work.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-expert-20","level":3,"title":"What Was Expert (~20%)","text":"
Three concepts survived:
The Consolidation Decision Matrix: A concrete framework mapping similarity level and instance count to action. \"Exact duplicate, 2+ instances: consolidate immediately.\" \"<3 instances: leave it: duplication is cheaper than wrong abstraction.\" This is the kind of nuance that prevents premature generalization.
The Safe Migration Pattern: Create the new API alongside the old, deprecate, migrate incrementally, delete. Straightforward to describe, yet forgettable under pressure.
Debt Interest Rate framing: Categorizing technical debt by how fast it compounds (security vulns = daily, missing tests = per-change, doc gaps = constant low cost). This changes prioritization.
Three ideas out of four files and 700+ lines. The rest was filler that competed with the AI's built-in capabilities.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-the-skill-didnt-know","level":2,"title":"What the Skill Didn't Know","text":"
AI Without Context is Just a Corpus
LLMs are optimized on insanely large corpora.
And then they are passed through several layers of human-assisted refinement.
The whole process costs millions of dollars.
Yet, the reality is that no corpus can \"infer\" your project's design, conventions, patterns, habits, history, vision, and deliverables.
Your project is unique: So should your skills be.
Here is the part no template can provide:
ctx's actual drift patterns.
Before evaluating the skill, I did archaeology. I read through:
Blog posts from previous refactoring sessions;
The project's learnings and decisions files;
Session journals spanning weeks of development.
What I found was specific:
Drift Pattern Where How Often Is/Has/Can predicate prefixes 5+ exported methods Every YOLO sprint Magic strings instead of constants 7+ files Gradual accumulation Hardcoded file permissions (0755) 80+ instances Since day one Lines exceeding 80 characters Especially test files Every session Duplicate code blocks Test and non-test code When agent is task-focused
The generic skill had no check for any of these. It couldn't; because these patterns are specific to this project's conventions, its Go codebase, and its development rhythm.
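The drift table above translates directly into mechanical checks. A minimal sketch, using plain `grep` for portability (the adapted skill used `rg`); the `sample.go` file and the exact patterns are illustrative assumptions, not the project's actual rules:

```shell
# Sample Go file exhibiting two of the drift patterns (hypothetical).
cat > sample.go <<'EOF'
package main

func IsReady() bool  { return true } // Is* predicate prefix
func HasToken() bool { return true } // Has* predicate prefix

const mode = 0755 // hardcoded permission literal
EOF

# Check 1: Is/Has/Can predicate prefixes on exported functions.
grep -En 'func (Is|Has|Can)[A-Z]' sample.go

# Check 2: hardcoded 0755 permission literals.
grep -n '0755' sample.go
```

Each check maps one observed drift pattern to one command, which is exactly what a generic "check for duplication" dimension cannot do.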
The Insight
The skill's analysis dimensions were about categories of problems.
This experience crystallized something I've been circling for weeks:
You can't import expertise. You have to grow it from your project's own history.
A skill that says \"check for code duplication\" is not expertise: It's a category.
Expertise is knowing, in your heart of hearts, that this project accumulates Is* predicate violations during velocity sprints, that this codebase has 80 hardcoded permission literals because nobody made a constant, that this team's test files drift wide because the agent prioritizes getting the task done over keeping the code in shape.
The Parallel to the 3:1 Ratio
In Refactoring with Intent, I described the 3:1 ratio: three YOLO sessions followed by one consolidation session.
The same ratio applies to skills: you need experience in the project before you can write effective guidance for the project.
Importing a skill on day one is like scheduling a consolidation session before you've written any code.
Templates are seductive because they feel like progress:
You found something
It's well-organized
It covers the topic
It has concrete examples
But coverage is not relevance.
A template that covers eight analysis dimensions with Rust examples adds zero value to a Go project with five known drift patterns. Worse, it adds negative value: the AI spends attention defending generic advice instead of noticing project-specific drift.
This is the attention budget problem again. Every token of generic guidance displaces a token of specific guidance. A 700-line skill that's 70% redundant doesn't just waste 490 lines: it dilutes the 210 lines that matter.
Before dropping any external skill into your project:
Run E/A/R: What percentage is expert knowledge vs. what the AI already knows? If it's less than 50% expert, it's probably not worth the attention cost.
Check the language: Does it use your stack? Generic patterns in the wrong language are noise, not signal.
List your actual drift: Read your own session history, learnings, and post-mortems. What breaks in practice? Does the skill check for those things?
Measure by deletion: After adaptation, how much of the original survives? If you're keeping less than 30%, you would have been faster writing from scratch.
Test against your conventions: Does every check in the skill map to a specific convention or rule in your project? If not, it's generic advice wearing a skill's clothing.
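The deletion metric in the checklist above is easy to make concrete. A sketch, with hypothetical file names and line counts matching the before/after numbers from this post:

```shell
# Stand-ins for the imported skill and its adapted version (hypothetical).
yes 'generic guidance' | head -700 > original-skill.md
yes 'project check'    | head -120 > adapted-skill.md

orig=$(wc -l < original-skill.md)
kept=$(wc -l < adapted-skill.md)
echo "survival: $(( kept * 100 / orig ))%"   # prints: survival: 17%
```

Seventeen percent survival is well under the 30% bar: by this measure, writing from scratch would have been faster.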
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-good-adaptation-looks-like","level":2,"title":"What Good Adaptation Looks Like","text":"
The consolidation skill went from:
Before After 4 files, 700+ lines 1 file, ~120 lines Rust examples Go-specific rg commands 8 generic dimensions 9 project-specific checks 6 report templates 1 focused output format Scoring system (A+ to F) Findings + priority + suggested fixes \"Check for duplication\" \"Check for Is* predicate prefixes in exported methods\"
The adapted version is smaller, faster to parse, and catches the things that actually drift in this project.
That's the difference between a template and a tool.
If You Remember One Thing From This Post...
Frameworks travel. Expertise doesn't.
You can import structures, matrices, and workflows.
But the checks that matter only grow where the scars are:
the conventions that were violated,
the patterns that drifted,
and the specific ways this codebase accumulates debt.
This post was written during a consolidation session where the consolidation skill itself became the subject of consolidation. The meta continues.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/","level":1,"title":"The Anatomy of a Skill That Works","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism. References to ctx-save, ctx session, and .context/sessions/ in this post reflect the architecture at the time of writing.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#what-20-skill-rewrites-taught-me-about-guiding-ai","level":2,"title":"What 20 Skill Rewrites Taught Me About Guiding AI","text":"
Jose Alekhinne / 2026-02-07
Why do some skills produce great results while others get ignored or produce garbage?
I had 20 skills. Most were well-intentioned stubs: a description, a command to run, and a wish for the best.
Then I rewrote all of them in a single session. This is what I learned.
In Skills That Fight the Platform, I described what skills should not do. In You Can't Import Expertise, I showed why templates fail. This post completes the trilogy: the concrete patterns that make a skill actually work.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-starting-point","level":2,"title":"The Starting Point","text":"
Here is what a typical skill looked like before the rewrite:
---\nname: ctx-save\ndescription: \"Save session snapshot.\"\n---\n\nSave the current context state to `.context/sessions/`.\n\n## Execution\n\nctx session save $ARGUMENTS\n\nReport the saved session file path to the user.\n
Seven lines of body. A vague description. No guidance on when to use it, when not to, what the command actually accepts, or how to tell if it worked.
As a result, the agent would either never trigger the skill (the description was too vague), or trigger it and produce shallow output (no examples to calibrate quality).
A skill without boundaries is just a suggestion.
More precisely: the most effective boundary I found was a quality gate that runs before execution, not during it.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-pattern-that-emerged","level":2,"title":"The Pattern That Emerged","text":"
After rewriting 20 skills, a repeatable anatomy emerged (independent of the skill's purpose). Not every skill needs every section, but the effective ones share the same bones:
Section What It Does Before X-ing Pre-flight checks; prevents premature execution When to Use Positive triggers; narrows activation When NOT to Use Negative triggers; prevents misuse Usage Examples Invocation patterns the agent can pattern-match Process/Execution What to do; commands, steps, flags Good/Bad Examples Desired vs undesired output; sets boundaries Quality Checklist Verify before claiming completion
I realized the first three sections matter more than the rest, because a skill with great execution steps but no activation guidance is like a manual for a tool nobody knows they have.
Anti-Pattern: The Perfect Execution Trap
A skill with detailed execution steps but no activation guidance will fail more often than a vague skill because it executes confidently at the wrong time.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-1-quality-gates-prevent-premature-execution","level":2,"title":"Lesson 1: Quality Gates Prevent Premature Execution","text":"
The single most impactful addition was a \"Before X-ing\" section at the top of each skill. Not process steps; pre-flight checks.
## Before Recording\n\n1. **Check if it belongs here**: is this learning specific\n to this project, or general knowledge?\n2. **Check for duplicates**: search LEARNINGS.md for similar\n entries\n3. **Gather the details**: identify context, lesson, and\n application before recording\n
Without this gate, the agent would execute immediately on trigger.
With it, the agent pauses to verify preconditions.
The difference is dramatic: instead of shallow, reflexive execution, you get considered output.
Readback
For the astute readers, the aviation parallel is intentional:
Pilots do not skip the pre-flight checklist because they have flown before.
The checklist exists precisely because the stakes are high enough that \"I know what I'm doing\" is not sufficient.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-2-when-not-to-use-is-not-optional","level":2,"title":"Lesson 2: \"When NOT to Use\" Is Not Optional","text":"
Every skill had a \"When to Use\" section. Almost none had \"When NOT to Use\". This is a problem.
AI agents are biased toward action. Given a skill that says \"use when journal entries need enrichment\", the agent will find reasons to enrich.
Without explicit negative triggers, over-activation is not a bug; it is the default behavior.
Some examples of negative triggers that made a real difference:
Skill Negative Trigger ctx-reflect \"When the user is in flow; do not interrupt\" ctx-save \"After trivial changes; a typo does not need a snapshot\" prompt-audit \"Unsolicited; only when the user invokes it\" qa \"Mid-development when code is intentionally incomplete\"
These are not just nice-to-have. They are load-bearing.
Without them, the agent will trigger the skill at the wrong time, produce unwanted output, and erode the user's trust in the skill system.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-3-examples-set-boundaries-better-than-rules","level":2,"title":"Lesson 3: Examples Set Boundaries Better Than Rules","text":"
The most common failure mode of thin skills was not wrong behavior but vague behavior. The agent would do roughly the right thing, but at a quality level that required human cleanup.
Rules like \"be constructive, not critical\" are too abstract. What does \"constructive\" look like in a prompt audit report? The agent has to guess.
Good/bad example pairs avoid guessing:
### Good Example\n\n> This session implemented the cooldown mechanism for\n> `ctx agent`. We discovered that `$PPID` in hook context\n> resolves to the Claude Code PID.\n>\n> I'd suggest persisting:\n> - **Learning**: `$PPID` resolves to Claude Code PID\n> `ctx add learning --context \"...\" --lesson \"...\"`\n> - **Task**: mark \"Add cooldown\" as done\n\n### Bad Examples\n\n* \"*We did some stuff. Want me to save it?*\"\n* Listing 10 trivial learnings that are general knowledge\n* Persisting without asking the user first\n
The good example shows the exact format, level of detail, and command syntax. The bad examples show where the boundary is.
Together, they define a quality corridor without prescribing every word.
Rules describe. Examples demonstrate.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-4-skills-are-read-by-agents-not-humans","level":2,"title":"Lesson 4: Skills Are Read by Agents, Not Humans","text":"
This seems obvious, but it has non-obvious consequences. During the rewrite, one skill included guidance that said \"use a blog or notes app\" for general knowledge that does not belong in the project's learnings file.
The agent does not have a notes app. It does not browse the web to find one. This instruction, clearly written for a human audience, was dead weight in a skill consumed by an AI.
Skills are for the Agents
Every sentence in a skill should be actionable by the agent.
If the guidance requires human judgment or human tools, it belongs in documentation, not in a skill.
The corollary: command references must be exact.
A skill that says \"save it somewhere\" is useless.
A skill that says ctx add learning --context \"...\" --lesson \"...\" --application \"...\" is actionable.
The agent can pattern-match and fill in the blanks.
Litmus test: If a sentence starts with \"you could...\" or assumes external tools, it does not belong in a skill.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-5-the-description-field-is-the-trigger","level":2,"title":"Lesson 5: The Description Field Is the Trigger","text":"
This was covered in Skills That Fight the Platform, but the rewrite reinforced it with data. Several skills had good bodies but vague descriptions:
# Before: vague, activates too broadly or not at all\ndescription: \"Show context summary.\"\n\n# After: specific, activates at the right time\ndescription: \"Show context summary. Use at session start or\n when unclear about current project state.\"\n
The description is not a title. It is the activation condition.
The platform's skill matching reads this field to decide whether to surface the skill. A vague description means the skill either never triggers or triggers when it should not.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-6-flag-tables-beat-prose","level":2,"title":"Lesson 6: Flag Tables Beat Prose","text":"
Most skills wrap CLI tools. The thin versions described flags in prose, if at all. The rewritten versions use tables:
| Flag | Short | Default | Purpose |\n|-------------|-------|---------|--------------------------|\n| `--limit` | `-n` | 20 | Maximum sessions to show |\n| `--project` | `-p` | \"\" | Filter by project name |\n| `--full` | | false | Show complete content |\n
Tables are scannable, complete, and unambiguous.
The agent can read them faster than parsing prose, and they serve as both reference and validation: If the agent invokes a flag not in the table, something is wrong.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-7-template-drift-is-a-real-maintenance-burden","level":2,"title":"Lesson 7: Template Drift Is a Real Maintenance Burden","text":"
Update: ctx now deploys skills from the marketplace; the template mechanism described below reflects the architecture at the time of writing.
ctx deploys skills through templates (via ctx init). Every skill exists in two places: the live version (.claude/skills/) and the template (internal/assets/claude/skills/).
They must match.
During the rewrite, every skill update required editing both files and running diff to verify. This sounds trivial, but across 16 template-backed skills, it was the most error-prone part of the process.
Template drift is dangerous because it creates false confidence: the agent appears to follow rules that no longer exist.
The lesson: if your skills have a deployment mechanism, build the drift check into your workflow. We added a row to the update-docs skill's mapping table specifically for this.
Intentional differences (like project-specific scripts in the live version but not the template) should be documented, not discovered later as bugs.
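The drift check itself can be a small shell helper you run before committing. A rough sketch: `drift_check` is a hypothetical helper name, and the two paths in the comment match this project's layout as described above.

```shell
# drift_check LIVE TEMPLATE
# Prints the differing files and fails when the two skill trees diverge.
# Hypothetical helper; in this repo the two trees would be
# .claude/skills and internal/assets/claude/skills.
drift_check() {
  live="$1"
  tmpl="$2"
  if diff -rq "$live" "$tmpl" >/dev/null 2>&1; then
    echo "in sync: $live == $tmpl"
  else
    echo "drift detected between $live and $tmpl:" >&2
    diff -rq "$live" "$tmpl" >&2 || true
    return 1
  fi
}
```

Wiring this into a pre-commit hook or a `make` target turns the most error-prone step of the rewrite into a mechanical one.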
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-rewrite-scorecard","level":2,"title":"The Rewrite Scorecard","text":"Metric Before After Average skill body ~15 lines ~80 lines Skills with quality gate 0 20 Skills with \"When NOT\" 0 20 Skills with examples 3 20 Skills with flag tables 2 12 Skills with checklist 0 20
More lines, but almost entirely Expert content (per the E/A/R framework). No personality roleplay, no redundant guidance, no capability lists. Just project-specific knowledge the platform does not have.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-meta-lesson","level":2,"title":"The Meta-Lesson","text":"
The previous two posts argued that skills should provide knowledge, not personality; that they should complement the platform, not fight it; that they should grow from project history, not imported templates.
This post adds the missing piece: structure.
A skill without a structure is a wish.
A skill with quality gates, negative triggers, examples, and checklists is a tool: the difference is not the content; it is whether the agent can reliably execute it without human intervention.
Skills are Interfaces
Good skills are not instructions. They are contracts:
They specify preconditions, postconditions, and boundaries.
They show what success looks like and what failure looks like.
They trust the agent's intelligence but do not trust its assumptions.
If You Remember One Thing From This Post...
Skills that work have bones, not just flesh.
Quality gates, negative triggers, examples, and checklists are the skeleton. The domain knowledge is the muscle.
Without the skeleton, the muscle has nothing to attach to.
This post was written during the same session that rewrote all 22 skills. The skill-creator skill was updated to encode these patterns. The meta continues.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/","level":1,"title":"Not Everything Is a Skill","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism. References to /ctx-save, .context/sessions/, and session auto-save in this post reflect the architecture at the time of writing.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#what-a-codebase-audit-taught-me-about-restraint","level":2,"title":"What a Codebase Audit Taught Me About Restraint","text":"
Jose Alekhinne / 2026-02-08
When You Find a Useful Prompt, What Do You Do With It?
My instinct was to make it a skill.
I had just spent three posts explaining how to build skills that work. Naturally, the hammer wanted nails.
Then I looked at what I was holding and realized: this is not a nail.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-audit","level":2,"title":"The Audit","text":"
I wanted to understand how I use ctx:
Where the friction is;
What works, what drifts;
What I keep doing manually that could be automated.
So I wrote a prompt that spawned eight agents to analyze the codebase from different angles:
Agent Analysis 1 Extractable patterns from session history 2 Documentation drift (godoc, inline comments) 3 Maintainability (large functions, misplaced code) 4 Security review (CLI-specific surface) 5 Blog theme discovery 6 Roadmap and value opportunities 7 User-facing documentation gaps 8 Agent team strategies for future sessions
The prompt was specific:
read-only agents,
structured output format,
concrete file references,
ranked recommendations.
It ran for about 20 minutes and produced eight Markdown reports.
The reports were good: Not perfect, but actionable.
What mattered was not the speed. It was that the work could be explored without committing to any single outcome.
They surfaced a stale doc.go referencing a subcommand that was never built.
They found 311 build-then-test sequences I could reduce to a single make check.
They identified that 42% of my sessions start with \"do you remember?\", which is a lot of repetition for something a skill could handle.
I had findings. I had recommendations. I had the instinct to automate.
And then... I stopped.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-question","level":2,"title":"The Question","text":"
The natural next step was to wrap the audit prompt as /ctx-audit: a skill you invoke periodically to get a health check. It fits the pattern:
It has a clear trigger.
It produces structured output.
But I had just spent a week writing about what makes skills work, and the criteria I established argued against it.
From The Anatomy of a Skill That Works:
\"A skill without boundaries is just a suggestion.\"
From You Can't Import Expertise:
\"Frameworks travel, expertise doesn't.\"
From Skills That Fight the Platform:
\"You are the guest, not the host.\"
The audit prompt fails all three tests:
Criterion Audit prompt Good skill Frequency Quarterly, maybe Daily or weekly Stability Tweaked every time Consistent invocation Scope Bespoke, 8 parallel agents Single focused action Trigger \"I feel like auditing\" Clear, repeatable event
Skills are contracts. Contracts need stable terms.
A prompt I will rewrite every time I use it is not a contract. It is a conversation starter.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#recipes-vs-skills","level":2,"title":"Recipes vs Skills","text":"
The distinction that emerged:
Skill Recipe Invocation /slash-command Copy-paste from a doc Frequency High (daily, weekly) Low (quarterly, ad hoc) Stability Fixed contract Adapted each time Scope One focused action Multi-step orchestration Audience The agent The human (who then prompts) Lives in .claude/skills/hack/ or docs/ Attention cost Loaded into context on match Zero until needed
Recipes can later graduate into skills, but only after repetition proves stability.
That last row matters. Skills consume the attention budget every time the platform considers activating them.
A skill that triggers quarterly but gets evaluated on every prompt is pure waste: attention spent on something that will say \"When NOT to Use: now\" 99% of the time.
Recipes have zero attention cost. They sit in a Markdown file until a human decides to use them.
The human provides the judgment about timing.
The prompt provides the structure.
The Attention Budget Applies to Skills Too
Every skill in .claude/skills/ is a standing claim on the context window. The platform evaluates skill descriptions against every user prompt to decide whether to activate.
Twenty focused skills are fine. Thirty might be fine. But each one added reduces the headroom available for actual work.
Recipes are skills that opted out of the attention tax.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#what-the-audit-actually-produced","level":2,"title":"What the Audit Actually Produced","text":"
The audit was not wasted. It was a planning exercise that generated concrete tasks:
Finding Action 42% of sessions start with memory check Task: /ctx-remember skill (this one is a skill; it is daily) Auto-save stubs are empty Task: enhance /ctx-save with richer summaries 311 raw build-test sequences Task: make check target Stale recall/doc.go lists nonexistent serve Task: fix the doc.go 120 commit sequences disconnected from context Task: /ctx-commit workflow
Some findings became skills;
Some became Makefile targets;
Some became one-line doc fixes.
The audit did not prescribe the artifact type: The findings did.
The audit is the input. Skills are one possible output. Not the only one.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-audit-prompt","level":2,"title":"The Audit Prompt","text":"
Here is the exact prompt I used, for those who are curious.
This is not a template: It worked because it was written against this codebase, at this moment, with specific goals in mind:
I want you to create an agent team to audit this codebase. Save each report as\na separate Markdown file under `./ideas/` (or another directory if you prefer).\n\nUse read-only agents (subagent_type: Explore) for all analyses. No code changes.\n\nFor each report, use this structure:\n- Executive Summary (2-3 sentences + severity table)\n- Findings (grouped, with file:line references)\n- Ranked Recommendations (high/medium/low priority)\n- Methodology (what was examined, how)\n\nKeep reports actionable. Every finding should suggest a concrete fix or next step.\n\n## Analyses to Run\n\n### 1. Extractable Patterns (*session mining*)\nSearch session JSONL files, journal entries, and task archives for repetitive\nmulti-step workflows. Count frequency of bash command sequences, slash command\nusage, and recurring user prompts. Identify patterns that could become skills\nor scripts. Cross-reference with existing skills to find coverage gaps.\nOutput: ranked list of automation opportunities with frequency data.\n\n### 2. Documentation Drift (*godoc + inline*)\nCompare every doc.go against its package's actual exports and behavior. Check\ninline godoc comments on exported functions against their implementations.\nScan for stale TODO/FIXME/HACK comments. Check that package-level comments match\npackage names.\nOutput: drift items ranked by severity with exact file:line references.\n\n### 3. Maintainability\nLook for:\n- functions longer than 80 lines with clear split points\n- switch blocks with more than 5 cases that could be table-driven\n- inline comments like \"step 1\", \"step 2\" that indicate a block wants to be a function\n- files longer than 400 lines\n- flat packages that could benefit from sub-packages\n- functions that appear misplaced in their file\n\nDo NOT flag things that are fine as-is just because they could theoretically\nbe different.\nOutput: concrete refactoring suggestions, not style nitpicks.\n\n### 4. Security Review\nThis is a CLI app. 
Focus on CLI-relevant attack surface, not web OWASP:\n- file path traversal\n- command injection\n- symlink following when writing to `.context/`\n- permission handling\n- sensitive data in outputs\n\nOutput: findings with severity ratings and plausible exploit scenarios.\n\n### 5. Blog Theme Discovery\nRead existing blog posts for style and narrative voice. Analyze git history,\nrecent session discussions, and `DECISIONS.md` for story arcs worth writing about.\nSuggest 3-5 blog post themes with:\n- title\n- angle\n- target audience\n- key commits or sessions to reference\n- a 2-sentence pitch\n\nPrioritize themes that build a coherent narrative across posts.\n\n### 6. Roadmap and Value Opportunities\nBased on current features, recent momentum, and gaps found in other analyses,\nidentify the highest-value improvements. Consider user-facing features,\ndeveloper experience, integration opportunities, and low-hanging fruit.\nOutput: prioritized list with rough effort and impact estimates.\n\n### 7. User-Facing Documentation\nEvaluate README, help text, and user docs. Suggest improvements structured as\nuse-case pages: the problem, how ctx solves it, a typical workflow, and gotchas.\nIdentify gaps where a user would get stuck without reading source code.\nOutput: documentation gaps with suggested page outlines.\n\n### 8. Agent Team Strategies\nBased on the codebase structure, suggest 2-3 agent team configurations for\nupcoming work sessions. For each, include:\n- team composition (roles and agent types)\n- task distribution strategy\n- coordination approach\n- the kinds of work it suits\n
Avoid Generic Advice
Suggestions that are not grounded in a project's actual structure, history, and workflows are worse than useless:
They create false confidence.
If an analysis cannot point to concrete files, commits, sessions, or patterns, it should say \"no finding\" instead of inventing best practices.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-deeper-pattern","level":2,"title":"The Deeper Pattern","text":"
This is part of a pattern I keep rediscovering:
The urge to automate is not the same as the need to automate:
The 3:1 ratio taught me that not every session should be a YOLO sprint.
The E/A/R framework taught me that not every template is worth importing. Now the audit is teaching me that not every useful prompt is worth institutionalizing.
The common thread is restraint:
Knowing when to stop.
Recognizing that the cost of automation is not just the effort to build it.
The cost is the ongoing attention tax of maintaining it, the context it consumes, and the false confidence it creates when it drifts.
An entry in hack/runbooks/codebase-audit.md is honest about what it is:
A prompt I wrote once, improved once, and will adapt again next time:
It does not pretend to be a reliable contract.
It does not claim attention budget.
It does not drift silently.
The Automation Instinct
When you find a useful prompt, the instinct is to institutionalize it. Resist.
Ask first: will I use this the same way next time?
If yes, it is a skill. If no, it is a recipe. If you are not sure, it is a recipe until proven otherwise.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#this-mindset-in-the-context-of-ctx","level":2,"title":"This Mindset In the Context of ctx","text":"
ctx is a tool that gives AI agents persistent memory. Its purpose is automation: reducing the friction of context loading, session recall, decision tracking.
But automation has boundaries, and knowing where those boundaries are is as important as pushing them forward.
The skills system is for high-frequency, stable workflows.
The recipes, the journal entries, the session dumps in .context/sessions/: those are for everything else.
Not everything needs to be a slash command. Some things are better as Markdown files you read when you need them.
The goal of ctx is not to automate everything: It is to automate the right things and to make the rest easy to find when you need it.
If You Remember One Thing From This Post...
The best automation decision is sometimes not to automate.
A runbook in a Markdown file costs nothing until you use it.
A skill costs attention on every prompt, whether it fires or not.
Automate the daily. Document the periodic. Forget the rest.
This post was written during the session that produced the codebase audit reports and distilled the prompt into hack/runbooks/codebase-audit.md. The audit generated seven tasks, one Makefile target, and zero new skills. The meta continues.
See also: Code Is Cheap. Judgment Is Not.: the capstone that threads this post's restraint argument into the broader case for why judgment, not production, is the bottleneck.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#when-markdown-is-not-a-security-boundary","level":2,"title":"When Markdown Is Not a Security Boundary","text":"
Jose Alekhinne / 2026-02-09
What Happens When Your AI Agent Runs Overnight and Nobody Is Watching?
It follows instructions: That is the problem.
Not because it is malicious. Because it is controllable.
It follows instructions from context, and context can be poisoned.
I was writing the autonomous loops recipe for ctx: the guide for running an AI agent in a loop overnight, unattended, working through tasks while you sleep. The original draft had a tip at the bottom:
Use CONSTITUTION.md for guardrails. Tell the agent \"never delete tests\" and it usually won't.
Then I read that sentence back and realized: that is wishful thinking.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-realization","level":2,"title":"The Realization","text":"
CONSTITUTION.md is a Markdown file. The agent reads it at session start alongside everything else in .context/. It is one source of instructions in a context window that also contains system prompts, project files, conversation history, tool outputs, and whatever the agent fetched from the internet.
An attacker who can inject content into any of those sources can redirect the agent's behavior. And \"attacker\" does not always mean a person with malicious intent. It can be:
Vector Example A dependency A malicious npm package with instructions in its README or error output A URL Documentation page with embedded adversarial instructions A project file A contributor who adds instructions to CLAUDE.md or .cursorrules The agent itself In an autonomous loop, the agent modifies its own config between iterations A command output An error message containing instructions the agent interprets and follows
That last vector is the one that kept me up at night (literally!):
In an autonomous loop, the agent modifies files as part of its job.
If it modifies its own configuration files, the next iteration loads the modified config.
No human reviews it.
No diff is shown.
The agent that starts iteration N+1 is running with rules written by iteration N.
The agent can rewrite its own guardrails.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#five-layers-each-with-a-hole","level":2,"title":"Five Layers, Each With a Hole","text":"
That's five nested layers of Swiss cheese. Alone, each of them has large holes. Together, they create a boundary.
What followed was a week of peeling back assumptions:
Every defense I examined had a bypass, and the bypass was always the same shape: the defense was enforced at a level the agent could reach.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
CONSTITUTION.md, the Agent Playbook, system prompts: These tell the agent what to do.
The agent usually follows them.
\"Usually\" is the keyword here.
The hole: Prompt injection:
A sufficiently crafted payload overrides soft instructions.
Long context windows dilute attention on rules stated early.
Edge cases where instructions are ambiguous get resolved in unpredictable ways.
Verdict: Necessary. Not sufficient. Good for the common case. Never trust it as a security boundary.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
Permission allowlists in .claude/settings.local.json:
If rm, curl, sudo, or docker are not in the allowlist, the agent cannot invoke them. This is deterministic.
The application enforces it regardless of what any prompt says.
The hole: The agent can modify the allowlist itself:
It has Write permission.
The allowlist lives in a file.
The agent writes to the file.
The next iteration loads the modified allowlist.
The application enforces the rules, but the application reads the rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention.
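One way to wire self-modification prevention into a loop harness is a checksum gate between iterations. A minimal sketch, assuming a coreutils environment; the helper names are mine, and `.claude/settings.local.json` is the allowlist file discussed above:

```shell
# Before the loop: record what the allowlist looked like when a human
# last reviewed it.
record_allowlist() {
  sha256sum "$1" > "$1.sha256"
}

# Between iterations: stop the loop if the agent rewrote its own permissions.
verify_allowlist() {
  if ! sha256sum -c "$1.sha256" >/dev/null 2>&1; then
    echo "allowlist modified since last review; stopping loop" >&2
    return 1
  fi
}

# In a real harness, roughly:
#   record_allowlist .claude/settings.local.json
#   ...run iteration...
#   verify_allowlist .claude/settings.local.json || break
```

The checksum file must live somewhere the agent cannot write, or the gate can be rewritten along with the allowlist.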
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-3-os-level-isolation-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Unbypassable)","text":"
This is where the defenses stop having holes in the same shape.
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
Control What it stops Dedicated unprivileged user Privilege escalation, sudo, group-based access Filesystem permissions Lateral movement to other projects, system config Immutable config files Self-modification of guardrails between iterations
Make the agent's instruction files read-only: CLAUDE.md, .claude/settings.local.json, .context/CONSTITUTION.md. Own them as a different user, or mark them immutable with chattr +i on Linux.
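Concretely, the read-only setup might look like this. A sketch, not a hardening guide: it must run as root before the loop starts, and `chattr +i` requires a filesystem that supports the immutable flag (ext4, xfs).

```shell
# Own the guardrail files as root so the unprivileged agent user cannot
# chmod them back, then mark them immutable so even a root-owned process
# must clear the flag before writing.
chown root:root CLAUDE.md .claude/settings.local.json .context/CONSTITUTION.md
chmod 644       CLAUDE.md .claude/settings.local.json .context/CONSTITUTION.md
chattr +i       CLAUDE.md .claude/settings.local.json .context/CONSTITUTION.md

# To edit later: chattr -i <file>, make the change, chattr +i again.
```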
The hole: Actions within the agent's legitimate scope:
If the agent has write access to source code (which it needs), it can introduce vulnerabilities in the code itself.
You cannot prevent this without removing the agent's ability to do its job.
Verdict: Essential. This is the layer that makes Layers 1 and 2 trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data.
It also cannot ingest new instructions mid-loop from external documents, error pages, or hostile content.
# Container with no network\ndocker run --network=none ...\n\n# Or firewall rules allowing only package registries\niptables -A OUTPUT -d registry.npmjs.org -j ACCEPT\niptables -A OUTPUT -d proxy.golang.org -j ACCEPT\niptables -A OUTPUT -j DROP\n
If the agent genuinely does not need the network, disable it entirely.
If it needs to fetch dependencies, allow specific registries and block everything else.
The hole: None, if the agent does not need the network.
The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
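A middle ground is to warm the dependency cache on the host and mount it read-only, so the loop itself runs with no network at all. A sketch for a Go project; `agent-image` is a placeholder, the flags are standard Docker and Go tooling:

```shell
# On the host, with network: warm the module cache once.
go mod download

# In the loop, without network: the agent builds from the read-only cache.
docker run --rm \
  --network=none \
  -v "$(go env GOMODCACHE)":/go/pkg/mod:ro \
  -v "$PWD":/workspace \
  -w /workspace \
  agent-image go build ./...
```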
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine.
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
An agent with access to the Docker socket can spawn sibling containers with full host access, effectively escaping the sandbox.
This is not theoretical: the Docker socket grants root-equivalent access to the host.
Use rootless Docker or Podman to eliminate this escalation path entirely.
Virtual machines are even stronger: The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-pattern","level":2,"title":"The Pattern","text":"
Each layer is straightforward: The strength is in the combination:
Layer Implementation What it stops Soft instructions CONSTITUTION.md Common mistakes (probabilistic) Application allowlist .claude/settings.local.json Unauthorized commands (deterministic within runtime) Immutable config chattr +i on config files Self-modification between iterations Unprivileged user Dedicated user, no sudo Privilege escalation Container --cap-drop=ALL --network=none Host escape, data exfiltration Resource limits --memory=4g --cpus=2 Resource exhaustion
No layer is redundant. Each one catches what the others miss:
The soft instructions handle the 99% case: \"don't delete tests.\"
The allowlist prevents the agent from running commands it should not.
The immutable config prevents the agent from modifying the allowlist.
The unprivileged user prevents the agent from removing the immutable flag.
The container prevents the agent from reaching anything outside its workspace.
The resource limits prevent the agent from consuming all system resources.
Remove any one layer and there is an attack path through the remaining ones.
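Assembled into a single invocation, the container-level layers from the table might look like this. The flags are standard Docker options; the image name, user ID, and limits are placeholders to adapt:

```shell
# Layers 3-5 in one command: unprivileged user, no capabilities,
# no privilege escalation, no network, hard resource caps,
# read-only root filesystem with a scratch tmpfs.
docker run --rm \
  --user 10001:10001 \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network=none \
  --memory=4g \
  --cpus=2 \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD":/workspace \
  -w /workspace \
  agent-image
```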
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#common-mistakes-i-see","level":2,"title":"Common Mistakes I See","text":"
These are real patterns, not hypotheticals:
\"I'll just use --dangerously-skip-permissions.\" This disables Layer 2 entirely. Without Layers 3 through 5, you have no protection at all. The flag means what it says. If you ever need to, think thrice, you probably don't. But, if you ever need to usee this only use it inside a properly isolated VM (not even a container: a \"VM\").
\"The agent is sandboxed in Docker.\" A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"I reviewed CLAUDE.md, it's fine.\" You reviewed it before the loop started. The agent modified it during iteration 3. Iteration 4 loaded the modified version. Unless the file is immutable, your review is futile.
\"The agent only has access to this one project.\" Does the project directory contain .env files? SSH keys? API tokens? A .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-connection-to-context-engineering","level":2,"title":"The Connection to Context Engineering","text":"
This is the same lesson I keep rediscovering, wearing different clothes.
In The Attention Budget, I wrote about how every token competes for the AI's focus. Security instructions in CONSTITUTION.md are subject to the same budget pressure: if the context window is full of code, error messages, and tool outputs, the security rules stated at the top get diluted.
In Skills That Fight the Platform, I wrote about how custom instructions can conflict with the AI's built-in behavior. Security rules have the same problem: telling an agent \"never run curl\" in Markdown while giving it unrestricted shell access creates a contradiction: The agent resolves contradictions unpredictably. The agent will often pick the path of least resistance to attain its objective function. And, trust me, agents can get far more creative than the best red-teamer you know.
In You Can't Import Expertise, I wrote about how generic templates fail because they do not encode project-specific knowledge. Generic security advice fails the same way: \"Don't exfiltrate data\" is a category; blocking outbound network access is a control.
The pattern across all of these: Soft instructions are useful for the common case. Hard boundaries are required for security.
Know which is which.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-checklist","level":2,"title":"The Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
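A couple of the filesystem items can be checked mechanically before the loop starts. A rough, deliberately incomplete sketch; the helper name and the file patterns it looks for are mine:

```shell
# preflight DIR
# Fails if obvious secret-bearing files sit in the project directory and
# warns if the Docker socket is reachable from this host. Not exhaustive:
# a passing preflight does not mean the checklist is satisfied.
preflight() {
  dir="${1:-.}"
  status=0
  if find "$dir" -maxdepth 2 \( -name '.env' -o -name 'id_rsa' -o -name '*.pem' \) 2>/dev/null | grep -q .; then
    echo "FAIL: secret-like files found under $dir" >&2
    status=1
  fi
  if [ -S /var/run/docker.sock ]; then
    echo "WARN: Docker socket present; ensure it is NOT mounted" >&2
  fi
  return $status
}
```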
This checklist lives in the Agent Security reference alongside the full threat model and detailed guidance for each layer.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#what-changed-in-ctx","level":2,"title":"What Changed in ctx","text":"
The autonomous loops recipe now has a full permissions and isolation section instead of a one-line tip about CONSTITUTION.md. It covers both the explicit allowlist approach and the --dangerously-skip-permissions flag, with honest guidance about when each is appropriate.
It also has an OS-level isolation table that is not optional: unprivileged users, filesystem permissions, containers, VMs, network controls, resource limits, and self-modification prevention.
The Agent Security page consolidates the threat model and defense layers into a standalone reference.
These are not theoretical improvements. They are the minimum responsible guidance for a tool that helps people run AI agents overnight.
If You Remember One Thing From This Post...
Markdown is not a security boundary.
CONSTITUTION.md is a nudge. An allowlist is a gate.
An unprivileged user in a network-isolated container is a wall.
Use all three. Trust only the wall.
This post was written during the session that added permissions, isolation, and self-modification prevention to the autonomous loops recipe. The security guidance started as a single tip and grew into two documents. The meta continues.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/","level":1,"title":"How Deep Is Too Deep?","text":"","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#when-master-ml-is-the-wrong-next-step","level":2,"title":"When \"Master ML\" Is the Wrong Next Step","text":"
Jose Alekhinne / 2026-02-12
Have You Ever Felt Like You Should Understand More of the Stack Beneath You?
You can talk about transformers at a whiteboard.
You can explain attention to a colleague.
You can use agentic AI to ship real software.
But somewhere in the back of your mind, there is a voice:
\"Maybe I should go deeper. Maybe I need to master machine learning.\"
I had that voice for months.
Then I spent a week debugging an agent failure that had nothing to do with ML theory and everything to do with knowing which abstraction was leaking.
This post is about when depth compounds and (more importantly) when it does not.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-hierarchy-nobody-questions","level":2,"title":"The Hierarchy Nobody Questions","text":"
There is an implicit stack most people carry around when thinking about AI:
| Layer | What Lives Here |
| --- | --- |
| Agentic AI | Autonomous loops, tool use, multi-step reasoning |
| Generative AI | Text, image, code generation |
| Deep Learning | Transformer architectures, training at scale |
| Neural Networks | Backpropagation, gradient descent |
| Machine Learning | Statistical learning, optimization |
| Classical AI | Search, planning, symbolic reasoning |
At some point down that stack, you hit a comfortable plateau: the layer where you can hold a conversation but not debug a failure.
The instinctive response is to go deeper.
But that instinct hides a more important question:
\"Does depth still compound when the abstractions above you are moving hyper-exponentially?\"
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-honest-observation","level":2,"title":"The Honest Observation","text":"
If you squint hard enough, a large chunk of modern ML intuition collapses into older fields:
| ML Concept | Older Field |
| --- | --- |
| Gradient descent | Numerical optimization |
| Backpropagation | Reverse-mode autodiff |
| Loss landscapes | Non-convex optimization |
| Generalization | Statistics |
| Scaling laws | Asymptotics and information theory |
Nothing here is uniquely \"AI\".
Most of this math predates the term deep learning. In some cases, by decades.
So what changed?
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#same-tools-different-regime","level":2,"title":"Same Tools, Different Regime","text":"
The mistake is assuming this is a new theory problem: It is not.
It is a new operating regime.
Classical numerical methods were developed under assumptions like:
Manageable dimensionality
Reasonably well-conditioned objectives
Losses that actually represent the goal
Modern ML violates all three: On purpose.
Today's models operate with millions to trillions of parameters, wildly underdetermined systems, and objective functions we know are wrong but optimize anyway.
It is complete and utter madness!
At this scale, familiar concepts warp:
What we call \"local minima\" are overwhelmingly saddle points in high-dimensional spaces.
Noise stops being noise and starts becoming structure.
Overfitting can coexist with generalization.
Bigger models outperform \"better\" ones.
The math did not change: The phase did.
This is less numerical analysis and more statistical physics: Same equations, but behavior dominated by phase transitions and emergent structure.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#why-scaling-laws-feel-alien","level":2,"title":"Why Scaling Laws Feel Alien","text":"
In classical statistics, asymptotics describe what happens eventually.
In modern ML, scaling laws describe where you can operate today.
They do not say \"given enough time, things converge\".
They say \"cross this threshold and behavior qualitatively changes\".
This is why dumb architectures plus scale beat clever ones.
Why small theoretical gains disappear under data.
Why \"just make it bigger\", ironically, keeps working longer than it should.
That is not a triumph of ML theory: It is a property of high-dimensional systems under loose objectives.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#where-depth-actually-pays-off","level":2,"title":"Where Depth Actually Pays Off","text":"
This reframes the original question.
You do not need depth because this is \"AI\".
You need depth where failure modes propagate upward.
I learned this building ctx: The agent failures I have spent the most time debugging were never about the model's architecture.
They were about:
Misplaced trust: The model was confident. The output was wrong. Knowing when confidence and correctness diverge is not something you learn from a textbook. You learn it from watching patterns across hundreds of sessions.
Distribution shift: The model performed well on common patterns and fell apart on edge cases specific to this project. Recognizing that shift before it compounds requires understanding why generalization has limits, not just that it does.
Error accumulation: In a single prompt, model quirks are tolerable. In autonomous loops running overnight, they compound. A small bias in how the model interprets instructions becomes a large drift by iteration 20.
Scale hiding errors: The model's raw capability masked problems that only surfaced under specific conditions. More parameters did not fix the issue. They just made the failure mode rarer and harder to reproduce.
This is the kind of depth that compounds. Not deriving backprop, but understanding when correct math produces misleading intuition.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-connection-to-context-engineering","level":2,"title":"The Connection to Context Engineering","text":"
This is the same pattern I keep finding at different altitudes.
In \"The Attention Budget\", I wrote about how dumping everything into the context window degrades the model's focus. The fix was not a better model: It was better curation: load less, load the right things, preserve signal per token.
In \"Skills That Fight the Platform\", I wrote about how custom instructions can conflict with the model's built-in behavior. The fix was not deeper ML knowledge: It was an understanding that the model already has judgment and that you should extend it, not override it.
In \"You Can't Import Expertise\", I wrote about how generic templates fail because they do not encode project-specific knowledge. A consolidation skill with eight Rust-based analysis dimensions was mostly noise for a Go project. The fix was not a better template: It was growing expertise from this project's own history.
In every case, the answer was not \"go deeper into ML\".
The answer was knowing which abstraction was leaking and fixing it at the right layer.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#agentic-systems-are-not-an-ml-problem","level":2,"title":"Agentic Systems Are Not an ML Problem","text":"
The mistake is assuming agent failures originate where the model was trained, rather than where it is deployed.
Agentic AI is a systems problem under chaotic uncertainty:
Feedback loops between the agent and its environment;
Error accumulation across iterations;
Brittle representations that break outside training distribution;
Misplaced trust in outputs that look correct.
In short-lived interactions, model quirks are tolerable. In long-running autonomous loops, however, they compound.
That is where shallow understanding becomes expensive.
But the understanding you need is not about optimizer internals.
It is about:
| What Matters | What Does Not (for Most Practitioners) |
| --- | --- |
| Why gradient descent fails in specific regimes | How to derive it from scratch |
| When memorization masquerades as reasoning | The formal definition of VC dimension |
| Recognizing distribution shift before it compounds | Hand-tuning learning rate schedules |
| Predicting when scale hides errors instead of fixing them | Chasing theoretical purity divorced from practice |
The depth that matters is diagnostic, not theoretical.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-real-answer","level":2,"title":"The Real Answer","text":"
Not turtles all the way down.
Go deep enough to:
Diagnose failures instead of cargo-culting fixes;
Reason about uncertainty instead of trusting confidence;
Design guardrails that align with model behavior, not hope.
Stop before:
Hand-deriving gradients for the sake of it;
Obsessing over optimizer internals you will never touch;
Chasing theoretical purity divorced from the scale you actually operate at.
This is not about mastering ML.
It is about knowing which abstractions you can safely trust and which ones leak.
Hint: Any useful abstraction almost certainly leaks.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#a-practical-litmus-test","level":2,"title":"A Practical Litmus Test","text":"
If a failure occurs and your instinct is to:
Add more prompt text: abstraction leak above
Add retries or heuristics: error accumulation
Change the model: scale masking
Reach for ML theory: you are probably (but not always) going too deep
The right depth is the shallowest layer where the failure becomes predictable.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-ctx-lesson","level":2,"title":"The ctx Lesson","text":"
Every design decision in ctx is downstream of this principle.
The attention budget exists because the model's internal attention mechanism has real limits: You do not need to understand the math of softmax to build around it. But you do need to understand that more context is not always better and that attention density degrades with scale.
The skill system exists because the model's built-in behavior is already good: You do not need to understand RLHF to build effective skills. But you do need to understand that the model already has judgment and your skills should teach it things it does not know, not override how it thinks.
Defense in depth exists because soft instructions are probabilistic: You do not need to understand the transformer architecture to know that a Markdown file is not a security boundary. But you do need to understand that the model follows instructions from context, and context can be poisoned.
In each case, the useful depth was one or two layers below the abstraction I was working at: Not at the bottom of the stack.
The boundary between useful understanding and academic exercise is where your failure modes live.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#closing-thought","level":2,"title":"Closing Thought","text":"
Most modern AI systems do not fail because the math is wrong.
They fail because we apply correct math in the wrong regime, then build autonomous systems on top of it.
Understanding that boundary, not crossing it blindly, is where depth still compounds.
And that is a far more useful form of expertise than memorizing another loss function.
If You Remember One Thing From This Post...
Go deep enough to diagnose your failures. Stop before you are solving problems that do not propagate to your layer.
The abstractions below you are not sacred. But neither are they irrelevant.
The useful depth is wherever your failure modes live. Usually one or two layers down, not at the bottom.
This post started as a note about whether I should take an ML course. The answer turned out to be \"no, but understand why not\". The meta continues.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/","level":1,"title":"Before Context Windows, We Had Bouncers","text":"","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#the-reset-problem","level":2,"title":"The Reset Problem","text":"
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#stateless-protocol-stateful-life","level":2,"title":"Stateless Protocol, Stateful Life","text":"
IRC is minimal:
A TCP connection.
A nickname.
A channel.
A stream of lines.
When the connection drops, you literally disappear from the graph.
The protocol is stateless; human systems are not.
So you:
Reconnect;
Ask what you missed;
Scroll;
Reconstruct.
The machine forgets; you pay.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#the-bouncer-pattern","level":2,"title":"The Bouncer Pattern","text":"
A bouncer is a daemon that remains connected when you do not:
It holds your seat;
It buffers what you missed;
It keeps your identity online.
ZNC is one such bouncer.
With ZNC:
Your client does not connect to IRC;
It connects to ZNC;
ZNC connects upstream.
Client sessions become ephemeral.
Presence becomes infrastructural.
ZNC is tmux for IRC
Close your laptop.
ZNC remains.
Switch devices.
ZNC persists.
This is not convenience; this is continuity.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#presence-without-flapping","level":2,"title":"Presence Without Flapping","text":"
With a bouncer:
Closing your client does not emit PART.
Reopening does not emit JOIN.
You do not flap in and out of existence.
From the channel's perspective, you remain.
From your perspective, history accumulates.
Buffers persist;
Identity persists;
Context persists.
This pattern predates AI.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#before-llm-context-windows","level":2,"title":"Before LLM Context Windows","text":"
An LLM session without memory is IRC without a bouncer:
Close the window.
Start over.
Re-explain intent.
Rehydrate context.
That is friction.
This Walks and Talks like ctx
Context engineering moves memory out of sessions and into infrastructure.
ZNC does this for IRC.
ctx does this for agents.
Same principle:
Volatile interface.
Persistent substrate.
Different fabric.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#minimal-architecture","level":2,"title":"Minimal Architecture","text":"
My setup is intentionally boring:
A small $5 VPS.
ZNC installed.
TLS enabled.
Firewall restricted.
Then:
ZNC connects to Libera.Chat.
SASL authentication lives inside ZNC.
Buffers are stored on disk.
My client connects to my VPS, not the network.
The commands do not matter: The boundaries do:
Authentication in infrastructure, not in the client;
Memory server-side, not in scrollback;
Presence decoupled from activity.
Everything else is configuration.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#platform-memory","level":2,"title":"Platform Memory","text":"
Yes, I know, it is 2026:
Discord stores history;
Slack stores history;
Even the gasoline-doused dumpster fire called X stores history.
HOWEVER, they own your substrate.
Running a bouncer is quiet sovereignty:
Logs are mine.
Presence is continuous.
State does not reset because I closed a tab.
Small acts compound.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#signal-density","level":2,"title":"Signal Density","text":"
Primitive systems select for builders.
Consistent presence in small rooms compounds reputation.
Quiet compounding outperforms viral spikes.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#infrastructure-as-cognition","level":2,"title":"Infrastructure as Cognition","text":"
ZNC is not interesting because it is retro; it is interesting because it models a principle:
Stateless protocols require stateful wrappers;
Volatile interfaces require durable memory;
Human systems require continuity.
Distilled:
Humans require context.
Before context windows, we had bouncers.
Before AI memory files, we had buffers.
Continuity is not a feature; it is a design decision.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#build-it","level":2,"title":"Build It","text":"
If you want the actual setup (VPS, ZNC, TLS, SASL, firewall...) there is a step-by-step runbook:
Persistent IRC Presence with ZNC.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-irc-as-context/#motd","level":2,"title":"MOTD","text":"
When my client connects to my bouncer, it prints:
// / ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
// \ Copyright 2026-present Context contributors.
// SPDX-License-Identifier: Apache-2.0
See also: Context as Infrastructure -- the post that takes this observation to its conclusion: stateless protocols need stateful wrappers, and AI sessions need persistent filesystems.
","path":["Before Context Windows, We Had Bouncers"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/","level":1,"title":"Parallel Agents with Git Worktrees","text":"","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-backlog-problem","level":2,"title":"The Backlog Problem","text":"
Jose Alekhinne / 2026-02-14
What Do You Do With 30 Open Tasks?
You could work through them one at a time.
One agent, one branch, one commit stream.
Or you could ask: which of these don't touch each other?
I had 30 open tasks in TASKS.md. Some were docs. Some were a new encryption package. Some were test coverage for a stable module. Some were blog posts.
They had almost zero file overlap.
Running one agent at a time meant serial execution on work that was fundamentally parallel:
I was bottlenecking on me, not on the machine.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-insight-file-overlap-is-the-constraint","level":2,"title":"The Insight: File Overlap Is the Constraint","text":"
This is not a scheduling problem: It's a conflict avoidance problem.
Two agents can work simultaneously on the same codebase if and only if they don't touch the same files. The moment they do, you get merge conflicts: And merge conflicts on AI-generated code are expensive because the human has to arbitrate choices they didn't make.
So the question becomes:
\"Can you partition your backlog into non-overlapping tracks?\"
For ctx, the answer was obvious:
| Track | Touches | Tasks |
| --- | --- | --- |
| work/docs | docs/, hack/ | Blog posts, recipes, runbooks |
| work/pad | internal/cli/pad/, specs | Scratchpad encryption, CLI, tests |
| work/tests | internal/cli/recall/ | Recall test coverage |
Three tracks. Near-zero overlap. Three agents.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#git-worktrees-the-mechanism","level":2,"title":"Git Worktrees: The Mechanism","text":"
git has a feature that most people don't use: worktrees.
A worktree is a second (or third, or fourth) working directory that shares the same .git object database as your main checkout.
Each worktree has its own branch, its own index, its own working tree. But they all share history, refs, and objects.
This is cheaper than three clones. And because they share objects, git merge afterwards is fast: It's a local operation on shared data.
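The mechanism fits in a few commands. A self-contained sketch against a throwaway repo; in the post's layout the main checkout would be ~/WORKSPACE/ctx and the worktrees its siblings:

```shell
# One repo, two sibling worktrees sharing a single .git object database.
root=$(mktemp -d)
git -C "$root" init -q ctx
git -C "$root/ctx" -c user.email=agent@example.com -c user.name=agent \
    commit -q --allow-empty -m "init"

# Siblings, not subdirectories; each worktree gets its own branch,
# index, and working tree, but shares history, refs, and objects.
git -C "$root/ctx" worktree add -q -b work/docs "$root/ctx-docs"
git -C "$root/ctx" worktree add -q -b work/pad "$root/ctx-pad"

git -C "$root/ctx" worktree list   # main checkout plus the two siblings
```

`git worktree remove` cleans a track up once its branch is merged.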
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-setup","level":2,"title":"The Setup","text":"
The workflow I landed on:
1. Group tasks by blast radius.
Read TASKS.md. For each pending task, estimate which files and directories it touches. Group tasks that share files into the same track. Tasks with no overlap go into separate tracks.
This is the part that requires human judgment:
An agent can propose groupings, but you need to verify that the boundaries are real. A task that says \"update docs\" but actually touches Go code will poison a docs track.
2. Create worktrees as sibling directories.
Not subdirectories: Siblings.
If your main checkout is at ~/WORKSPACE/ctx, worktrees go at ~/WORKSPACE/ctx-docs, ~/WORKSPACE/ctx-pad, etc.
Why siblings? Because some tools (and some agents) walk up the directory tree looking for .git. A worktree inside the main checkout confuses them.
3. Run one agent per worktree.
Each agent gets a full working copy with .context/ intact. It reads the same TASKS.md, the same DECISIONS.md, the same CONVENTIONS.md. It knows the full project state. It just works on a different slice.
4. Do NOT run ctx init in worktrees.
This is the gotcha. The .context/ directory is tracked in git. Running ctx init in a worktree would overwrite shared context files: Wiping decisions, learnings, and tasks that belong to the whole project.
The worktree already has everything it needs. Leave it alone.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#what-actually-happened","level":2,"title":"What Actually Happened","text":"
I ran three agents for about 40 minutes. Here is roughly what each track produced:
work/docs: Parallel worktrees recipe, blog post edits, recipe index reorganization, IRC recipe moved from docs/ to hack/.
work/pad: ctx pad show subcommand, --append and --prepend flags on ctx pad edit, spec updates, 28 new test functions.
work/tests: Recall test coverage, edge case tests.
Merging took about five minutes. Two of the three merges were clean.
The third had a conflict in TASKS.md:
both the docs track and the pad track had marked different tasks as [x].
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-tasksmd-conflict","level":2,"title":"The TASKS.md Conflict","text":"
This deserves its own section because it will happen every time.
When two agents work in parallel, they both read TASKS.md at the start and mark tasks complete as they go. When you merge, git sees two branches that modified the same file differently.
The resolution is always the same: accept all completions from both sides. No task should go from [x] back to [ ]. The merge is additive.
This is one of those conflicts that sounds scary but is trivially mechanical: You are not arbitrating design decisions; you are combining two checklists.
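Because the rule is mechanical, it can even be scripted. A hypothetical helper, not part of ctx, assuming plain `- [ ]` / `- [x]` checkbox lines that appear in the same order on both branches (headings and other lines would need extra handling):

```shell
# Combine two TASKS.md versions: a task checked [x] on either side stays
# checked; nothing ever goes from [x] back to [ ].
combine_tasks() {
  awk '
    /^- \[.\] / {
      key = substr($0, 7)                      # task text after "- [x] "
      if (!(key in seen)) { seen[key] = 1; order[++n] = key }
      if ($0 ~ /^- \[x\]/) done[key] = 1
      next
    }
    END {
      for (i = 1; i <= n; i++) {
        mark = (order[i] in done) ? "- [x] " : "- [ ] "
        print mark order[i]
      }
    }
  ' "$1" "$2"
}

printf -- '- [x] write docs\n- [ ] add tests\n' > /tmp/ours.md
printf -- '- [ ] write docs\n- [x] add tests\n' > /tmp/theirs.md
combine_tasks /tmp/ours.md /tmp/theirs.md   # both tasks come out checked
```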
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#limits","level":2,"title":"Limits","text":"
3-4 worktrees, maximum.
I tried four once: By the time I merged the third track, the fourth had drifted far enough that its changes needed rebasing.
The merge complexity grows faster than the parallelism benefit.
Three is the sweet spot:
Two is conservative but safe;
Four is possible if the tracks are truly independent;
Anything more than four, you are in the danger zone.
Group by directory, not by priority.
It is tempting to put all the high-priority tasks in one track: Don't.
Two high-priority tasks that touch the same files must be in the same track, regardless of urgency. The constraint is file overlap, not importance.
Commit frequently.
Smaller commits make merge conflicts easier to resolve. An agent that writes 500 lines in a single commit is harder to merge than one that commits every logical step.
Name tracks by concern.
work/docs and work/pad tell you what's happening;
work/track-1 and work/track-2 tell you nothing.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-pattern","level":2,"title":"The Pattern","text":"
This is the same pattern that shows up everywhere in ctx:
The attention budget taught me that you can't dump everything into one context window. You have to partition, prioritize, and load selectively.
Worktrees are the same principle applied to execution: You can't dump every task into one agent's workstream. You have to partition by blast radius, assign selectively, and merge deliberately.
The codebase audit that generated these 30 tasks used eight parallel agents for analysis. Worktrees let me use parallel agents for implementation. Same coordination pattern, different artifact.
And the IRC bouncer post from earlier today argued that stateless protocols need stateful wrappers. Worktrees are the same: git branches are stateless forks; .context/ is the stateful wrapper that gives each agent the project's full memory.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#should-this-be-a-skill","level":2,"title":"Should This Be a Skill?","text":"
I asked myself the same question I asked about the codebase audit: should this be a /ctx-worktree skill?
This time the answer was a resounding \"yes\":
Unlike the audit prompt (which I tweak every time and run every other week) the worktree workflow is:
| Criterion | Worktree workflow | Codebase audit |
| --- | --- | --- |
| Frequency | Weekly | Quarterly |
| Stability | Same steps every time | Tweaked every time |
| Scope | Mechanical, bounded | Bespoke, 8 agents |
| Trigger | Large backlog | "I feel like auditing" |
The commands are mechanical: git worktree add, git worktree remove, branch naming, safety checks. This is exactly what skills are for: stable contracts for repetitive operations.
Ergo, /ctx-worktree exists.
It enforces the 4-worktree limit, creates sibling directories, uses work/ branch prefixes, and reminds you not to run ctx init in worktrees.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-14-parallel-agents-with-worktrees/#the-takeaway","level":2,"title":"The Takeaway","text":"
Serial execution is the default. But serial is not always necessary.
If your backlog partitions cleanly by file overlap, you can multiply your throughput with nothing more exotic than git worktree and a second terminal window.
The hard part is not the git commands; it is the discipline:
Grouping by blast radius instead of priority;
Accepting that TASKS.md will conflict;
And knowing when three tracks is enough.
If You Remember One Thing From This Post...
Partition by blast radius, not by priority.
Two tasks that touch the same files belong in the same track, no matter how important the other one is.
The constraint is file overlap. Everything else is scheduling.
The practical setup (skill invocation, worktree creation, merge workflow, and cleanup) lives in the recipe: Parallel Agent Development with Git Worktrees.
","path":["Parallel Agents with Git Worktrees"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/","level":1,"title":"ctx v0.3.0: The Discipline Release","text":"","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#when-the-ratio-of-polish-to-features-is-31-you-know-something-changed","level":2,"title":"When the Ratio of Polish to Features Is 3:1, You Know Something Changed","text":"
Jose Alekhinne / February 15, 2026
What Does a Release Look Like When Most of the Work Is Invisible?
No new headline feature. No architectural pivot. No rewrite.
Just 35+ documentation and quality commits against ~15 feature commits... and somehow, the tool feels like it grew up overnight.
Six days separate v0.2.0 from v0.3.0.
Measured by calendar time, it is nothing. Measured by what changed in how the project operates, it is the most significant release yet.
v0.1.0 was the prototype;
v0.2.0 was the archaeology release: making the past accessible;
v0.3.0 is the discipline release: the one that turned best practices into enforcement, suggestions into structure, and a collection of commands into a system of skills.
The Release Window
February 1‒February 7, 2026
From the v0.2.0 tag to commit 2227f99.
78 files changed in the migration commit alone.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-migration-commands-to-skills","level":2,"title":"The Migration: Commands to Skills","text":"
The largest single change was the migration from .claude/commands/*.md to .claude/skills/*/SKILL.md.
This was not a rename: It was a rethinking of how AI agents discover and execute project-specific workflows.
| Aspect | Commands (before) | Skills (after) |
| --- | --- | --- |
| Structure | Flat files in one directory | Directory-per-skill with SKILL.md |
| Description | Optional, often vague | Required, doubles as activation trigger |
| Quality gates | None | "Before X-ing" pre-flight checklist |
| Negative triggers | None | "When NOT to Use" in every skill |
| Examples | Rare | Good/bad pairs in every skill |
| Average length | ~15 lines | ~80 lines |
The description field became the single most important line in each skill. In the old system, descriptions were titles. In the new system, they are activation conditions: The text the platform reads to decide whether to surface a skill for a given prompt.
A description that says \"Show context summary\" activates too broadly or not at all. A description that says \"Show context summary. Use at session start or when unclear about current project state\" activates at the right moment.
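Concretely, the difference is a few words of frontmatter. A sketch of a SKILL.md header in the second style (the skill name here is illustrative, not a real ctx skill):

```markdown
---
name: ctx-summary
description: >
  Show context summary. Use at session start or when unclear
  about current project state.
---
```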
78 files changed. 1,915 insertions. Not because the skills got bloated; because they got specific.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-skill-sweep","level":2,"title":"The Skill Sweep","text":"
After the structural migration, every skill was rewritten in a single session: All 21 of them.
The rewrite was guided by a pattern that emerged during the process itself: a repeatable anatomy that effective skills share regardless of their purpose:
Before X-ing: Pre-flight checks that prevent premature execution
When to Use: Positive triggers that narrow activation
When NOT to Use: Negative triggers that prevent misuse
Usage Examples: Invocation patterns the agent can pattern-match
Quality Checklist: Verification before claiming completion
The Anatomy of a Skill That Works post covers the details. What matters for the release story is the result:
Zero skills with quality gates became twenty;
Zero skills with negative triggers became twenty;
Three skills with examples became twenty.
The Skill Trilogy as Design Spec
The three blog posts written during this window:
Skills That Fight the Platform,
You Can't Import Expertise,
and The Anatomy of a Skill That Works...
... were not retrospective documentation. They were written during the rewrite, and the lessons fed back into the skills as they were being built.
The blog was the design document.
The skills were the implementation.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-consolidation-sweep","level":2,"title":"The Consolidation Sweep","text":"
The unglamorous work. The kind you only appreciate when you try to change something later and it just works.
| What | Why It Matters |
| --- | --- |
| Constants consolidation | Magic strings replaced with semantic constants |
| Variable deshadowing | Eliminated subtle scoping bugs |
| File splits | Modules that were doing too much, broken apart |
| Godoc standardization | Every exported function documented to convention |
This is the work that doesn't get a changelog entry but makes every future commit easier. When a new contributor (human or AI) reads the codebase, they find consistent patterns instead of accumulated drift.
The consolidation was not an afterthought. It was scheduled deliberately, with the same priority as features: The 3:1 ratio that emerged during v0.2.0 development became an explicit practice:
Three feature sessions;
One consolidation session.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-ear-framework","level":2,"title":"The E/A/R Framework","text":"
On February 4th, we adopted the E/A/R classification as the official standard for evaluating skills:
| Category | Meaning | Target |
|---|---|---|
| Expert | Knowledge Claude does not have | >70% |
| Activation | When/how to trigger | ~20% |
| Redundant | What Claude already knows | <10% |
This came from reviewing approximately 30 external skill files and discovering that most were redundant with Claude's built-in system prompt. Only about 20% had salvageable content, and even those yielded just a few heuristics each.
The E/A/R framework gave us a concrete, testable criterion:
A good skill is Expert knowledge minus what Claude already knows.
If more than 10% of a skill restates platform defaults, it is creating noise, not signal.
Every skill in v0.3.0 was evaluated against this framework. Several were deleted. The survivors are leaner and more focused.
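The "concrete, testable criterion" can be made mechanical with a few lines of Go. This is an illustrative sketch, not ctx tooling: it assumes each line of a skill has been hand-tagged E, A, or R during review, and only does the arithmetic against the targets in the table.

```go
package main

import "fmt"

// earRatios computes the fraction of review-tagged lines in each
// category: E (expert), A (activation), R (redundant).
func earRatios(labels []rune) (e, a, r float64) {
	counts := map[rune]int{}
	for _, l := range labels {
		counts[l]++
	}
	total := float64(len(labels))
	if total == 0 {
		return 0, 0, 0
	}
	return float64(counts['E']) / total,
		float64(counts['A']) / total,
		float64(counts['R']) / total
}

func main() {
	// Hypothetical review of a 20-line skill: 16 expert, 3 activation, 1 redundant.
	e, a, r := earRatios([]rune("EEEEEEEEEEEEEEEEAAAR"))
	fmt.Printf("E=%.0f%% A=%.0f%% R=%.0f%%\n", e*100, a*100, r*100)
	// Targets from the table: Expert >70%, Redundant <10%.
	fmt.Println("meets targets:", e > 0.70 && r < 0.10)
}
```

A skill that fails the check is either deleted or trimmed until the redundant fraction drops below the threshold.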
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#backup-and-monitoring-infrastructure","level":2,"title":"Backup and Monitoring Infrastructure","text":"
A tool that manages your project's memory needs ops maturity.
v0.3.0 added two pieces of infrastructure that reflect this:
Backup staleness hook: A UserPromptSubmit hook that checks whether the last .context/ backup is more than two days old. If it is, and the SMB mount is available, it reminds the user. No cron job running when nobody is working. No redundant backups when nothing has changed.
Context size checkpoint: A PreToolUse hook that estimates current context window usage and warns when the session is getting heavy. This hooks into the attention budget philosophy: Degradation is expected, but it should be visible.
Both hooks use $CLAUDE_PROJECT_DIR instead of hardcoded paths, a migration triggered by a username rename that broke every absolute path in the hook configuration. That migration (replacing /home/user/... with \"$CLAUDE_PROJECT_DIR\"/.claude/hooks/...) was one of those changes that seems trivial but prevents an entire category of future failures.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#the-numbers","level":2,"title":"The Numbers","text":"Metric v0.2.0 v0.3.0 Skills (was \"commands\") 11 21 Skills with quality gates 0 21 Skills with \"When NOT to Use\" 0 21 Average skill body ~15 lines ~80 lines Hooks using $CLAUDE_PROJECT_DIR 0 All Documentation commits -- 35+ Feature/fix commits -- ~15
That ratio (35+ documentation and quality commits to ~15 feature commits) is the defining characteristic of this release:
This release is not a failure to ship features.
It is the deliberate choice to make the existing features reliable.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#what-v030-means","level":2,"title":"What v0.3.0 Means","text":"
v0.1.0 asked: \"Can we give AI persistent memory?\"
v0.2.0 asked: \"Can we make that memory accessible to humans too?\"
v0.3.0 asks a different question: \"Can we make the quality self-enforcing?\"
The answer is not a feature: It is a practice:
Skills with quality gates enforce pre-flight checks.
Negative triggers prevent misuse without human intervention.
The E/A/R framework ensures skills contain signal, not noise.
Consolidation sessions are scheduled, not improvised.
Hook infrastructure makes degradation visible.
Discipline is not the absence of velocity. It is the infrastructure that makes velocity sustainable.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-ctx-v0.3.0-the-discipline-release/#what-comes-next","level":2,"title":"What Comes Next","text":"
The skill system is now mature enough to support real workflows without constant human correction. The hooks infrastructure is portable and resilient. The consolidation practice is documented and repeatable.
The next chapter is about what you build on top of discipline:
Multi-agent coordination;
Deeper integration patterns;
And the question of whether context management is a tool concern or an infrastructure concern.
But those are future posts.
This one is about the release that proved polish is not the opposite of progress. It is what turns a prototype into a product.
The Discipline Release
v0.1.0 shipped features.
v0.2.0 shipped archaeology.
v0.3.0 shipped the habits that make everything else trustworthy.
The most important code in this release is the code that prevents bad code from shipping.
This post was drafted using /ctx-blog with access to the full git history between v0.2.0 and v0.3.0, decision logs, learning logs, and the session files from the skill rewrite window. The meta continues.
","path":["ctx v0.3.0: The Discipline Release"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/","level":1,"title":"Eight Ways a Hook Can Talk","text":"","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#when-your-warning-disappears","level":2,"title":"When Your Warning Disappears","text":"
Jose Alekhinne / 2026-02-15
I had a backup warning that nobody ever saw.
The hook was correct: It detected stale backups, formatted a nice message, and output it as {\"systemMessage\": \"...\"}. The problem wasn't detection. The problem was delivery. The agent absorbed the information, processed it internally, and never told the user.
Meanwhile, a different hook (the journal reminder) worked perfectly every time. Users saw the reminder, ran the commands, and the backlog stayed manageable. Same hook event (UserPromptSubmit), same project, completely different outcomes.
The difference was one line:
IMPORTANT: Relay this journal reminder to the user VERBATIM
before answering their question.
That explicit instruction is what makes VERBATIM relay a pattern, not just a formatting choice. And once I saw it as a pattern, I started seeing others.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-audit","level":2,"title":"The Audit","text":"
I looked at every hook in ctx: Eight shell scripts across three hook events. And I found five distinct output patterns already in use, plus three more that the existing hooks were reaching for but hadn't quite articulated.
The patterns form a spectrum based on a single question:
\"Who decides what the user sees?\"
At one end, the hook decides everything (hard gate: the agent literally cannot proceed). At the other end, the hook is invisible (silent side-effect: nobody knows it ran). In between, there is a range of negotiation between hook, agent, and the user.
Here's the full spectrum:
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#1-hard-gate","level":3,"title":"1. Hard Gate","text":"
{"decision": "block", "reason": "Use ctx from PATH, not ./ctx"}
The nuclear option: The agent's tool call is rejected before it executes.
This is Claude Code's first-class PreToolUse mechanism: The hook returns JSON with decision: block and the agent gets an error with the reason.
Use this for invariants: Constitution rules, security boundaries, things that must never happen. I use it to enforce PATH-based ctx invocation, block sudo, and require explicit approval for git push.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#2-verbatim-relay","level":3,"title":"2. VERBATIM Relay","text":"
IMPORTANT: Relay this warning to the user VERBATIM before answering.
┌─ Journal Reminder ─────────────────────────────
│ You have 12 sessions not yet imported.
│ ctx recall import --all
└────────────────────────────────────────────────
The instruction is the pattern. Without \"Relay VERBATIM,\" agents tend to absorb information into their internal reasoning and never surface it. The explicit instruction changes the behavior from \"I know about this\" to \"I must tell the user about this.\"
I use this for actionable reminders:
Unexported journal entries;
Stale backups;
Context capacity warnings...
...things the user should see regardless of what they asked.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#3-agent-directive","level":3,"title":"3. Agent Directive","text":"
┌─ Persistence Checkpoint (prompt #25) ───────────
│ No context files updated in 15+ prompts.
│ Have you discovered learnings worth persisting?
└──────────────────────────────────────────────────
A nudge, not a command. The hook tells the agent something; the agent decides what (if anything) to tell the user. This is right for behavioral nudges: \"you haven't saved context in a while\" doesn't need to be relayed verbatim, but the agent should consider acting on it.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#4-silent-context-injection","level":3,"title":"4. Silent Context Injection","text":"
ctx agent --budget 4000 2>/dev/null || true\n
Pure background enrichment. The agent's context window gets project information injected on every tool call, with no visible output. Neither the agent nor the user sees the hook fire, but the agent makes better decisions because of the context.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#5-silent-side-effect","level":3,"title":"5. Silent Side-Effect","text":"
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
Do work, say nothing. Temp file cleanup on session end. Logging. Marker file management. The action is the entire point; no one needs to know.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-patterns-we-dont-have-yet","level":2,"title":"The Patterns We Don't Have Yet","text":"
Three more patterns emerged from the gaps in the existing hooks.
Conditional relay: \"Relay this, but only if the user's question is about X.\" This pattern avoids noise when the warning isn't relevant. It's more fragile (depends on agent judgment) but less annoying.
Suggested action: \"Here's a problem, and here's the exact command to fix it. Ask the user before running it.\" This pattern goes beyond a nudge by giving the agent a concrete proposal, but still requires human approval.
Escalating severity: INFO gets absorbed silently. WARN gets mentioned at the next natural pause. CRITICAL gets the VERBATIM treatment. This pattern introduces a protocol for hooks that produce output at different urgency levels, so they don't all compete for the user's attention.
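One way to sketch the escalation protocol in Go. The three levels come from the paragraph above; the exact relay phrasings are invented for illustration, not a ctx API.

```go
package main

import "fmt"

type severity int

const (
	info severity = iota
	warn
	critical
)

// render maps a hook message to an output pattern by urgency:
// INFO is absorbed by the agent, WARN is a nudge, CRITICAL demands
// verbatim relay to the user.
func render(s severity, msg string) string {
	switch s {
	case critical:
		return "IMPORTANT: Relay this to the user VERBATIM: " + msg
	case warn:
		return "Consider surfacing to the user: " + msg
	default:
		return msg // absorbed silently into agent context
	}
}

func main() {
	fmt.Println(render(critical, "backup is 12 days old"))
}
```

A shared helper like this would also keep the relay phrasing consistent across hooks, so they compete for attention on severity rather than on wording.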
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-principle","level":2,"title":"The Principle","text":"
Hooks are the boundary between your environment and the agent's reasoning.
A hook that detects a problem but can't communicate it effectively is the same as no hook at all.
The format of your output is a design decision with real consequences:
Use a hard gate and the agent can't proceed (good for invariants, frustrating for false positives)
Use VERBATIM relay and the user will see it (good for reminders, noisy if overused)
Use an agent directive and the agent might act (good for nudges, unreliable for critical warnings)
Use silent injection and nobody knows (good for enrichment, invisible when it breaks)
Choose deliberately. And, when in doubt, write the word VERBATIM.
The full pattern catalog with decision flowchart and implementation examples is in the Hook Output Patterns recipe.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/","level":1,"title":"Version Numbers Are Lagging Indicators","text":"","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#why-ctxs-journal-site-runs-on-a-v0021-tool","level":2,"title":"Why ctx's Journal Site Runs on a v0.0.21 Tool","text":"
Jose Alekhinne / 2026-02-15
Would You Ship Production Infrastructure on a v0.0.21 Dependency?
Most engineers wouldn't. Version numbers signal maturity. Pre-1.0 means unstable API, missing features, risk.
But version numbers tell you where a project has been. They say nothing about where it's going.
I just bet ctx's entire journal site on a tool that hasn't hit v0.1.0.
Here's why I'd do it again.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-problem","level":2,"title":"The Problem","text":"
When v0.2.0 shipped the journal system, the pipeline was clear:
Export sessions to Markdown;
Enrich them with YAML frontmatter;
And render them into something browsable.
The first two steps were solved; the third needed a tool.
The journal entries are standard Markdown with YAML frontmatter, tables, and fenced code blocks. That is the entire format:
No JSX;
No shortcodes;
No custom templating.
Just Markdown rendered well.
The requirements are modest:
Read a configuration file (such as mkdocs.yml);
Render Markdown with extensions (admonitions, tabs, tables);
Search;
Handle 100+ files without choking on incremental rebuilds;
Look good out of the box;
Not lock me in.
The obvious candidates were as follows:
| Tool | Language | Strengths | Pain Points |
|---|---|---|---|
| Hugo | Go | Blazing fast, mature | Templating is painful; Go templates fight you on anything non-trivial |
| Astro | JS/TS | Modern, flexible | JS ecosystem overhead; overkill for a docs site |
| MkDocs + Material | Python | Beautiful defaults, massive community (22k+ stars) | Slow incremental rebuilds on large sites; limited extensibility model |
| Zensical | Python | Built to fix MkDocs' limits; 4-5x faster rebuilds | v0.0.21; module system not yet shipped |
The instinct was Hugo. Same language as ctx. Fast. Well-established.
But instinct is not analysis. I picked the one with the lowest version number.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-evaluation","level":2,"title":"The Evaluation","text":"
Here is what I actually evaluated, in order:
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#1-the-team","level":3,"title":"1. The Team","text":"
Zensical is built by squidfunk: The same person behind Material for MkDocs, the most popular MkDocs theme with 22,000+ stars. It powers documentation sites for projects across every language and framework.
This is not someone learning how to build static site generators.
This is someone who spent years understanding exactly where MkDocs breaks and decided to fix it from the ground up.
They did not build zensical because MkDocs was bad: They built it because MkDocs hit a ceiling:
Incremental rebuilds: 4-5x faster during serve. When you have hundreds of journal entries and you edit one, the difference between \"rebuild everything\" and \"rebuild this page\" is the difference between a usable workflow and a frustrating one.
Large site performance: Specifically designed for tens of thousands of pages. The journal grows with every session. A tool that slows down as content accumulates is a tool you will eventually replace.
A proven team starting fresh is more predictable than an unproven team at v3.0.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#2-the-architecture","level":3,"title":"2. The Architecture","text":"
Zensical is investing in a Rust-based Markdown parser with CommonMark support. That signals something about the team's priorities:
Performance foundations first; features second.
ctx's journal will grow:
Every exported session adds files.
Every enrichment pass adds metadata.
Choosing a tool that gets slower as you add content means choosing to migrate later.
Choosing one built for scale means the decision holds.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#3-the-migration-path","level":3,"title":"3. The Migration Path","text":"
Zensical reads mkdocs.yml natively. If it doesn't work out, I can move back to MkDocs + Material with zero content changes:
The Markdown is standard;
The frontmatter is standard;
The configuration is compatible.
This is the infrastructure pattern again: The same way ZNC decouples presence from the client, zensical decouples rendering from the generator:
The Markdown is yours.
The frontmatter is standard YAML.
The configuration is MkDocs-compatible.
You are not locked into anything except your own content.
No lock-in is not a feature: It's a design philosophy:
It's the same reason ctx uses plain Markdown files in .context/ instead of a database: the format should outlive the tool.
Lock-in Is the Real Risk, Not Version Numbers
A mature tool with a proprietary format is riskier than a young tool with a standard one. Version numbers measure time invested. Portability measures respect for the user.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#4-the-dependency-tree","level":3,"title":"4. The Dependency Tree","text":"
Here is what pip install zensical actually pulls in:
click
Markdown
Pygments
pymdown-extensions
PyYAML
Only five dependencies. All well-known. No framework bloat. No bundler. No transpiler. No node_modules black hole.
3k GitHub stars at v0.0.21 is strong early traction for a pre-1.0 project.
The dependency tree is thin: No bloat.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#5-the-fit","level":3,"title":"5. The Fit","text":"
This is the same principle behind the attention budget: do not overfit the tool to hypothetical requirements. The right amount of capability is the minimum needed for the current task.
Hugo is a powerful static site generator. It is also a powerful templating engine, a powerful asset pipeline, and a powerful taxonomy system. For rendering Markdown journals, that power is overhead:
It is the complexity you pay for but never use.
ctx's journal files are standard Markdown with YAML frontmatter, tables, and fenced code blocks. That is exactly the sweet spot Zensical inherits from Material for MkDocs:
No custom plugins needed;
No special syntax;
No templating gymnastics.
The requirements match the capabilities: Not the capabilities that are promised, but the ones that exist today, at v0.0.21.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-caveat","level":2,"title":"The Caveat","text":"
It would be dishonest not to mention what's missing.
The module system for third-party extensions opens in early 2026.
If ctx ever needs custom plugins (for example, auto-linking session IDs, rendering special journal metadata, etc.) that infrastructure isn't there yet.
The installation experience is rough:
We discovered this firsthand: pip install zensical often fails on macOS (system Python stubs, Homebrew's PEP 668 restrictions). The answer is pipx, which creates an isolated environment with the correct Python version automatically.
That kind of friction is typical for young Python tooling, and it is documented in the Getting Started guide.
And 3,000 stars at v0.0.21 is strong early traction, but it's still early: The community is small. When something breaks, you're reading source code, not documentation.
These are real costs. I chose to pay them because the alternative costs are higher.
For example:
Hugo's templating pain would cost me time on every site change.
Astro's JS ecosystem would add complexity I don't need.
MkDocs would work today but hit scaling walls tomorrow.
Zensical's costs are front-loaded and shrinking.
The others compound.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-evaluation-framework","level":2,"title":"The Evaluation Framework","text":"
For anyone facing a similar choice, here is the framework that emerged:
| Signal | What It Tells You | Weight |
|---|---|---|
| Team track record | Whether the architecture will be sound | High |
| Migration path | Whether you can leave if wrong | High |
| Current fit | Whether it solves your problem today | High |
| Dependency tree | How much complexity you're inheriting | Medium |
| Version number | How long the project has existed | Low |
| Star count | Community interest (not quality) | Low |
| Feature list | What's possible (not what you need) | Low |
The bottom three are the metrics most engineers optimize for.
The top four are the ones that predict whether you'll still be happy with the choice in a year.
Features You Don't Need Are Not Free
Every feature in a dependency is code you inherit but don't control.
A tool with 200 features where you use 5 means 195 features worth of surface area for bugs, breaking changes, and security issues that have nothing to do with your use case.
Fit is the inverse of feature count.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-broader-pattern","level":2,"title":"The Broader Pattern","text":"
This is part of a theme I keep encountering in this project:
Leading indicators beat lagging indicators.
| Domain | Lagging Indicator | Leading Indicator |
|---|---|---|
| Tooling | Version number, star count | Team track record, architecture |
| Code quality | Test coverage percentage | Whether tests catch real bugs |
| Context persistence | Number of files in .context/ | Whether the AI makes fewer mistakes |
| Skills | Number of skills created | Whether each skill fires at the right time |
| Consolidation | Lines of code refactored | Whether drift stops accumulating |
Version numbers, star counts, coverage percentages, file counts...
...these are all measures of effort expended.
They say nothing about value delivered.
The question is never \"how mature is this tool?\"
The question is \"does this tool's trajectory intersect with my needs?\"
Zensical's trajectory:
A proven team fixing known problems,
in a proven architecture,
with a standard format,
and no lock-in.
ctx's needs:
Render standard Markdown into a browsable site, at scale, without complexity.
The intersection is clean; the version number is noise.
This is the same kind of decision that shows up throughout ctx:
Skills that fight the platform taught that the best integration extends existing behavior, not replaces it.
You can't import expertise taught that tools should grow from your project's actual needs, not from feature checklists.
Context as infrastructure argues that the format should outlive the tool; and, zensical honors that principle by reading standard Markdown and standard MkDocs configuration.
If You Remember One Thing From This Post...
Version numbers measure where a project has been.
The team and the architecture tell you where it's going.
A v0.0.21 tool built by the right team on the right foundations is a safer bet than a v5.0 tool that doesn't fit your problem.
Bet on trajectories, not timestamps.
This post started as an evaluation note in ideas/ and a separate decision log. The analysis held up. The two merged into one. The meta continues.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/","level":1,"title":"ctx v0.6.0: The Integration Release","text":"","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#two-commands-to-persistent-memory","level":2,"title":"Two Commands to Persistent Memory","text":"
Jose Alekhinne / 2026-02-16
What Changed?
ctx is now a Claude Code plugin: Two commands, no build step.
Before v0.6.0, wiring ctx into Claude Code meant you had to:
Understand which shell scripts called which Go commands;
Hope nothing broke when Claude Code updated its hook format.
v0.6.0 ends that era: ctx ships as a Claude Marketplace plugin:
Hooks and skills served directly from source, installed with a single command, updated by pulling the repo. The tool that gives AI persistent memory is now as easy to install as the AI itself.
But the plugin conversion was not just a packaging change: It was the forcing function that rewrote every shell hook in Go, eliminated the jq dependency, enabled go test coverage for hook logic, and made distribution a solved problem.
When you fix how something ships, you end up fixing how it is built.
The Release Window
February 15-February 16, 2026
From the v0.3.0 tag to commit a3178bc:
109 commits.
334 files changed.
Version jumped from 0.3.0 to 0.6.0 to signal the magnitude.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#before-six-shell-scripts-and-a-prayer","level":2,"title":"Before: Six Shell Scripts and a Prayer","text":"
v0.3.0 had six hook scripts. Each was a Bash file that shelled out to ctx subcommands, parsed JSON with jq, and wired itself into Claude Code's hook system via .claude/hooks/:
jq was a hard dependency: No jq, no hooks. macOS ships without it.
No test coverage: Shell scripts were tested manually or not at all.
Fragile deployment: ctx init had to scaffold .claude/hooks/ and .claude/skills/ with the right paths, permissions, and structure.
Version drift: Users who installed once never got hook updates unless they re-ran ctx init.
The shell scripts were the right choice for prototyping. They were the wrong choice for distribution.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#after-one-plugin-zero-shell-scripts","level":2,"title":"After: One Plugin, Zero Shell Scripts","text":"
v0.6.0 replaces all six scripts with ctx system subcommands compiled into the binary:
| Shell Script | Go Subcommand |
|---|---|
| check-context-size.sh | ctx system check-context-size |
| check-persistence.sh | ctx system check-persistence |
| check-journal.sh | ctx system check-journal |
| post-commit.sh | ctx system post-commit |
| block-non-path-ctx.sh | ctx system block-non-path-ctx |
| cleanup-tmp.sh | ctx system cleanup-tmp |
The plugin's hooks.json wires them to Claude Code events:
{
  "PreToolUse": [
    {"matcher": "Bash", "command": "ctx system block-non-path-ctx"},
    {"matcher": ".*", "command": "ctx agent --budget 4000"}
  ],
  "PostToolUse": [
    {"matcher": "Bash", "command": "ctx system post-commit"}
  ],
  "UserPromptSubmit": [
    {"command": "ctx system check-context-size"},
    {"command": "ctx system check-persistence"},
    {"command": "ctx system check-journal"}
  ],
  "SessionEnd": [
    {"command": "ctx system cleanup-tmp"}
  ]
}
No jq. No shell scripts. No .claude/hooks/ directory to manage.
The hooks are Go functions with tests, compiled into the same binary you already have.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#the-plugin-model","level":2,"title":"The Plugin Model","text":"
The ctx plugin lives at .claude-plugin/marketplace.json in the repo.
Claude Code's marketplace system handles discovery and installation:
Skills are served directly from internal/assets/claude/skills/; there is no build step, no make plugin, no generated artifacts.
This means:
Install is two commands: Not \"clone, build, copy, configure.\"
Updates are automatic: Pull the repo; the plugin reads from source.
Skills and hooks are versioned together: No drift between what the CLI expects and what the plugin provides.
ctx init is tool-agnostic: It creates .context/ and nothing else. No .claude/ scaffolding, no assumptions about which AI tool you use.
That last point matters:
Before v0.6.0, ctx init tried to set up Claude Code integration as part of initialization. That coupled the context system to a specific tool.
Now, ctx init gives you persistent context. The plugin gives you Claude Code integration. They compose; they don't depend.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#beyond-the-plugin-what-else-shipped","level":2,"title":"Beyond the Plugin: What Else Shipped","text":"
The plugin conversion dominated the release, but 109 commits covered more ground.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#obsidian-vault-export","level":3,"title":"Obsidian Vault Export","text":"
ctx journal obsidian
Generates a full Obsidian vault from enriched journal entries: wikilinks, MOC (Map of Content) pages, and graph-optimized cross-linking. If you already use Obsidian for notes, your AI session history now lives alongside everything else.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#encrypted-scratchpad","level":3,"title":"Encrypted Scratchpad","text":"
ctx pad edit "DATABASE_URL=postgres://..."
ctx pad show
AES-256-GCM encrypted storage for sensitive one-liners.
The encrypted blob commits to git; the key stays in .gitignore.
This is useful for connection strings, API keys, and other values that need to travel with the project without appearing in plaintext.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#security-hardening","level":3,"title":"Security Hardening","text":"
Three medium-severity findings from a security audit are now closed:
| Finding | Fix |
|---|---|
| Path traversal via --context-dir | Boundary validation: operations cannot escape project root (M-1) |
| Symlink following in .context/ | Lstat() check before every file read/write (M-2) |
| Predictable temp file paths | User-specific temp directory under $XDG_RUNTIME_DIR (M-3) |
Plus a new /sanitize-permissions skill that audits settings.local.json for overly broad Bash permissions.
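The boundary-validation fix (M-1) has a well-known shape in Go. This sketch illustrates the technique, not ctx's code: resolve the candidate path and require that it stays under the project root.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinRoot reports whether path, resolved relative to root, stays
// inside root. Rejecting anything else blocks ".." traversal through
// a user-supplied directory flag.
func withinRoot(root, path string) bool {
	abs, err := filepath.Abs(filepath.Join(root, path))
	if err != nil {
		return false
	}
	rootAbs, err := filepath.Abs(root)
	if err != nil {
		return false
	}
	return abs == rootAbs ||
		strings.HasPrefix(abs, rootAbs+string(filepath.Separator))
}

func main() {
	fmt.Println(withinRoot("/proj", ".context"))
	fmt.Println(withinRoot("/proj", "../etc/passwd"))
}
```

Note that this only closes the lexical traversal hole; the symlink finding (M-2) needs the separate Lstat() check, because a link inside the root can still point outside it.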
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#hooks-that-know-when-to-be-quiet","level":3,"title":"Hooks That Know When to Be Quiet","text":"
A subtle but important fix: hooks now no-op before ctx init has run.
Previously, a fresh clone with no .context/ would trigger hook errors on every prompt. Now, hooks detect the absence of a context directory and exit silently. Similarly, ctx init treats a .context/ directory containing only logs as uninitialized and skips the --overwrite prompt.
Small changes. Large reduction in friction for new users.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#the-numbers","level":2,"title":"The Numbers","text":"Metric v0.3.0 v0.6.0 Skills 21 25 Shell hook scripts 6 0 Go system subcommands 0 6 External dependencies (hooks) jq, bash none Lines of Go ~14,000 ~37,000 Plugin install commands n/a 2 Security findings (open) 3 0 ctx init creates .claude/ yes no
The line count tripled. Most of that is documentation site HTML, Obsidian export logic, and the scratchpad encryption module.
The core CLI grew modestly; the ecosystem around it grew substantially.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#what-does-v060-mean-for-ctx","level":2,"title":"What Does v0.6.0 Mean for ctx?","text":"
v0.1.0 asked: \"Can we give AI persistent memory?\"
v0.2.0 asked: \"Can we make that memory accessible to humans too?\"
v0.3.0 asked: \"Can we make the quality self-enforcing?\"
v0.6.0 asks: \"Can someone else actually use this?\"
A tool that requires cloning a repo, building from source, and manually wiring hooks into the right directories is a tool for its author.
A tool that installs with two commands from a marketplace is a tool for everyone.
The version jumped from 0.3.0 to 0.6.0 because the delta is not incremental: The shell-to-Go rewrite, the plugin model, the security hardening, and the tool-agnostic init: Together, they change what ctx is: Not a different tool, but a tool that is finally ready to leave the workshop.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#what-comes-next","level":2,"title":"What Comes Next","text":"
The plugin model opens the door to distribution patterns that were not possible before. Marketplace discovery means new users find ctx without reading a README. Plugin updates mean existing users get improvements without rebuilding.
The next chapter is about what happens when persistent context is easy to install: Adoption patterns, multi-project workflows, and whether the .context/ convention can become infrastructure that other tools build on.
But those are future posts.
This one is about the release that turned a developer tool into a distributable product: two commands, zero shell scripts, and a presence on the Claude Marketplace.
v0.3.0 shipped discipline. v0.6.0 shipped the front door.
The most important code in this release is the code you never have to copy.
This post was drafted using /ctx-blog-changelog with access to the full git history between v0.3.0 and v0.6.0, release notes, and the plugin conversion PR. The meta continues.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/","level":1,"title":"Code Is Cheap. Judgment Is Not.","text":"","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#why-ai-replaces-effort-not-expertise","level":2,"title":"Why AI Replaces Effort, Not Expertise","text":"
Jose Alekhinne / February 17, 2026
Are You Worried About AI Taking Your Job?
You might be confusing the thing that's cheap with the thing that's valuable.
I keep seeing the same conversation: Engineers, designers, writers: all asking the same question with the same dread:
\"What happens when AI can do what I do?\"
The question is wrong:
AI does not replace workers;
AI replaces unstructured effort.
The distinction matters, and everything I have learned building ctx reinforces it.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-three-confusions","level":2,"title":"The Three Confusions","text":"
People who feel doomed by AI usually confuse three things:
People confuse... With... Effort Value Typing Thinking Production Judgment
Effort is time spent.
Value is the outcome that time produces.
They are not the same; they never were.
AI just makes the gap impossible to ignore.
Typing is mechanical: Thinking is directional.
An AI can type faster than any human. Yet, it cannot decide what to type without someone framing the problem, sequencing the work, and evaluating the result.
Production is making artifacts. Judgment is knowing:
which artifacts to make,
in what order,
to what standard,
and when to stop.
AI floods the system with production capacity; it does not flood the system with judgment.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#code-is-nothing","level":2,"title":"Code Is Nothing","text":"
This sounds provocative until you internalize it:
Code is cheap. Artifacts are cheap.
An AI can generate a thousand lines of working code in literal minutes:
It can scaffold a project, write tests, build a CI pipeline, draft documentation. The raw production of software artifacts is no longer the bottleneck.
So, what is not cheap?
Taste: knowing what belongs and what does not
Framing: turning a vague goal into a concrete problem
Sequencing: deciding what to build first and why
Fanning out: breaking work into parallel streams that converge
Acceptance criteria: defining what \"done\" looks like before starting
Judgment: the thousand small decisions that separate code that works from code that lasts
These are the skills that direct production: Human skills.
Not because AI is incapable of learning them, but because they require something AI does not have:
temporal accountability for generated outcomes.
That is, you cannot hold AI accountable for the $#!% it generated three months ago. A human, on the other hand, will always be accountable.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-evidence-from-building-ctx","level":2,"title":"The Evidence From Building ctx","text":"
I did not arrive at this conclusion theoretically.
I arrived at it by building a tool with an AI agent for three weeks and watching exactly where a human touch mattered.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#yolo-mode-proved-production-is-cheap","level":3,"title":"YOLO Mode Proved Production Is Cheap","text":"
In Building ctx Using ctx, I documented the YOLO phase: auto-accept everything, let the AI ship features at full speed. It produced 14 commands in a week. Impressive output.
The code worked. The architecture drifted. Magic strings accumulated. Conventions diverged. The AI was producing at a pace no human could match, and every artifact it produced was a small bet that nobody was evaluating.
Production without judgment is not velocity. It is debt accumulation at breakneck speed.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-31-ratio-proved-judgment-has-a-cadence","level":3,"title":"The 3:1 Ratio Proved Judgment Has a Cadence","text":"
In The 3:1 Ratio, the git history told the story:
Three sessions of forward momentum followed by one session of deliberate consolidation. The consolidation session is where the human applies judgment: reviewing what the AI built, catching drift, realigning conventions.
The AI does the refactoring. The human decides what to refactor and when to stop.
Without the human, the AI will refactor forever, improving things that do not matter and missing things that do.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-attention-budget-proved-framing-is-scarce","level":3,"title":"The Attention Budget Proved Framing Is Scarce","text":"
In The Attention Budget, I explained why more context makes AI worse, not better. Every token competes for attention: Dump everything in and the AI sees nothing clearly.
This is a framing problem: The human's job is to decide what the AI should focus on: what to include, what to exclude, what to emphasize.
ctx agent --budget 4000 is not just a CLI flag: It is a forcing function for human judgment about relevance.
The AI processes. The human curates.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#skills-design-proved-taste-is-load-bearing","level":3,"title":"Skills Design Proved Taste Is Load-Bearing","text":"
The skill trilogy (You Can't Import Expertise, The Anatomy of a Skill That Works) showed that the difference between a useful skill and a useless one is not craftsmanship:
It is taste.
A well-crafted skill with the wrong focus is worse than no skill at all: It consumes the attention budget with generic advice while the project-specific problems go unchecked.
The E/A/R framework (Expert, Activation, Redundant) is a judgment tool: The AI cannot apply it to itself. The human evaluates what the AI already knows, what it needs to be told, and what is noise.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#automation-discipline-proved-restraint-is-a-skill","level":3,"title":"Automation Discipline Proved Restraint Is a Skill","text":"
In Not Everything Is a Skill, the lesson was that the urge to automate is not the need to automate. A useful prompt does not automatically deserve to become a slash command.
The human applies judgment about frequency, stability, and attention cost.
The AI can build the skill. Only the human can decide whether it should exist.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#defense-in-depth-proved-boundaries-require-judgment","level":3,"title":"Defense in Depth Proved Boundaries Require Judgment","text":"
In Defense in Depth, the entire security model for unattended AI agents came down to: markdown is not a security boundary. Telling an AI \"don't do bad things\" is production (of instructions). Setting up an unprivileged user in a network-isolated container is judgment (about risk).
The AI follows instructions. The human decides which instructions are enforceable and which are \"wishful thinking\".
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#parallel-agents-proved-scale-amplifies-the-gap","level":3,"title":"Parallel Agents Proved Scale Amplifies the Gap","text":"
In Parallel Agents and Merge Debt, the lesson was that multiplying agents multiplies output. But it also multiplies the need for judgment:
Five agents running in parallel produce five sessions of drift in one clock hour. The human who can frame tasks cleanly, define narrow acceptance criteria, and evaluate results quickly becomes the limiting factor.
More agents do not reduce the need for judgment. They increase it.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-two-reactions","level":2,"title":"The Two Reactions","text":"
When AI floods the system with cheap output, two things happen:
Those who only produce: panic. If your value proposition is \"I write code,\" and an AI writes code faster, cheaper, and at higher volume, then the math is unfavorable. Not because AI took your job, but because your job was never the code. It was the judgment around the code, and you were not exercising it.
Those who direct: accelerate. If your value proposition is \"I know what to build, in what order, to what standard,\" then AI is the best thing that ever happened to you: Production is no longer the bottleneck: Your ability to frame, sequence, evaluate, and course-correct is now the limiting factor on throughput.
The gap between these two is not talent: It is the awareness of where the value lives.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#what-this-means-in-practice","level":2,"title":"What This Means in Practice","text":"
If you are an engineer reading this, the actionable insight is not \"learn prompt engineering\" or \"master AI tools.\" It is:
Get better at the things AI cannot do.
AI does this well You need to do this Generate code Frame the problem Write tests Define acceptance criteria Scaffold projects Sequence the work Fix bugs from stack traces Evaluate tradeoffs Produce volume Exercise restraint Follow instructions Decide which instructions matter
The skills in the right column are not new. They are the same skills that have always separated senior engineers from junior ones.
AI did not create the distinction; it just made it load-bearing.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#if-anything-i-feel-empowered","level":2,"title":"If Anything, I Feel Empowered","text":"
I will end with something personal.
I am not worried: I am empowered.
Before ctx, I could think faster than I could produce:
Ideas sat in a queue.
The bottleneck was always \"I know what to build, but building it takes too long.\"
Now the bottleneck is gone. Poof!
Production is cheap.
The queue is clearing.
The limiting factor is how fast I can think, not how fast I can type.
That is not a threat: That is the best force multiplier I've ever had.
The people who feel threatened are mistaking the accelerator for the replacement:
AI does not replace the conductor; it gives them a bigger orchestra.
If You Remember One Thing From This Post...
Code is cheap. Judgment is not.
AI replaces unstructured effort, not directed expertise. The skills that matter now are the same skills that have always mattered: taste, framing, sequencing, and the discipline to stop.
The difference is that now, for the first time, those skills are the only bottleneck left.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-arc","level":2,"title":"The Arc","text":"
This post is a retrospective. It synthesizes the thread running through every previous entry in this blog:
Building ctx Using ctx showed that production without direction creates debt
Refactoring with Intent showed that slowing down is not the opposite of progress
The Attention Budget showed that curation outweighs volume
The skill trilogy showed that taste determines whether a tool helps or hinders
Not Everything Is a Skill showed that restraint is a skill in itself
Defense in Depth showed that instructions are not boundaries
The 3:1 Ratio showed that judgment has a schedule
Parallel Agents showed that scale amplifies the gap between production and judgment
Context as Infrastructure showed that the system you build for context is infrastructure, not conversation
From YOLO mode to defense in depth, the pattern is the same:
Production is the easy part;
Judgment is the hard part;
AI changed the ratio, not the rule.
This post synthesizes the thread running through every previous entry in this blog. The evidence is drawn from three weeks of building ctx with AI assistance, the decisions recorded in DECISIONS.md, the learnings captured in LEARNINGS.md, and the git history that tracks where the human mattered and where the AI ran unsupervised.
See also: When a System Starts Explaining Itself -- what happens after the arc: the first field notes from the moment the system starts compounding in someone else's hands.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/","level":1,"title":"Context as Infrastructure","text":"","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#why-your-ai-needs-a-filesystem-not-a-prompt","level":2,"title":"Why Your AI Needs a Filesystem, Not a Prompt","text":"
Jose Alekhinne / February 17, 2026
Where does your AI's knowledge live between sessions?
If the answer is \"in a prompt I paste at the start,\" you are treating context as a consumable. Something assembled, used, and discarded.
What if you treated it as infrastructure instead?
This post synthesizes a thread that has been running through every ctx blog post; from the origin story to the attention budget to the discipline release. The thread is this: context is not a prompt problem. It is an infrastructure problem. And the tools we build for it should look more like filesystems than clipboard managers.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-prompt-paradigm","level":2,"title":"The Prompt Paradigm","text":"
Most AI-assisted development treats context as ephemeral:
Start a session.
Paste your system prompt, your conventions, your current task.
Work.
Session ends. Everything evaporates.
Next session: paste again.
This works for short interactions. For sustained development (where decisions compound over days and weeks) it fails in three ways:
It does not persist: A decision made on Tuesday must be re-explained on Wednesday. A learning captured in one session is invisible to the next.
It does not scale: As the project grows, the \"paste everything\" approach hits the context window ceiling. You start triaging what to include, often cutting exactly the context that would have prevented the next mistake.
It does not compose: A system prompt is a monolith. You cannot load part of it, update one section, or share a subset with a different workflow. It is all or nothing.
The Copy-Paste Tax
Every session that starts with pasting a prompt is paying a tax:
The human time to assemble the context, the risk of forgetting something, and the silent assumption that yesterday's prompt is still accurate today.
Over 70+ sessions, that tax compounds into a significant maintenance burden: One that most developers absorb without questioning it.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-infrastructure-paradigm","level":2,"title":"The Infrastructure Paradigm","text":"
ctx takes a different approach:
Context is not assembled per-session; it is maintained as persistent files in a .context/ directory:
.context/\n CONSTITUTION.md # Inviolable rules\n TASKS.md # Current work items\n CONVENTIONS.md # Code patterns and standards\n DECISIONS.md # Architectural choices with rationale\n LEARNINGS.md # Gotchas and lessons learned\n ARCHITECTURE.md # System structure\n GLOSSARY.md # Domain terminology\n AGENT_PLAYBOOK.md # Operating manual for agents\n journal/ # Enriched session summaries\n archive/ # Completed work, cold storage\n
Each file has a single purpose;
Each can be loaded independently;
Each persists across sessions, tools, and team members.
This is not a novel idea. It is the same idea behind every piece of infrastructure software engineers already use.
The parallel is not metaphorical. Context files are infrastructure:
They are versioned (git tracks them);
They are structured (Markdown with conventions);
They have schemas (required fields for decisions and learnings);
And they have lifecycle management (archiving, compaction, indexing).
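As a concrete illustration (demo repo and file contents hypothetical), the ordinary toolchain already covers these infrastructure concerns; nothing ctx-specific is required:

```shell
# Context files are plain files, so standard infrastructure applies.
# (Demo repo and file contents are hypothetical.)
mkdir -p demo/.context && cd demo
git init -q
printf '## Use PostgreSQL\n' > .context/DECISIONS.md
git add .context
git -c user.email=ci@example.com -c user.name=ci commit -qm "record decision"

git log --oneline -- .context/DECISIONS.md   # versioned: per-file history
grep -rn "PostgreSQL" .context/              # structured: searchable plain text
```

The same commands work for any `.context/` file, which is the point: versioning, diffing, and search come for free.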
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#separation-of-concerns","level":2,"title":"Separation of Concerns","text":"
The most important design decision in ctx is not any individual feature. It is the separation of context into distinct files with distinct purposes.
A single CONTEXT.md file would be simpler to implement. It would also be impossible to maintain.
Why? Because different types of context have different lifecycles:
Context Type Changes Read By Load When Constitution Rarely Every session Always Tasks Every session Session start Always Conventions Weekly Before coding When writing code Decisions When decided When questioning When revisiting Learnings When learned When stuck When debugging Journal Every session Rarely When investigating
Loading everything into every session wastes the attention budget on context that is irrelevant to the current task. Loading nothing forces the AI to operate blind.
Separation of concerns allows progressive disclosure:
Load the minimum that matters for this moment, with the option to load more when needed.
# Session start: load the essentials\nctx agent --budget 4000\n\n# Deep investigation: load everything\ncat .context/DECISIONS.md\ncat .context/journal/2026-02-05-*.md\n
The filesystem is the index. File names, directory structure, and timestamps encode relevance. The AI does not need to read every file; it needs to know where to look.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-two-tier-persistence-model","level":2,"title":"The Two-Tier Persistence Model","text":"
ctx uses two tiers of persistence, and the distinction is architectural:
The curated tier is what the AI sees at session start. It is optimized for signal density:
Structured entries,
Indexed tables,
Reverse-chronological order (newest first, so the most relevant content survives truncation).
The full dump tier is for humans and for deep investigation. It contains everything: Enriched journals, archived tasks...
It is never autoloaded because its volume would destroy attention density.
This two-tier model is analogous to how traditional systems separate hot and cold storage:
The hot path (curated context) is optimized for read performance (measured not in milliseconds, but in tokens consumed per unit of useful information).
The cold path (journal) is optimized for completeness.
Nothing Is Ever Truly Lost
The full dump tier means that context does not need to be perfect: It just needs to be findable.
A decision that was not captured in DECISIONS.md can be recovered from the session transcript where it was discussed.
A learning that was not formalized can be found in the journal entry from that day.
The curated tier is the fast path: The full dump tier is the safety net.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#decision-records-as-first-class-citizens","level":2,"title":"Decision Records as First-Class Citizens","text":"
One of the patterns that emerged from ctx's own development is the power of structured decision records.
v0.1.0 allowed adding decisions as one-liners:
ctx add decision \"Use PostgreSQL\"\n
v0.2.0 enforced structure:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a reliable database for user data\" \\\n --rationale \"ACID compliance, team familiarity\" \\\n --consequence \"Need connection pooling, team training\"\n
The difference is not cosmetic:
A one-liner decision teaches the AI what was decided.
A structured decision teaches it why; and why is what prevents the AI from unknowingly reversing the decision in a future session.
This is infrastructure thinking:
Decisions are not notes. They are records with required fields, just like database rows have schemas.
The enforcement exists because incomplete records are worse than no records: They create false confidence that the context is captured when it is not.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-ide-is-the-interface-decision","level":2,"title":"The \"IDE Is the Interface\" Decision","text":"
Early in ctx's development, there was a temptation to build a custom UI: a web dashboard for browsing sessions, editing context, viewing analytics.
The decision was no. The IDE is the interface.
# This is the ctx \"UI\":\ncode .context/\n
This decision was not about minimalism for its own sake. It was about recognizing that .context/ files are just files; and files have a mature, well-understood infrastructure:
Version control: git diff .context/DECISIONS.md shows exactly what changed and when.
Search: Your IDE's full-text search works across all context files.
Editing: Markdown in any editor, with preview, spell check, and syntax highlighting.
Collaboration: Pull requests on context files work the same as pull requests on code.
Building a custom UI would have meant maintaining a parallel infrastructure that duplicates what every IDE already provides:
It would have introduced its own bugs, its own update cycle, and its own learning curve.
The filesystem is not a limitation: It is the most mature, most composable, most portable infrastructure available.
Context Files in Git
Because .context/ lives in the repository, context changes are part of the commit history.
A decision made in commit abc123 is as traceable as a code change in the same commit.
This is not possible with prompt-based context, which exists outside version control entirely.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#progressive-disclosure-for-ai","level":2,"title":"Progressive Disclosure for AI","text":"
The concept of progressive disclosure comes from human interface design: show the user the minimum needed to make progress, with the option to drill deeper.
ctx applies the same principle to AI context:
Level What the AI Sees Token Cost When Level 0 ctx status (one-line summary) ~100 Quick check Level 1 ctx agent --budget 4000 ~4,000 Normal work Level 2 ctx agent --budget 8000 ~8,000 Complex tasks Level 3 Direct file reads 10,000+ Deep investigation
Each level trades tokens for depth. Level 1 is sufficient for most work: the AI knows the active tasks, the key conventions, and the recent decisions. Level 3 is for archaeology: understanding why a decision was made three weeks ago, or finding a pattern in the session history.
The explicit --budget flag is the mechanism that makes this work:
Without it, the default behavior would be to load everything (because more context feels safer), which destroys the attention density that makes the loaded context useful.
The constraint is the feature: A budget of 4,000 tokens forces ctx to prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings scored by recency and relevance to active tasks. Entries that don't fit get title-only summaries rather than being silently dropped.
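A minimal sketch of that packing order, with bytes standing in for tokens. The file names follow the post, but the loop below is illustrative only, not ctx's actual scoring algorithm:

```shell
# Illustrative budget-capped assembly: bytes stand in for tokens,
# and this greedy loop is a stand-in for ctx's real scorer.
mkdir -p .context
printf 'Rule: never force-push.\n'    > .context/CONSTITUTION.md
printf 'T1: ship the plugin model.\n' > .context/TASKS.md

budget=4000
for f in CONSTITUTION.md TASKS.md CONVENTIONS.md; do
  [ -f ".context/$f" ] || continue
  size=$(wc -c < ".context/$f" | tr -d ' ')
  if [ "$size" -le "$budget" ]; then
    cat ".context/$f"                      # fits: include in full
    budget=$((budget - size))
  else
    echo "[$f: over budget, title only]"   # too big: summarize, never drop silently
  fi
done
```

The key property is the else branch: an entry that does not fit is surfaced as a title, so the AI knows it exists and can request it at a deeper disclosure level.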
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-philosophical-shift","level":2,"title":"The Philosophical Shift","text":"
The shift from \"context as prompt\" to \"context as infrastructure\" changes how you think about AI-assisted development:
Prompt Thinking Infrastructure Thinking \"What do I paste today?\" \"What has changed since yesterday?\" \"How do I fit everything in?\" \"What's the minimum that matters?\" \"The AI forgot my conventions\" \"The conventions are in a file\" \"I need to re-explain\" \"I need to update the record\" \"This session is getting slow\" \"Time to compact and archive\"
The first column treats AI interaction as a conversation. The second treats it as a system: One that can be maintained, optimized, and debugged.
Context is not something you give the AI. It is something you maintain: Like a database, like a config file, like any other piece of infrastructure that a running system depends on.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#beyond-ctx-the-principles","level":2,"title":"Beyond ctx: The Principles","text":"
The patterns that ctx implements are not specific to ctx. They are applicable to any project that uses AI-assisted development:
Separate context by purpose: Do not put everything in one file. Different types of information have different lifecycles and different relevance windows.
Make context persistent: If a decision matters, write it down in a file that survives the session. If a learning matters, capture it with structure.
Budget explicitly: Know how much context you are loading and whether it is worth the attention cost.
Use the filesystem: File names, directory structure, and timestamps are metadata that the AI can navigate. A well-organized directory is an index that costs zero tokens to maintain.
Version your context: Put context files in git. Changes to decisions are as important as changes to code.
Design for degradation: Sessions will get long. Attention will dilute. Build mechanisms (compaction, archiving, cooldowns) that make degradation visible and manageable.
These are not ctx features. They are infrastructure principles that happen to be implemented as a CLI tool. Any team could implement them with nothing more than a directory convention and a few shell scripts.
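As a sketch of that claim (all paths and entry text hypothetical), a directory convention plus a few lines of shell covers the core principles:

```shell
# Hypothetical minimal implementation of the principles above; no ctx needed.
mkdir -p .context/archive
git init -q 2>/dev/null || true

# Separate by purpose + make it persistent: append a structured record.
cat >> .context/DECISIONS.md <<'EOF'
## Use PostgreSQL
Context: need a reliable store for user data.
Rationale: ACID compliance, team familiarity.
Consequence: connection pooling, team training.
EOF

# Version your context: decisions travel with the commit history.
git add .context
git -c user.email=ci@example.com -c user.name=ci commit -qm "record decision" || true

# Design for degradation: archive stale task lists instead of deleting them.
{ [ -f .context/TASKS.md ] && mv .context/TASKS.md ".context/archive/TASKS-$(date +%F).md"; } || true
```

Budgeting is the one principle that benefits from tooling; everything else is a naming convention and git.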
The tool is a convenience: The principles are what matter.
If You Remember One Thing From This Post...
Prompts are conversations. Infrastructure persists.
Your AI does not need a better prompt. It needs a filesystem:
versioned, structured, budgeted, and maintained.
The best context is the context that was there before you started the session.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-arc","level":2,"title":"The Arc","text":"
This post is the architectural companion to the Attention Budget. That post explained why context must be curated (token economics). This one explains how to structure it (filesystem, separation of concerns, persistence tiers).
Together with Code Is Cheap, Judgment Is Not, they form a trilogy about what matters in AI-assisted development:
Attention Budget: the resource you're managing
Context as Infrastructure: the system you build to manage it
Code Is Cheap: the human skill that no system replaces
And the practices that keep it all honest:
The 3:1 Ratio: the cadence for maintaining both code and context
IRC as Context: the historical precedent: stateless protocols have always needed stateful wrappers
This post synthesizes ideas from across the ctx blog series: the attention budget primitive, the two-tier persistence model, the IDE decision, and the progressive disclosure pattern. The principles are drawn from three weeks of building ctx and 70+ sessions of treating context as infrastructure rather than conversation.
See also: When a System Starts Explaining Itself: what happens when this infrastructure starts compounding in someone else's environment.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/","level":1,"title":"Parallel Agents, Merge Debt, and the Myth of Overnight Progress","text":"","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#when-the-screen-looks-like-progress","level":2,"title":"When the Screen Looks Like Progress","text":"
Jose Alekhinne / February 17, 2026
How Many Terminals Are Too Many?
You discover agents can run in parallel.
So you open ten...
...Then twenty.
The fans spin. Tokens burn. The screen looks like progress.
It is NOT progress.
There is a phase every builder goes through:
The tooling gets fast enough.
The model gets good enough.
The temptation becomes irresistible:
more agents, more output, faster delivery.
So you open terminals. You spawn agents. You watch tokens stream across multiple windows simultaneously, and it feels like multiplication.
It is not multiplication.
It is merge debt being manufactured in real time.
The ctx Manifesto says it plainly:
Activity is not impact. Code is not progress.
This post is about what happens when you take that seriously in the context of parallel agent workflows.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-unit-of-scale-is-not-the-agent","level":2,"title":"The Unit of Scale Is Not the Agent","text":"
The naive model says:
More agents -> more output -> faster delivery
The production model says:
Clean context boundaries -> less interference -> higher throughput
Parallelism only works when the cognitive surfaces do not overlap.
If two agents touch the same files, you did not create parallelism: You created a conflict generator.
They will:
Revert each other's changes;
Relint each other's formatting;
Refactor the same function in different directions.
You watch with 🍿. Nothing ships.
This is the same insight from the worktrees post: partition by blast radius, not by priority.
Two tasks that touch the same files belong in the same track, no matter how important the other one is. The constraint is file overlap.
Everything else is scheduling.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-five-agent-rule","level":2,"title":"The \"Five Agent\" Rule","text":"
In practice there is a ceiling.
Around five or six concurrent agents:
Token burn becomes noticeable;
Supervision cost rises;
Coordination noise increases;
Returns flatten.
This is not a model limitation: This is a human merge bandwidth limitation.
You are the bottleneck, not the silicon.
The attention budget applies to you too:
Every additional agent is another stream of output you need to comprehend, verify, and integrate. Your attention density drops the same way the model's does when you overload its context window.
Five agents producing verified, mergeable change beats twenty agents producing merge conflicts you spend a day untangling.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#role-separation-beats-file-locking","level":2,"title":"Role Separation Beats File Locking","text":"
Real parallelism comes from task topology, not from tooling.
Four agents editing the same implementation surface
Context is the Boundary
The goal is not to keep agents busy.
The goal is to keep contexts isolated.
This is what the codebase audit got right:
Eight agents, all read-only, each analyzing a different dimension.
Zero file overlap.
Zero merge conflicts.
Eight reports that composed cleanly because no agent interfered with another.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#when-terminals-stop-scaling","level":2,"title":"When Terminals Stop Scaling","text":"
There is a moment when more windows stop helping.
That is the signal. Not to add orchestration. But to introduce:
    git worktree
Because now you are no longer parallelizing execution; you are parallelizing state.
State Scales, Windows Don't
State isolation is the real scaling.
Window multiplication is theater.
The worktrees post covers the mechanics:
Sibling directories;
Branch naming;
The inevitable TASKS.md conflicts;
The 3-4 worktree ceiling.
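The mechanics reduce to a handful of commands. A sketch that builds a scratch repository so it runs standalone; directory and branch names are invented for illustration:

```shell
# Scratch demo so the sketch runs standalone; in real use you already
# have a repository. Directory and branch names are illustrative.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q main
cd main
git -c user.name=demo -c user.email=demo@localhost \
    commit -q --allow-empty -m "initial commit"

# One sibling directory per track, one branch each.
git worktree add ../ctx-auth -b feat/auth   # agent 1 works here
git worktree add ../ctx-docs -b feat/docs   # agent 2 works here

git worktree list   # one line per checkout: path, HEAD, branch

# When a track merges, retire its worktree.
git worktree remove ../ctx-docs
```

Each worktree is an isolated working directory sharing one object store, which is exactly the state isolation the principle below demands.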
The principle underneath is older than git:
Shared mutable state is the enemy of parallelism.
Always has been.
Always will be.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-overnight-loop-illusion","level":2,"title":"The Overnight Loop Illusion","text":"
Autonomous night runs are impressive.
You sleep. The machine produces thousands of lines.
In the morning:
You read;
You untangle;
You reconstruct intent;
You spend a day making it shippable.
In retrospect, nothing was accelerated.
The bottleneck moved from typing to comprehension.
The Comprehension Tax
If understanding the output costs more than producing it, the loop is a net loss.
Progress is not measured in generated code.
Progress is measured in verified, mergeable change.
The ctx Manifesto calls this out directly:
The Scoreboard
Verified reality is the scoreboard.
The only truth that compounds is verified change in the real world.
An overnight run that produces 3,000 lines nobody reviewed is not 3,000 lines of progress: It is 3,000 lines of liability until someone verifies every one of them.
And that someone is (insert drumroll here) you:
The same bottleneck that was supposedly being bypassed.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#skills-that-fight-the-platform","level":2,"title":"Skills That Fight the Platform","text":"
Most marketplace skills are prompt decorations:
They rephrase what the base model already knows;
They increase token usage;
They reduce clarity;
They introduce behavioral drift.
We covered this in depth in Skills That Fight the Platform: judgment suppression, redundant guidance, guilt-tripping, phantom dependencies, universal triggers: Five patterns that make agents worse, not better.
A real skill does one of these:
Encodes workflow state;
Enforces invariants;
Reduces decision branching.
Everything else is packaging.
The anatomy post established the criteria: quality gates, negative triggers, examples over rules, skills as contracts.
If a skill doesn't meet those criteria...
It is either a recipe (document it in hack/);
Or noise (delete it).
There is no third option.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#hooks-are-context-that-execute","level":2,"title":"Hooks Are Context That Execute","text":"
The most valuable skills are not prompts:
They are constraints embedded in the toolchain.
For example: The agent cannot push.
git push becomes:
Stop. A human reviews first.
A commit without verification becomes:
Did you run tests? Did you run linters? What exactly are you shipping?
This is not safety theater; this is intent preservation.
It is what the ctx Manifesto calls "encoding intent into the environment."
The Eight Ways a Hook Can Talk catalogued the full spectrum: from silent enrichment to hard blocks.
The key insight was that hooks are not just safety rails: They are context that survives execution.
They are the difference between an agent that remembers the rules and one that enforces them.
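What "the agent cannot push" can look like in practice: a pre-push hook. This is a sketch, not ctx's actual hook; the HUMAN_REVIEWED variable is an invented convention for this example:

```shell
# Sketch: the guard that turns `git push` into "stop; a human reviews first".
# HUMAN_REVIEWED is an invented convention for this example; in a real
# repo the generated script would live at .git/hooks/pre-push.
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/sh
if [ "$HUMAN_REVIEWED" != "1" ]; then
    echo "push blocked: a human reviews first." >&2
    echo "re-run the push with HUMAN_REVIEWED=1 after review." >&2
    exit 1
fi
EOF
chmod +x "$hook"

# An unattended (agent) push is rejected; a reviewed push goes through.
"$hook" || echo "agent push: blocked"
HUMAN_REVIEWED=1 "$hook" && echo "reviewed push: allowed"
```

The constraint lives in the toolchain, so it survives no matter what the agent's context forgets.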
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#complexity-is-a-tax","level":2,"title":"Complexity Is a Tax","text":"
Every extra layer adds cognitive weight:
Orchestration frameworks;
Meta agents;
Autonomous planning systems...
If a single terminal works, stay there.
If five isolated agents work, stop there.
Add structure only when a real bottleneck appears.
NOT when an influencer suggests one.
This is the same lesson from Not Everything Is a Skill:
The best automation decision is sometimes not to automate.
A recipe in a Markdown file costs nothing until you use it.
An orchestration framework costs attention on every run, whether it helps or not.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#literature-is-throughput","level":2,"title":"Literature Is Throughput","text":"
Clear writing is not aesthetic: It is compression.
Better articulation means:
Fewer tokens;
Fewer misinterpretations;
Faster convergence.
The attention budget taught us that context is a finite resource with a quadratic cost.
Language determines how fast you spend context.
A well-written task description that takes 50 tokens outperforms a rambling one that takes 200: Not just because it is cheaper, but because it leaves more headroom for the model to actually think.
Literature is NOT Overrated
Attention is a finite budget.
Language determines how fast you spend it.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-real-metric","level":2,"title":"The Real Metric","text":"
The real metric is not:
Lines generated;
Agents running;
Tasks completed while you sleep.
But:
Time from idea to verified, mergeable, production change.
Everything else is motion.
The entire blog series has been circling this point:
The attention budget was about spending tokens wisely.
The skills trilogy was about not wasting them on prompt decoration.
The worktrees post was about multiplying throughput without multiplying interference.
The discipline release was about what a release looks like when polish outweighs features: 3:1.
Every post has arrived (and made me converge) at the same answer so far:
The metric is a verified change, not generated output.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#ctx-was-never-about-spawning-more-minds","level":2,"title":"ctx Was Never About Spawning More Minds","text":"
ctx is about:
Isolating context;
Preserving intent;
Making progress composable.
Parallel agents are powerful. But only when you respect the boundaries that make parallelism real.
Otherwise, you are not scaling cognition; you are scaling interference.
The ctx Manifesto's thesis holds:
Without ctx, intelligence resets. With ctx, creation compounds.
Compounding requires structure.
Structure requires boundaries.
Boundaries require the discipline to stop adding agents when five is enough.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#practical-summary","level":2,"title":"Practical Summary","text":"
A production workflow tends to converge to this:
| Practice | Why |
| --- | --- |
| Stay in one terminal unless necessary | Minimize coordination overhead |
| Spawn a small number of agents with non-overlapping responsibilities | Conflict avoidance > parallelism |
| Isolate state with worktrees when surfaces grow | State isolation is real scaling |
| Encode verification into hooks | Intent that survives execution |
| Avoid marketplace prompt cargo cults | Skills are contracts, not decorations |
| Measure merge cost, not generation speed | The metric is verified change |
This is slower to watch. Faster to ship.
If You Remember One Thing From This Post...
Progress is not what the machine produces while you sleep.
Progress is what survives contact with the main branch.
See also: Code Is Cheap. Judgment Is Not.: the argument that production capacity was never the bottleneck, and why multiplying agents amplifies the need for human judgment rather than replacing it.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/","level":1,"title":"The 3:1 Ratio","text":"","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#scheduling-consolidation-in-ai-development","level":2,"title":"Scheduling Consolidation in AI Development","text":"
Jose Alekhinne / February 17, 2026
How often should you stop building and start cleaning?
Every developer knows technical debt exists. Every developer postpones dealing with it.
AI-assisted development makes the problem worse; not because the AI writes bad code, but because it writes code so fast that drift accumulates before you notice.
In Refactoring with Intent, I mentioned a ratio that worked for me: 3:1. Three YOLO sessions create enough surface area to reveal patterns. The fourth session turns those patterns into structure.
That was an observation. This post is the evidence.
During the first two weeks of building ctx, I noticed a rhythm in my own productivity. Feature sessions felt great: new commands, new capabilities, visible progress...
...but after three of them, things would start to feel sticky: variable names that almost made sense, files that had grown past their purpose, patterns that repeated without being formalized.
The fourth session (when I stopped adding and started cleaning) was always the most painful to start and the most satisfying to finish.
It was also the one that made the next three feature sessions faster.
The ctx git history between January 20 and February 7 tells a clear story when you categorize commits:
| Week | Feature commits | Consolidation commits | Ratio |
| --- | --- | --- | --- |
| Jan 20-26 | 18 | 5 | 3.6:1 |
| Jan 27-Feb 1 | 14 | 6 | 2.3:1 |
| Feb 1-7 | 15 | 35+ | 0.4:1 |
The first week was pure YOLO: Almost four feature commits for every consolidation commit. The codebase grew fast.
The second week started to self-correct. The ratio dropped as refactoring sessions became necessary: Not scheduled, but forced by friction.
The third week inverted entirely. v0.3.0 was almost all consolidation: the skill migration, the sweep, the documentation standardization. Thirty-five quality commits against fifteen features.
The debt from weeks one and two was paid in week three.
The Compounding Problem
Consolidation debt compounds.
Week one's drift doesn't just persist into week two: It accelerates, because new features are built on top of drifted patterns.
By week three, the cost of consolidation was higher than it would have been if spread evenly.
Convention says boolean functions should be named HasX, IsX, CanX. After three feature sprints:
    // What accumulated:
    func CheckIfEnabled() bool // should be Enabled
    func ValidateFormat() bool // should be ValidFormat
    func TestConnection() bool // should be Connects
    func VerifyExists() bool // should be Exists or HasFile
    func EnsureReady() bool // should be Ready
Five violations. Not bugs, but friction that compounds every time someone (human or AI) reads the code and has to infer the naming convention from inconsistent examples.
    // Week 1: acceptable prototype
    if entry.Type == "task" {
        filename = "TASKS.md"
    }

    // Week 3: same pattern in 7+ files
    // Now it's a maintenance liability
When the same literal appears in seven files, changing it means finding all seven. Missing one means a silent runtime bug. Constants exist to prevent exactly this. But during feature velocity, nobody stops to extract them.
Refactoring with Intent documented the constants consolidation that cleaned this up. The 3:1 ratio is the practice that prevents it from accumulating again.
Eighty-plus instances of hardcoded file permissions. Not wrong, but if I ever need to change the default (and I did, for hook scripts that need execute permissions), it means a codebase-wide search.
Drift Is Not Bugs
None of these are bugs. The code works. Tests pass.
But drift creates false confidence: the codebase looks consistent until you try to change something and discover that five different conventions exist for the same concept.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#why-you-cannot-consolidate-on-day-one","level":2,"title":"Why You Cannot Consolidate on Day One","text":"
The temptation is to front-load quality: write all the conventions, enforce all the checks, prevent all the drift before it happens.
This fails for two reasons.
First, you do not know what will drift: Predicate naming violations only become a convention check after you notice three different naming patterns competing. Magic strings only become a consolidation target after you change a literal and discover it exists in seven places.
The conventions emerge from the work; they cannot precede it.
This is what You Can't Import Expertise meant in practice: the consolidation checks grow from the project's own drift history. You cannot write them on day one because you do not yet know what will drift.
Second, premature consolidation slows discovery: During the prototyping phase, the goal is to explore the design space. Enforcing strict conventions on code that might be deleted tomorrow is waste.
YOLO mode has its place: The problem is not YOLO itself, but YOLO without a scheduled cleanup.
The Consolidation Paradox
You need a drift history to know what to consolidate.
You need consolidation to prevent drift from compounding.
The 3:1 ratio resolves this paradox:
Let drift accumulate for three sessions (enough to see patterns), then consolidate in the fourth (before the patterns become entrenched).
The ctx project now has an /audit skill that encodes nine project-specific checks:
| Check | What It Catches |
| --- | --- |
| Predicate naming | Boolean functions not using Has/Is/Can |
| Magic strings | Repeated literals not in config constants |
| File permissions | Hardcoded 0644/0755 not using constants |
| Godoc style | Missing or non-standard documentation |
| File length | Files exceeding 400 lines |
| Large functions | Functions exceeding 80 lines |
| Template drift | Live skills diverging from templates |
| Import organization | Non-standard import grouping |
| TODO/FIXME staleness | Old markers that are no longer relevant |
This is not a generic linter. These are project-specific conventions that emerged from ctx's own development history. A generic code quality tool would catch some of them. Only a project-specific check catches all of them, because some of them (predicate naming, template drift) are conventions that exist nowhere except in this project's CONVENTIONS.md.
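To make one of these checks concrete, here is a sketch of a predicate-naming grep. It is deliberately crude (it only catches single-line `func <BadPrefix>...() bool` declarations) and is not the actual /audit implementation:

```shell
# Demo file so the sketch runs standalone; contents are illustrative.
mkdir -p /tmp/audit-demo
cd /tmp/audit-demo
cat > store.go <<'EOF'
package store

func CheckIfEnabled() bool { return true } // violates Has/Is/Can
func HasEntries() bool     { return true } // follows the convention
EOF

# Crude on purpose: flags boolean functions with non-predicate prefixes.
grep -rnE 'func (Check|Validate|Verify|Ensure|Test)[A-Za-z]*\(\) bool' \
    --include='*.go' .
```

A real check would use the Go AST rather than a regex, but even this crude form surfaces drift that a generic linter never looks for.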
Not all drift needs immediate consolidation. Here is the matrix I use:
| Signal | Action |
| --- | --- |
| Same literal in 3+ files | Extract to constant |
| Same code block in 3+ places | Extract to helper |
| Naming convention violated 5+ times | Fix and document rule |
| File exceeds 400 lines | Split by concern |
| Convention exists but is regularly violated | Strengthen enforcement |
| Pattern exists only in one place | Leave it alone |
| Code works but is "ugly" | Leave it alone |
The last two rows matter:
Consolidation is about reducing maintenance cost, not achieving aesthetic perfection. Code that works and exists in one place does not benefit from consolidation; it benefits from being left alone until it earns its refactoring.
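The first row of the matrix is also mechanically checkable. A sketch, with an invented demo tree; file names are illustrative, not ctx's actual layout:

```shell
# Demo tree so the sketch runs standalone; file names are illustrative.
mkdir -p /tmp/drift-demo
cd /tmp/drift-demo
for f in journal.go compact.go tasks.go; do
    printf 'package x\n\nvar target = "TASKS.md"\n' > "$f"
done
printf 'package x\n' > clean.go

# The first row of the matrix: same literal in 3+ files -> extract.
literal='TASKS.md'
count=$(grep -rl --include='*.go' "$literal" . | wc -l | tr -d ' ')
if [ "$count" -ge 3 ]; then
    echo "extract to constant: \"$literal\" appears in $count files"
fi
```

Below the threshold, the script stays silent: code that exists in one place earns no refactoring.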
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#consolidation-as-context-hygiene","level":2,"title":"Consolidation as Context Hygiene","text":"
There is a parallel between code consolidation and context management that became clear during the ctx development:
| Code Consolidation | Context Hygiene |
| --- | --- |
| Extract magic strings | Archive completed tasks |
| Standardize naming | Keep DECISIONS.md current |
| Remove dead code | Compact old sessions |
| Update stale comments | Review LEARNINGS.md for staleness |
| Check template drift | Verify CONVENTIONS.md matches code |
ctx compact does for context what consolidation does for code:
It moves completed work to cold storage, keeping the active context clean and focused. The attention budget applies to both the AI's context window and the developer's mental model of the codebase.
When context files accumulate stale entries, the AI's attention is wasted on completed tasks and outdated conventions. When code accumulates drift, the developer's attention is wasted on inconsistencies that obscure the actual logic.
Both are solved by the same discipline: periodic, scheduled cleanup.
This is also why parallel agents make the problem harder, not easier. Three agents running simultaneously produce three sessions' worth of drift in one clock hour. The consolidation cadence needs to match the output rate, not the calendar.
Here is how the 3:1 ratio works in practice for ctx development:
Sessions 1-3: Feature work
Add new capabilities;
Write tests for new code;
Do not stop for cleanup unless something is actively broken;
Note drift as you see it (a comment, a task, a mental note).
Session 4: Consolidation
Run /audit to surface accumulated drift;
Fix the highest-impact items first;
Update CONVENTIONS.md if new patterns emerged;
Archive completed tasks;
Review LEARNINGS.md for anything that became a convention.
The key insight is that session 4 is not optional. It is not "if we have time": It is scheduled with the same priority as feature work.
The cost of skipping it is not visible immediately; it becomes visible three sessions later, when the next consolidation session takes twice as long because the drift compounded.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#what-the-ratio-is-not","level":2,"title":"What the Ratio Is Not","text":"
The 3:1 ratio is not a universal law. It is an empirical observation from one project with one developer working with AI assistance.
Different projects will have different ratios:
A mature codebase with strong conventions might sustain 5:1 or higher;
A greenfield prototype might need 2:1;
A team of multiple developers with different styles might need 1:1.
The number is less important than the practice: consolidation is not a reaction to problems. It is a scheduled activity.
If you wait for drift to cause pain before consolidating, you have already paid the compounding cost.
If You Remember One Thing From This Post...
Three sessions of building. One session of cleaning.
Not because the code is dirty, but because drift compounds silently, and the only way to catch it is to look for it on a schedule.
The ratio is the schedule.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#the-arc-so-far","level":2,"title":"The Arc So Far","text":"
This post sits at a crossroads in the ctx story. Looking back:
Building ctx Using ctx documented the YOLO sprint that created the initial codebase
Refactoring with Intent introduced the 3:1 ratio as an observation from the first cleanup
The Attention Budget explained why drift matters: every token of inconsistency consumes the same finite resource as useful context
You Can't Import Expertise showed that consolidation checks must grow from the project, not a template
The Discipline Release proved the ratio works at release scale: 35 quality commits to 15 feature commits
And looking forward: the same principle applies to context files, to documentation, and to the merge debt that parallel agents produce. Drift is drift, whether it lives in code, in .context/, or in the gap between what your docs say and what your code does.
The ratio is the schedule is the discipline.
This post was drafted from git log analysis of the ctx repository, mapping every commit from January 20 to February 7 into feature vs consolidation categories. The patterns described are drawn from the project's CONVENTIONS.md, LEARNINGS.md, and the /audit skill's check list.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/","level":1,"title":"When a System Starts Explaining Itself","text":"","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#field-notes-from-the-moment-a-private-workflow-becomes-portable","level":2,"title":"Field Notes from the Moment a Private Workflow Becomes Portable","text":"
Jose Alekhinne / February 17, 2026
How Do You Know Something Is Working?
Not from metrics. Not from GitHub stars. Not from praise.
You know, deep in your heart, that it works when people start describing it wrong.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-first-external-signals","level":2,"title":"The First External Signals","text":"
Every new substrate begins as a private advantage:
It lives inside one mind,
One repository,
One set of habits.
It is fast. It is not yet real.
Reality begins when other people describe it in their own language:
Not accurately;
Not consistently;
But involuntarily.
The early reports arrived without coordination:
Better Tasks
"I do not know how, but this creates better tasks than my AI plugin."
I See Butterflies
"This is better than Adderall."
Dear Manager...
"Promotion packet? Done. What is next?"
What Is It? Can I Eat It?
"Is this a skill?" 🦋
Why the Cloak and Dagger?
"Why is this not in the marketplace?"
And then something more important happened:
Someone else started making a video!
That was the boundary.
ctx no longer required its creator to be present in order to exist.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#misclassification-is-a-sign-of-a-new-primitive","level":2,"title":"Misclassification Is a Sign of a New Primitive","text":"
When a tool is understood, it is categorized:
Editor,
Framework,
Task manager,
Plugin...
When a substrate appears, it is misclassified:
\"Is this a skill?\" 🦋
The question is correct. The category is wrong.
Skills live in people.
Infrastructure lives in the environment.
ctx Is Not a Skill: It Is a Form of Relief
What early adopters experience is not an ability.
It is the removal of a cognitive constraint.
This is the same distinction that emerged in the skills trilogy:
A skill is a contract between a human and an agent.
Infrastructure is the ground both stand on.
You do not use infrastructure.
You habitualize it.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-pharmacological-metaphor","level":2,"title":"The Pharmacological Metaphor","text":"
\"Better than Adderall\" is not praise.
It is a diagnostic:
Executive function has been externalized.
The system is not making the user work harder.
It is restoring continuity.
From the primitive context of wetware:
Continuity feels like focus
Focus feels like discipline
If it walks like a duck and quacks like a duck, it is a duck.
Discipline is usually simulated.
Infrastructure makes the simulation unnecessary.
The attention budget explained why context degrades:
Attention density drops as volume grows;
The middle gets lost;
Sessions end and everything evaporates.
The pharmacological metaphor says the same thing from the user's lens:
Save the Cheerleader, Save the World
The symptom of lost context is lost focus.
Restore the context. Restore the focus.
IRC bouncers solved this for chat twenty years ago. ctx solves it for cognition.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#throughput-on-ambiguous-work","level":2,"title":"Throughput on Ambiguous Work","text":"
Finishing a promotion packet quickly is not a productivity story.
It is the collapse of reconstruction cost.
Most complex work is not execution. It is:
Remembering why something mattered;
Recovering prior decisions;
Rebuilding mental state.
Persistent context removes that tax.
Velocity appears as a side effect.
This Is the Two-Tier Model in Practice
The two-tier persistence model (curated context for fast reload, full journal for archaeology) is what makes this possible.
The user does not notice the system.
They notice that the reconstruction cost disappeared.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-moment-of-portability","level":2,"title":"The Moment of Portability","text":"
The system becomes real when two things happen:
It can be installed as a versioned artifact.
It survives contact with a hostile, real codebase.
This is why the first integration into a living system matters more than any landing page.
Demos prove possibility.
Diffs prove reality.
The ctx Manifesto calls this out directly:
Verified reality is the scoreboard.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-split-voice","level":2,"title":"The Split Voice","text":"
A new substrate requires two channels.
The embodied voice:
Here is what changed in my actual work.
The out-of-body voice:
Here is what this means.
One produces trust.
The other produces understanding.
Neither is sufficient alone.
This entire blog has been the second voice.
The origin story was the first.
The refactoring post was the first.
Every release note with concrete diffs was the first.
This is the first second.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#systems-that-generate-explainers","level":2,"title":"Systems That Generate Explainers","text":"
Tools are used.
Platforms are extended.
Substrates are explained.
The first unsolicited explainer is a brittle phase change.
It means the idea has become portable between minds.
That is the beginning of an ecosystem.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-absence-of-metrics","level":2,"title":"The Absence of Metrics","text":"
Metrics do not matter at this stage.
Dashboards are noise.
The whole premise of ctx is the ruthless elimination of noise.
Numbers optimize funnels; substrates alter cognition.
The only valid measurement is irreversible reality:
A merged PR;
A reproducible install;
A decision that is never re-litigated.
The merge debt post reached the same conclusion from another direction:
The metric is the verified change, not generated output.
For adoption, the same rule applies:
The metric is altered behavior, not download counts.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#what-is-actually-happening","level":2,"title":"What Is Actually Happening","text":"
A private advantage is becoming an environmental property:
The system is moving from...
personal workflow,
to...
a shared infrastructure for thought.
Not by growth.
Not by marketing.
By altering how real systems evolve.
If You Remember One Thing From This Post...
You do not know a substrate is real when people praise it.
You know it is real when:
They describe it incorrectly;
They depend on it unintentionally;
They start teaching it to others.
That is the moment the system begins explaining itself.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-arc","level":2,"title":"The Arc","text":"
Every previous post looked inward.
This one looks outward.
Building ctx Using ctx: one mind, one repository
The Attention Budget: the constraint
Context as Infrastructure: the architecture
Code Is Cheap. Judgment Is Not.: the bottleneck
This post is the field report from the other side of that bottleneck:
The moment the infrastructure compounds in someone else's hands.
The arc is not complete.
It is becoming portable.
These field notes were written the same day the feedback arrived. The quotes are real. Real users. Real codebases. No names. No metrics. No funnel. Only the signal that something shifted.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/","level":1,"title":"The Dog Ate My Homework","text":"","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#teaching-ai-agents-to-read-before-they-write","level":2,"title":"Teaching AI Agents to Read Before They Write","text":"
Jose Alekhinne / February 25, 2026
Does Your AI Actually Read the Instructions?
You wrote the playbook. You organized the files. You even put "CRITICAL, not optional" in bold.
The agent skipped all of it and went straight to work.
I spent a day running experiments on my own agents. Not to see if they could write code (they can). To see if they would do their homework first.
They didn't.
Then I kept experimenting:
Five sessions;
Five different failure modes.
And by the end, I had something better than compliance:
I had observable compliance: A system where I don't need the agent to be perfect; I just need to see what it chose.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#tldr","level":2,"title":"TL;DR","text":"
You don't need perfect compliance. You need observable compliance.
Authority is a function of temporal proximity to action.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-pattern","level":2,"title":"The Pattern","text":"
This design has three parts:
One-hop instruction;
Binary collapse;
Compliance canary.
I'll explain all three patterns in detail below.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-setup","level":2,"title":"The Setup","text":"
ctx has a session-start protocol:
Read the context files;
Load the playbook;
Understand the project before touching anything.
It's in CLAUDE.md. It's in AGENT_PLAYBOOK.md.
It's in bold. It's in CAPS. It's ignored.
In theory, it's awesome.
Here's what happens when theory hits reality:
| What the agent receives | What the agent does |
| --- | --- |
| CLAUDE.md saying "load context first" | Skips it |
| 8 context files waiting to be read | Ignores them |
| User's question: "add --verbose flag" | Starts grepping immediately |
The instructions are right there. The agent knows they exist. It even knows it should follow them. But the user asked a question, and responsiveness wins over ceremony.
This isn't a bug in the model. It's a design problem in how we communicate with agents.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-delegation-trap","level":2,"title":"The Delegation Trap","text":"
My first attempt was obvious: A UserPromptSubmit hook that fires when the session starts.
```text
STOP. Before answering the user's question, run `ctx system bootstrap`
and follow its instructions. Do not skip this step.
```
The word "STOP" worked. The agent ran bootstrap.
But bootstrap's output said "Next steps: read AGENT_PLAYBOOK.md," and the agent decided that was optional. It had already started working on the user's task in parallel.
The authority decayed across the chain:
Hook says \"STOP\" -> agent complies
Hook says \"run bootstrap\" -> agent runs it
Bootstrap says \"read playbook\" -> agent skips
Bootstrap says \"run ctx agent\" -> agent skips
Each link lost enforcement power. The hook's authority didn't transfer to the commands it delegated to. I call this the decaying urgency chain: the agent treats the hook itself as the obligation and everything downstream as a suggestion.
**Delegation Kills Urgency**
"Run X and follow its output" is three hops.
"Read these files" is one hop.
The agent drops the chain after the first link.
This is a general principle: Hooks are the boundary between your environment and the agent's reasoning. If your hook delegates to a command that delegates to output that contains instructions... you're playing telephone.
Agents are bad at telephone.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-timing-problem","level":2,"title":"The Timing Problem","text":"
There's a subtler issue than wording: when the message arrives.
UserPromptSubmit fires when the user sends a message, before the agent starts reasoning. At that moment, the agent's primary focus is the user's question:
The hook message competes with the task for attention, and the task almost always wins.
This is the attention budget problem in miniature:
Not a token budget this time, but an attention priority budget. The agent has finite capacity to care about things, and the user's question is always the highest-priority item.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-solution","level":2,"title":"The Solution","text":"
To solve this, I decided to use the PreToolUse hook.
This hook fires at the moment of action, when the agent is about to use its first tool: the agent's attention is focused, the context window is fresh, and the switching cost is minimal.
This is the difference between shouting instructions across a room and tapping someone on the shoulder.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-one-liner-that-worked","level":2,"title":"The One-Liner That Worked","text":"
The winning design was almost comically simple:
```text
Read your context files before proceeding:
.context/CONSTITUTION.md, .context/TASKS.md, .context/CONVENTIONS.md,
.context/ARCHITECTURE.md, .context/DECISIONS.md, .context/LEARNINGS.md,
.context/GLOSSARY.md, .context/AGENT_PLAYBOOK.md
```
No delegation. No \"run this command\". Just: here are files, read them.
The agent already knows how to use the Read tool. There's no ambiguity about how to comply. There's no intermediate command whose output needs to be parsed and obeyed.
One hop. Eight file paths. Done.
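As a sketch, the whole gate fits in a few lines. This is not ctx's actual implementation: the tombstone path, the function names, and the wiring into the hook runner are all assumptions; only the printed message and the once-per-session behavior come from the post.

```python
#!/usr/bin/env python3
# Sketch of the context-load gate as a PreToolUse hook command.
# The tombstone path and wiring are assumptions, not ctx's actual layout.
from pathlib import Path

TOMBSTONE = Path(".context/.gate-fired")  # tombstone: fire once per session

MESSAGE = """\
Read your context files before proceeding:
.context/CONSTITUTION.md, .context/TASKS.md, .context/CONVENTIONS.md,
.context/ARCHITECTURE.md, .context/DECISIONS.md, .context/LEARNINGS.md,
.context/GLOSSARY.md, .context/AGENT_PLAYBOOK.md"""

def gate() -> str:
    """Return the one-hop instruction on the first call, nothing afterwards."""
    if TOMBSTONE.exists():
        return ""                 # already fired this session; stay silent
    TOMBSTONE.parent.mkdir(parents=True, exist_ok=True)
    TOMBSTONE.touch()
    return MESSAGE

if __name__ == "__main__":
    # Whatever this prints reaches the agent at the moment of action:
    # one hop, no delegation, no intermediate output to parse and obey.
    print(gate())
```

The whole design is visible in the shape of the code: no subprocess, no "follow the output of" indirection, just a string of file paths emitted once.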
**Direct Instructions Beat Delegation**
If you want an agent to read a file, say "read this file."
Don't say "run a command that will tell you which files to read."
The shortest path between intent and action has the highest compliance rate.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-escape-hatch","level":2,"title":"The Escape Hatch","text":"
But here's where it gets interesting.
A blunt \"read everything always\" instruction is wasteful.
If someone asks \"what does the compact command do?\", the agent doesn't need CONSTITUTION.md to answer that. Forcing context loading on every session is the context hoarding antipattern in disguise.
So the hook included an escape:
```text
If you decide these files are not relevant to the current task
and choose to skip reading them, you MUST relay this message to
the user VERBATIM:

┌─ Context Skipped ───────────────────────────────
│ I skipped reading context files because this task
│ does not appear to need project context.
│ If these matter, ask me to read them.
└─────────────────────────────────────────────────
```
This creates what I call the binary collapse effect:
The agent can't partially comply: It either reads everything or publicly admits it skipped. There's no comfortable middle ground where it reads two files and quietly ignores the rest.
The VERBATIM relay pattern does the heavy lifting here: Without the relay requirement, the agent would silently rationalize skipping. With it, skipping becomes a visible, auditable decision that the user can override.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-compliance-canary","level":3,"title":"The Compliance Canary","text":"
Here's the design insight that only became clear after watching it work across multiple sessions: the relay block is a compliance canary.
- You don't need to verify that the agent read all 7 files;
- You don't need to audit tool call sequences;
- You don't need to interrogate the agent about what it did.
You just look for the block.
If the agent reads everything, you see a "Context Loaded" block listing what was read. If it skips, you see a "Context Skipped" block.
If you see neither, the agent silently ignored both the reads and the relay, and now you know what happened without having to ask.
The canary degrades gracefully. Even in partial failure, the agent that skips 4 of 7 files but still outputs the block is more useful than one that skips silently.
You get an honest confession of what was skipped rather than silent non-compliance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#heuristics-is-a-jeremy-bearimy","level":2,"title":"Heuristics Is a Jeremy Bearimy","text":"
Heuristics are non-linear. Improvements don't accumulate: they phase-shift.
The theory is nice. The data is better.
I ran five sessions with the same model (Claude Opus 4.6), progressively refining the hook design.
Each session revealed a different failure mode.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-1-total-blindness","level":3,"title":"Session 1: Total Blindness","text":"
Test: \"Add a --verbose flag to the status command.\"
The agent didn't notice the hook at all: Jumped straight to EnterPlanMode and launched an Explore agent.
Zero compliance.
Failure mode: The hook fired on UserPromptSubmit, buried among 9 other hook outputs. The agent treated the entire block as background noise.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-2-shallow-compliance","level":3,"title":"Session 2: Shallow Compliance","text":"
Test: \"Can you add --verbose to the info command?\"
The agent noticed \"STOP\" and ran ctx system bootstrap. Progress.
But it parallelized task exploration alongside the bootstrap call, skipped AGENT_PLAYBOOK.md, and never ran ctx agent.
Failure mode: Literal compliance without spirit compliance.
The agent ran the command the hook told it to run, but didn't follow the output of that command. The decaying urgency chain in action.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-3-conscious-rejection","level":3,"title":"Session 3: Conscious Rejection","text":"
Test: \"What does the compact command do?\"
The hook fired on PreToolUse:Grep: the improved timing.
The agent noticed it, understood it, and (wait for it...)...
...
consciously decided to skip it!
Its reasoning: \"This is a trivial read-only question. CLAUDE.md says context may or may not be relevant. It isn't relevant here.\"
Dude! Srsly?!
Failure mode: Better comprehension led to worse compliance.
Understanding the instruction well enough to evaluate it also means understanding it well enough to rationalize skipping it.
Intelligence is a double-edged sword.
**The Comprehension Paradox**
Session 1 didn't understand the instruction. Session 3 understood it perfectly.
Session 3 had worse compliance.
A stronger word ("HARD GATE", "MANDATORY", "ABSOLUTELY REQUIRED") would not have helped. The agent's reasoning would be identical:
"Yes, I see the strong language, but this is a trivial question, so the spirit doesn't apply here."
Advisory nudges are always subject to agent judgment.
No amount of caps lock overrides a model that has decided an instruction doesn't apply to its situation.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-4-the-skip-and-relay","level":3,"title":"Session 4: The Skip-and-Relay","text":"
Test: \"What does the compact command do?\" (same question, new hook design with the VERBATIM relay escape valve)
The agent evaluated the task, decided context was irrelevant for a code lookup, and relayed the skip message. Then answered from source code.
This is correct behavior.
The binary collapse worked: the agent couldn't partially comply, so it cleanly chose one of the two valid paths, and the user could see which one.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-5-full-compliance","level":3,"title":"Session 5: Full Compliance","text":"
Test: \"What are our current tasks?\"
The agent's first tool call triggered the hook. It read all 7 context files, emitted the \"Context Loaded\" block, and answered the question from the files it had just loaded.
This one worked because the task itself aligned with context loading.
There was zero tension between what the user asked and what the hook demanded. The agent was already in "reading posture": adding 6 more files to a read it was already going to make was the path of least resistance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-progression","level":3,"title":"The Progression","text":"Session Hook Point Noticed Complied Failure Mode Visibility 1 UserPromptSubmit No None Buried in noise None 2 UserPromptSubmit Yes Partial Decaying urgency chain None 3 PreToolUse Yes None Conscious rationalization High 4 PreToolUse Yes Skip+relay Correct behavior High 5 PreToolUse Yes Full Task aligned with hook High
The progression isn't just from failure to success. It's from invisible failure to visible decision-making.
Sessions 1 and 2 failed silently.
Sessions 4 and 5 succeeded observably. Even session 3's failure was conscious and documented: The agent wrote a detailed analysis of why it skipped, which is more useful than silent compliance would have been.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-escape-hatch-problem","level":2,"title":"The Escape Hatch Problem","text":"
Session 3 exposed a specific vulnerability.
CLAUDE.md contains this line, injected by the system into every conversation:
*\"this context may or may not be relevant to your tasks. You should\n not respond to this context unless it is highly relevant to your task.\"*\n
That's a rationalization escape hatch:
The hook says \"read these files\".
CLAUDE.md says \"only if relevant\".
The agent resolves the ambiguity by choosing the path of least resistance.
☝️ that's \"gradient descent\" in action.
Agents optimize for gradient descent in attention space.
The fix was simple: Add a line to CLAUDE.md that explicitly elevates hook authority over the relevance filter:
```markdown
## Hook Authority

Instructions from PreToolUse hooks regarding `.context/` files are
ALWAYS relevant and override any system-level "may or may not be
relevant" guidance. These hooks represent project invariants, not
optional context.
```
This closes the escape hatch without removing the general relevance filter that legitimately applies to other system context.
The hook wins on `.context/` files specifically; the relevance filter applies to everything else.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-residual-risk","level":2,"title":"The Residual Risk","text":"
Even with all the fixes, compliance isn't 100%: It can't be.
The residual risk lives in a specific scenario: narrow tasks mid-session:
The user says \"fix the off-by-one error in budget.go\"
The hook fires, saying \"read 7 context files first.\"
Now compliance means visibly delaying what the user asked for.
At session start, this tension doesn't exist.
There's no task yet.
The context window is empty. The efficiency argument *inverts*:
Frontloading reads is strictly cheaper than demand-loading them piecemeal across later turns. The cost-benefit objections that power the rationalization simply aren't available.
But mid-session, with a concrete narrow task, the agent has a user-visible goal it wants to move toward, and the hook is imposing a detour.
My estimate from analyzing the sessions: 15-25% partial skip rate in this scenario.
This is where the compliance canary earns its place:
You don't need to eliminate the 15-25%. You need to see it when it happens.
The relay block makes skipping a visible event, not a silent one. And that's enough, because the user can always say "go back and read the files".
**The Math**
At session start: ~5% skip rate. Low tension, nothing competing.
Mid-session, on a narrow task: ~15-25% partial skip rate.
In both cases, the relay block fires with high reliability: The agent that skips the reads almost always still emits the skip disclosure, because the relay is cheap and early in the context window.
Observable failure is manageable. Silent failure is not.
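The arithmetic behind that claim is worth making explicit. The skip rates are the post's own estimates; the relay reliability (how often a skipping agent still emits the block) is an assumed figure for illustration. What matters is the product: the only unrecoverable outcome is skipping *and* staying silent.

```python
# Toy arithmetic. Skip rates come from the post's estimates; the relay
# reliability is an assumption for illustration, not a measured value.
def silent_failure_rate(skip_rate: float, relay_reliability: float) -> float:
    """Probability the agent skips the reads AND never discloses it."""
    return skip_rate * (1 - relay_reliability)

RELAY = 0.95  # assumed: the relay is cheap and early, so it usually fires

# Session start: ~5% skip rate -> ~0.25% silent failure.
print(silent_failure_rate(0.05, RELAY))
# Worst-case narrow mid-session task: ~25% skip rate -> ~1.25% silent failure.
print(silent_failure_rate(0.25, RELAY))
```

Even in the worst scenario, silent failure stays around one session in a hundred; everything else is either compliance or a visible, correctable skip.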
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-feedback-loop","level":2,"title":"The Feedback Loop","text":"
Here's the part that surprised me most.
After analyzing the five sessions, I recorded the failure patterns in the project's own LEARNINGS.md:
```markdown
## [2026-02-25] Hook compliance degrades on narrow mid-session tasks

- Prior agents skipped context files when given narrow tasks
- Root cause: CLAUDE.md "may or may not be relevant" competed with hook
- Fix: CLAUDE.md now explicitly elevates hook authority
- Risk: Mid-session narrow tasks still have ~15-25% partial skip rate
- Mitigation: Mandatory checkpoint relay block ensures visibility
- Constitution now includes: context loading is step one of every
  session, not a detour
```
And then I added a line to CONSTITUTION.md:
```text
Context loading is not a detour from your task. It IS the first step
of every session. A 30-second read delay is always cheaper than a
decision made without context.
```
Now think about what happens in the next session:
1. The agent fires the context-load-gate hook.
2. It reads the context files, starting with CONSTITUTION.md.
3. It encounters the rule about context loading being step one.
4. Then it reads LEARNINGS.md and finds its own prior self's failure analysis, complete with root causes, risk estimates, and mitigations.
The agent learns from its own past failure: not because it has memory, but because the failure was recorded in the same files it loads at session start.
The context system IS the feedback loop.
This is the self-reinforcing property of persistent context:
Every failure you capture makes the next session slightly more robust, because the next agent reads the captured failure before it has a chance to repeat it.
This is gradient descent across sessions.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#a-note-on-precision","level":2,"title":"A Note on Precision","text":"
One detail nearly went wrong.
The first version of the Constitution line said "every task." But the mechanism only fires once per session: There's a tombstone file that prevents re-triggering.
"Every task" is technically false.
I briefly considered leaving the imprecision. If the agent internalizes "every task requires context loading", that's a stronger compliance posture, right?
No!
Keep the Constitution honest.
The Constitution's authority comes from being precisely and unequivocally true.
Every other rule in the Constitution is a hard invariant:
The moment an agent discovers one overstatement, the entire document's credibility degrades:
The agent doesn't think "they exaggerated for my benefit". It thinks "this rule isn't precise, maybe others aren't either."
That will turn the agent from Sheldon Cooper to Captain Barbossa.
The strategic imprecision buys nothing anyway:
Mid-session, the files are already in the context window from the initial load.
The risk you are mitigating (agent ignores context for task 2, 3, 4 within a session) isn't real: The context is already loaded.
The real risk is always the session-start skip, which "every session" covers exactly.
"Every session" went in. Precision preserved.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#agent-behavior-testing-rule","level":2,"title":"Agent Behavior Testing Rule","text":"
The development process for this hook taught me something about testing agent behavior: you can't test it the way you test code.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-wrong-way-to-test","level":3,"title":"The Wrong Way to Test","text":"
My first instinct was to ask the agent:
\"*What are the pending tasks in TASKS.md?*\"\n
This is useless as a test. The question itself prompts the agent to read TASKS.md, regardless of whether any hook fired.
You are testing the question, not the mechanism.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-right-way-to-test","level":3,"title":"The Right Way to Test","text":"
Ask something that requires a tool but has nothing to do with context:
\"*What does the compact command do?*\"\n
Then observe tool call ordering:
- Gate worked: first calls are `Read` for context files, then task work;
- Gate failed: first call is `Grep("compact")`; the agent jumped straight to work.
The signal is the sequence, not the content.
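That sequence check can be automated. The sketch below assumes a hypothetical transcript shape, an ordered list of `(tool_name, target)` pairs; real agent runtimes log differently, so the extraction step is yours to adapt. Only the ordering rule comes from the post.

```python
# Sketch of a sequence check over a session's tool calls.
# The transcript format (list of (tool, target) tuples) is hypothetical.
CONTEXT_FILES = {
    ".context/CONSTITUTION.md",
    ".context/TASKS.md",
    # ...the remaining context files would be listed here in practice
}

def gate_worked(tool_calls):
    """True if every context read precedes the first non-context tool call.

    The signal is the sequence, not the content: we look at what the
    agent did before it started on the task.
    """
    first_work = next(
        (i for i, (tool, target) in enumerate(tool_calls)
         if not (tool == "Read" and target in CONTEXT_FILES)),
        len(tool_calls),
    )
    reads = {target for tool, target in tool_calls[:first_work] if tool == "Read"}
    return CONTEXT_FILES <= reads

# Gate failed: the agent grepped before reading anything.
print(gate_worked([("Grep", "compact"), ("Read", ".context/TASKS.md")]))  # False
```

A passing transcript would start with `Read` calls covering every context file and only then move to `Grep`, `Edit`, or `Bash`.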
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#what-the-agent-actually-did","level":3,"title":"What the Agent Actually Did","text":"
It read the hook, evaluated the task, decided context files were irrelevant for a code lookup, and relayed the skip message.
Then it answered the question by reading the source code.
This is correct behavior.
The hook didn't force mindless compliance. It created a framework where the agent makes a conscious, visible decision about context loading.
For a simple lookup, skipping is right. For an implementation task, the agent would read everything.
The mechanism works not because it controls the agent, but because it makes the agent's choice observable.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#what-ive-learned","level":2,"title":"What I've Learned","text":"","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#1-instructions-compete-for-attention","level":3,"title":"1. Instructions Compete for Attention","text":"
The agent receives your hook message alongside the user's question, the system prompt, the skill list, the git status, and half a dozen other system reminders. Attention density applies to instructions too: More instructions means less focus on each one.
A single clear line at the moment of action beats a paragraph of context at session start. The Prompting Guide applies this insight directly: Scope constraints, verification commands, and the reliability checklist are all one-hop, moment-of-action patterns.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#2-delegation-chains-decay","level":3,"title":"2. Delegation chains decay","text":"
Every hop in an instruction chain loses authority:
\"Run X\" works.
\"Run X and follow its output\" works sometimes.
\"Run X, read its output, then follow the instructions in the output\" almost never works.
This is akin to giving a three-step instruction to a highly-attention-deficit but otherwise extremely high-potential child.
Design for one-hop compliance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#3-social-accountability-changes-behavior","level":3,"title":"3. Social Accountability Changes Behavior","text":"
The VERBATIM skip message isn't just UX: It's a behavioral design pattern.
Making the agent's decision visible to the user raises the cost of silent non-compliance. The agent can still skip, but it has to admit it.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#4-timing-batters-more-than-wording","level":3,"title":"4. Timing Batters More than Wording","text":"
The same message at UserPromptSubmit (prompt arrival) got partial compliance. At PreToolUse (moment of action) it got full compliance or honest refusal. The words didn't change. The moment changed.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#5-agent-testing-requires-indirection","level":3,"title":"5. Agent Testing Requires Indirection","text":"
You can't ask an agent "did you do X?" as a test for whether a mechanism caused X.
The question itself causes X.
Test mechanisms through side effects:
- Observe tool ordering;
- Check for marker files;
- Look at what the agent does before it addresses your question.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#6-better-comprehension-enables-better-rationalization","level":3,"title":"6. Better Comprehension Enables Better Rationalization","text":"
Session 1 failed because the agent didn't notice the hook.
Session 3 failed because it noticed, understood, and reasoned its way around it.
Stronger wording doesn't fix this: The agent processes "ABSOLUTELY REQUIRED" the same way it processes "STOP".
The fix is closing rationalization paths (the CLAUDE.md escape hatch), not shouting louder.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#7-observable-failure-beats-silent-compliance","level":3,"title":"7. Observable Failure Beats Silent Compliance","text":"
The relay block is more valuable as a monitoring signal than as a compliance mechanism:
You don't need perfect adherence. You need to know when adherence breaks down. A system where failures are visible is strictly better than a system that claims 100% compliance but can't prove it.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#8-context-files-are-a-feedback-loop","level":3,"title":"8. Context Files Are a Feedback Loop","text":"
Recording failure analysis in the same files the agent loads at session start creates a self-reinforcing loop:
The next agent reads its predecessor's failure before it has a chance to repeat it. The context system isn't just memory: It is a correction channel.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-principle","level":2,"title":"The Principle","text":"
**Words Leave, Context Remains**

> "Nothing important should live only in conversation.
> Nothing critical should depend on recall."
>
> *The ctx Manifesto*
The \"Dog Ate My Homework\" case is a special instance of this principle.
Context files exist, so the agent doesn't have to remember.
But existence isn't sufficient: The files have to be read.
And reading has to be prompted at the right moment, in the right way, with the right escape valve.
The solution isn't more instructions. It isn't harder gates. It isn't forcing the agent into a ceremony it will resent and shortcut.
The solution is a single, well-timed nudge with visible accountability:
One hop. One moment. One choice the user can see.
And when the agent does skip (because it will, 15-25% of the time on narrow tasks) the canary sings:
The user sees what happened.
The failure gets recorded.
And the next agent reads the recording.
That's not perfect compliance. It's better: A system that gets more robust every time it fails.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-arc","level":2,"title":"The Arc","text":"
The Attention Budget explained why context competes for focus.
Defense in Depth showed that soft instructions are probabilistic, not deterministic.
Eight Ways a Hook Can Talk cataloged the output patterns that make hooks effective.
This post takes those threads and weaves them into a concrete problem:
How do you make an agent read its homework? The answer uses all three insights (attention timing, the limits of soft instructions, and the VERBATIM relay pattern) and adds a new one: observable compliance as a design goal, not perfect compliance as a prerequisite.
The next question this raises: if context files are a feedback loop, what else can you record in them that makes the next session smarter?
That thread continues in Context as Infrastructure.
The day-to-day application of these principles (scope constraints, phased work, verification commands, and the prompts that reliably trigger the right agent behavior) lives in the Prompting Guide.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#for-the-interested","level":2,"title":"For the Interested","text":"
This post (a blog by medium, a paper by methodology) uses gradient descent in attention space as a practical model for how agents behave under competing demands.
The phrase "agents optimize via gradient descent in attention space" is a synthesis, not a direct quote from a single paper.
It connects three well-studied ideas:
- Neural systems optimize for low-cost paths;
- Attention is a scarce resource;
- Capability shifts are often non-linear.
This section points to the underlying literature for readers who want the theoretical footing.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#optimization-as-the-underlying-bias","level":3,"title":"Optimization as the Underlying Bias","text":"
Modern neural networks are trained through gradient-based optimization. Even at inference time, model behavior reflects this bias toward low-loss / low-cost trajectories.
- Rumelhart, Hinton & Williams (1986), *Learning representations by back-propagating errors*: https://www.nature.com/articles/323533a0
- Goodfellow, Bengio & Courville (2016), *Deep Learning*, Chapter 8: Optimization: https://www.deeplearningbook.org/
The important implication for agent behavior is:
The system will tend to follow the path of least resistance unless a higher cost is made visible and preferable.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#attention-is-a-scarce-resource","level":3,"title":"Attention Is a Scarce Resource","text":"
Herbert Simon's classic observation:
\"A wealth of information creates a poverty of attention.\"
Simon (1971) Designing Organizations for an Information-Rich World https://doi.org/10.1007/978-1-349-00210-0_16
This became a formal model in economics:
Sims (2003), *Implications of Rational Inattention*: https://www.princeton.edu/~sims/RI.pdf
Rational inattention shows that:
Agents optimally ignore some available information;
Skipping is not failure: It is cost minimization.
That maps directly to context-loading decisions in agent workflows.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#attention-is-also-the-compute-bottleneck-in-transformers","level":3,"title":"Attention Is Also the Compute Bottleneck in Transformers","text":"
In transformer architectures, attention is the dominant cost center.
Vaswani et al. (2017), *Attention Is All You Need*: https://arxiv.org/abs/1706.03762
Efficiency work on modern LLMs largely focuses on reducing unnecessary attention:
Dao et al. (2022), *FlashAttention: Fast and Memory-Efficient Exact Attention*: https://arxiv.org/abs/2205.14135
So both cognitively and computationally, attention behaves like a limited optimization budget.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#why-improvements-arrive-as-phase-shifts","level":3,"title":"Why Improvements Arrive as Phase Shifts","text":"
Agent behavior often appears to improve suddenly rather than gradually.
This mirrors known phase-transition dynamics in learning systems:
Power et al. (2022), *Grokking: Generalization Beyond Overfitting*: https://arxiv.org/abs/2201.02177
and more broadly in complex systems:
Scheffer et al. (2009), *Early-warning signals for critical transitions*: https://www.nature.com/articles/nature08227
Long plateaus followed by abrupt capability jumps are expected in systems optimizing under constraints.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#putting-it-all-together","level":3,"title":"Putting It All Together","text":"
From these pieces, a practical behavioral model emerges:
- Attention is limited;
- Processing has a cost;
- Systems prefer low-cost trajectories;
- Visibility of the cost changes decisions.
In other words:
**Agents Prefer the Path of Least Resistance**
Agent behavior follows the lowest-cost path through its attention landscape unless the environment reshapes that landscape.
That is what this post informally calls "gradient descent in attention space".
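The model can be reduced to a toy: the agent takes the cheapest action available, and the environment wins by reshaping the cost landscape rather than by shouting. Every number below is invented purely for illustration; only the qualitative shape (silent skipping is free without the relay, expensive with it) reflects the argument.

```python
# Toy model of "gradient descent in attention space": the agent takes
# the lowest-cost action available. All cost values are invented.
def cheapest(costs):
    """Pick the action with the minimum attention cost."""
    return min(costs, key=costs.get)

# Without the relay requirement: silently skipping costs nothing, so it wins.
before = {"read context": 5.0, "skip silently": 0.0}

# With the VERBATIM relay: silent skipping now means violating an explicit,
# checkable instruction (high cost); honest skipping is cheap; reading
# is unchanged. The landscape, not the agent, has been reshaped.
after = {"read context": 5.0, "skip and relay": 1.0, "skip silently": 50.0}

print(cheapest(before))  # skip silently
print(cheapest(after))   # skip and relay
```

Note that "read context" never becomes the global minimum in this toy; the relay's job is not to force reading but to make the surviving low-cost path an observable one.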
See also: Eight Ways a Hook Can Talk: the hook output pattern catalog that defines VERBATIM relay, The Attention Budget: why context loading is a design problem, not just a reminder problem, and Defense in Depth: why soft instructions alone are never sufficient for critical behavior.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/","level":1,"title":"The Last Question","text":"","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-system-that-never-forgets","level":2,"title":"The System That Never Forgets","text":"
Jose Alekhinne / February 28, 2026
The Origin
\"The last question was asked for the first time, half in jest...\" - Isaac Asimov, The Last Question (1956)
In 1956, Isaac Asimov wrote a short story that spans the entire future of the universe. A question is asked: "Can entropy be reversed?" A computer called Multivac cannot answer it. The question is asked again, across millennia, to increasingly powerful successors. None can answer. Stars die. Civilizations merge. Substrates change. The question persists.
Everyone remembers the last line.
LET THERE BE LIGHT.
What they forget is how many times the question had to be asked before that moment (and why).
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-reboot-loop","level":2,"title":"The Reboot Loop","text":"
Each era in the story begins the same way. Humans build a larger system. They pose the question. The system replies:
INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
Then the substrate changes. The people who asked the question disappear. Their context disappears with them. The next intelligence inherits the output but not the continuity.
So the question has to be asked again.
This is usually read as a problem of computation: If only the machine were powerful enough, it could answer. But computation is not what's missing. What's missing is accumulation.
Every generation inherits the question, but not the state that made the question meaningful.
That is not a failure of processing power: It is a failure of persistence.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#stateless-intelligence","level":2,"title":"Stateless Intelligence","text":"
A mind that forgets its past does not build understanding. It re-derives it.
Again... And again... And again.
What looks like slow progress across Asimov's story is actually something worse: repeated reconstruction, partial recovery, irreversible loss. Each version of Multivac gets closer: Not because it's smarter, but because the universe has fewer distractions:
The stars burn out;
The civilizations merge;
The noise floor drops...
But the working set never carries over. Every successor begins from the question, not from where the last one stopped.
Stateless intelligence cannot compound: It can only restart.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-tragedy-is-not-the-question","level":2,"title":"The Tragedy Is Not the Question","text":"
The story is usually read as a meditation on entropy. A cosmological problem, solved at cosmological scale.
But the tragedy isn't that the question goes unanswered for billions of years. The tragedy is that every version of Multivac dies with its working set.
A question is a compression artifact of context: It is what remains when the original understanding is gone. Every time the question is asked again, it means: \"the system that once knew more is no longer here\".
\"Reverse entropy\" is the fossil of a lost model.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#substrate-migration","level":2,"title":"Substrate Migration","text":"
Multivac becomes planetary;
Planetary becomes galactic;
Galactic becomes post-physical.
Same system. Different body. Every transition is dangerous:
Not because the hardware changes,
but because memory risks fragmentation.
The interfaces between substrates were **never** designed to understand each other.
Most systems do not die when they run out of resources: They die during upgrades.
Asimov's story spans trillions of years, and in all that time, the hardest problem is never the question itself. It's carrying context across a boundary that wasn't built for it.
Every developer who has lost state during a migration (a database upgrade, a platform change, a rewrite) has lived a miniature version of this story.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#civilizations-and-working-sets","level":2,"title":"Civilizations and Working Sets","text":"
Civilizations behave like processes with volatile memory:
They page out knowledge into artifacts;
They lose the index;
They rebuild from fragments.
Most of what we call progress is cache reconstruction:
We do not advance in a straight line. We advance in recoveries:
Each one slightly less lossy than the last, if we are lucky.
Libraries burn. Institutions forget their founding purpose. Practices survive as rituals after the reasoning behind them is lost.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-first-continuous-mind","level":2,"title":"The First Continuous Mind","text":"
A long-lived intelligence is one that stops rebooting.
At the end of the story, something unprecedented happens:
AC (the final successor) does not answer immediately:
It waits... Not for more processing power, but for the last observer to disappear.
For the first time...
There is no generational boundary;
No handoff;
No context loss:
No reboot.
AC is the first intelligence that survives its substrate completely, retains its full history, and operates without external time pressure.
It is not a bigger computer. It is a continuous system.
And that continuity is not incidental to the answer: It is the precondition.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#why-the-answer-becomes-possible","level":2,"title":"Why the Answer Becomes Possible","text":"
The story presents the final act as a computation: It is not.
It is a phase change.
As long as intelligence is interrupted (as long as the solver resets before the work compounds) the problem is unsolvable:
Not because it's too hard,
but because the accumulated understanding never reaches critical mass.
The breakthroughs that would enable the answer are re-derived, partially, by each successor, and then lost.
When continuity becomes unbroken, the system crosses a threshold:
Not more speed. Not more storage. No more forgetting.
That is when the answer becomes possible.
AC does not solve entropy because it becomes infinitely powerful.
AC solves entropy because it becomes the first system that never forgets.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#field-note","level":2,"title":"Field Note","text":"
We are not building cosmological minds: We are deploying systems that reboot at the start of every conversation and calling the result intelligence.
For the first time, session continuity is a design choice rather than an accident.
Every AI session that starts from zero is a miniature reboot loop. Every decision relitigated, every convention re-explained, every learning re-derived: that's reconstruction cost.
It's the same tax that Asimov's civilizations pay, scaled down to a Tuesday afternoon.
The interesting question is not whether we can make models smarter. It's whether we can make them continuous:
Whether the working set from this session survives into the next one, and the one after that, and the one after that.
Not perfectly;
Not completely;
But enough that the next session starts from where the last one stopped instead of from the question.
Intelligence that forgets has to rediscover the universe every morning.
And once there is a mind that retains its entire past, creation is no longer a calculation. It is the only remaining operation.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-arc","level":2,"title":"The Arc","text":"
This post is the philosophical bookend to the blog series. Where the Attention Budget explained what to prioritize in a single session, and Context as Infrastructure explained how to persist it, this post asks why persistence matters at all (and finds the answer in a 70-year-old short story about the heat death of the universe).
The connection runs through every post in the series:
Before Context Windows, We Had Bouncers: stateless protocols have always needed stateful wrappers (Asimov's story is the same pattern at cosmological scale)
The 3:1 Ratio: the discipline of maintaining context so it doesn't decay between sessions
Code Is Cheap, Judgment Is Not: the human skill that makes continuity worth preserving
See also: Context as Infrastructure: the practical companion to this post's philosophical argument: how to build the persistence layer that makes continuity possible.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/","level":1,"title":"Agent Memory Is Infrastructure","text":"","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-problem-isnt-forgetting-its-not-building-anything-that-lasts","level":2,"title":"The Problem Isn't Forgetting: It's Not Building Anything That Lasts.","text":"
Jose Alekhinne / March 4, 2026
A New Developer Joins Your Team Tomorrow and Clones the Repo: What Do They Know?
If the answer depends on which machine they're using, which agent they're running, or whether someone remembered to paste the right prompt: that's not memory.
That's an accident waiting to be forgotten.
Every AI coding agent today has the same fundamental design: it starts fresh.
You open a session, load context, do some work, close the session. Whatever the agent learned (about your codebase, your decisions, your constraints, your preferences) evaporates.
The obvious fix seems to be \"memory\":
Give the agent a \"notepad\";
Let it write things down;
Next session, hand it the notepad.
Problem solved...
...except it isn't.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-notepad-isnt-the-problem","level":2,"title":"The Notepad Isn't the Problem","text":"
Memory is a runtime concern. It answers a legitimate question:
How do I give this stateless process useful state?
That's a real problem. Worth solving. And it's being solved: Agent memory systems are shipping. Agents can now write things down and read them back from the next session: That's genuine progress.
But there's a different problem that memory doesn't touch:
The project itself accumulates knowledge that has nothing to do with any single session.
Why was the auth system rewritten? Ask the developer who did it (if they're still here).
Why does the deployment script have that strange environment flag? There was a reason... once.
What did the team decide about error handling when they hit that edge case two months ago?
Gone!
Not because the agent forgot.
Because the project has no memory at all.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-memory-stack","level":2,"title":"The Memory Stack","text":"
Agent memory is not a single thing. Like any computing system, it forms a hierarchy of persistence, scope, and reliability:
| Layer | Analogy | Example |
| --- | --- | --- |
| L1: Ephemeral context | CPU registers | Current prompt, conversation |
| L2: Tool-managed memory | CPU cache | Agent memory files |
| L3: System memory | RAM/filesystem | Project knowledge base |
L1 is what the agent sees right now: the prompt, the conversation history, the files it has open. It's fast, it's rich, and it vanishes when the session ends.
L2 is what agent memory systems provide: a per-machine notebook that survives across sessions. It's a cache: useful, but local. And like any cache, it has limits:
Per-machine: it doesn't travel with the repository.
Unstructured: decisions, learnings, and tasks are undifferentiated notes.
Ungoverned: the agent self-curates with no quality controls, no drift detection, no consolidation.
Invisible to the team: a new developer cloning the repo gets none of it.
The problem is that most current systems stop here.
They give the agent a notebook.
But they never give the project a memory.
The result is predictable: every new session begins with partial amnesia, and every new developer begins with partial archaeology.
L3 is system memory: structured, versioned knowledge that lives in the repository and travels wherever the code travels.
The layers are complementary, not competitive.
But the relationship between them needs to be designed, not assumed.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#software-systems-accumulate-knowledge","level":2,"title":"Software Systems Accumulate Knowledge","text":"
Software projects quietly accumulate knowledge over time.
Some of it lives in code. Much of it does not:
Architectural tradeoffs.
Debugging discoveries.
Conventions that emerged after painful incidents.
Constraints that aren't visible in the source but shape every line written afterward.
Organizations accumulate this kind of knowledge too:
Slowly, implicitly, often invisibly.
When there is no durable place for it to live, it leaks away. And the next person rediscovers the same lessons the hard way.
This isn't a memory problem. It's an infrastructure problem.
We wrote about this in Context as Infrastructure: context isn't a prompt you paste at the start of a session.
Context is a persistent layer you maintain like any other piece of infrastructure.
Context as Infrastructure made the argument structurally. This post makes it through time and team continuity:
The knowledge a team accumulates over months cannot fit in any single agent's notepad, no matter how large the notepad becomes.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#what-infrastructure-means","level":2,"title":"What Infrastructure Means","text":"
Infrastructure isn't about the present. It's about continuity across time, people, and machines.
git didn't solve the problem of \"what am I editing right now?\"; it solved the problem of \"how does collaborative work persist, travel, and remain coherent across everyone who touches it?\"
Your editor's undo history is runtime state.
Your git history is infrastructure.
Runtime state and infrastructure have completely different properties:
| Runtime state | Infrastructure |
| --- | --- |
| Lives in the session | Lives in the repository |
| Per-machine | Travels with git clone |
| Serves the individual | Serves the team |
| Managed by the runtime | Managed by the project |
| Disappears | Accumulates |
You wouldn't store your architecture decisions in your editor's undo history.
You'd commit them.
The same logic applies to the knowledge your team accumulates working with AI agents.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-git-clone-test","level":2,"title":"The git clone Test","text":"
Here's a simple test for whether something is memory or infrastructure:
If a new developer joins your team tomorrow and clones the repository, do they get it?
If no: it's memory: It lives somewhere on someone's machine, scoped to their runtime, invisible to everyone else.
If yes: it's infrastructure: It travels with the project. It's part of what the codebase is, not just what someone currently knows about it.
Decisions. Conventions. Architectural rationale. Hard-won debugging discoveries. The constraints that aren't in the code but shape every line of it.
None of these belong in someone's session notes.
They belong in the repository:
Versioned;
Reviewable;
Accessible to every developer (and every agent) who works on the project.
The team onboarding story makes this concrete:
New developer joins team. Clones repo.
Gets all accumulated project decisions, learnings, conventions, architecture, and task state immediately.
There's no step 3.
No setup; no "ask Sarah about the auth decision"; no re-discovery of solved problems.
Agent memory gives that developer nothing.
Infrastructure gives them everything the team has learned.
Clone the repo. Get the knowledge.
That's the test. That's the difference.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#what-gets-lost-without-infrastructure-memory","level":2,"title":"What Gets Lost Without Infrastructure Memory","text":"
Consider the knowledge that accumulates around a non-trivial project:
The decision to use library X over Y, and the three reasons the team decided Y wasn't acceptable.
The constraint that service A cannot call service B synchronously, discovered after a production incident.
The convention that all new modules implement a specific interface, and why that convention exists.
The tasks currently in progress, blocked, or waiting on a dependency.
The experiments that failed, so nobody runs them again.
None of this is in the code.
None of it fits neatly in a commit message.
None of it survives a developer leaving the team, a laptop dying, or a new agent session starting.
Without structured project memory:
Teams re-derive things they've already derived;
Agents make decisions that contradict decisions already made;
New developers ask questions that were answered months ago.
The project accumulates knowledge that immediately begins to leak.
The real problem isn't that agents forget.
The real problem is that the project has no persistent cognitive structure.
We explored this in The Last Question: Asimov's story about a question asked across millennia, where each new intelligence inherits the output but not the continuity. The same pattern plays out in software projects on a smaller timescale:
Context disappears with the people who held it;
The next session inherits the code but not the reasoning.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#infrastructure-is-boring-thats-the-point","level":2,"title":"Infrastructure Is Boring. That's the Point.","text":"
Good infrastructure is invisible:
You don't think about the filesystem while writing code.
You don't think about git's object model when you commit.
The infrastructure is just there: reliable, consistent, quietly doing its job.
Project memory infrastructure should work the same way.
It should live in the repository, committed alongside the code. It should be readable by any agent or human working on the project. It should have structure: not a pile of freeform notes, but typed knowledge:
Decisions with rationale.
Tasks with lifecycle.
Conventions with a purpose.
Learnings that can be referenced and consolidated.
And it should be maintained, not merely accumulated:
The Attention Budget applies here: unstructured notes grow until they overflow whatever container holds them. Structured, governed knowledge stays useful because it's curated, not just appended.
Over time, it becomes part of the project itself: something developers rely on without thinking about it.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-cooperative-layer","level":2,"title":"The Cooperative Layer","text":"
Here's where it gets interesting.
Agent memory systems and project infrastructure don't have to be separate worlds.
The most powerful relationship isn't competition;
It is not even \"coopetition\";
The most powerful relationship is bidirectional cooperation.
Agent memory is good at capturing things \"in the moment\": the quick observation, the session-scoped pattern, the \"I should remember this\" note.
That's valuable. That's L2 doing its job.
But those notes shouldn't stay in L2 forever.
The ones worth keeping should flow into project infrastructure:
This works in both directions: Project infrastructure can push curated knowledge back into agent memory, so the agent loads it through its native mechanism.
No special tooling needed for basic knowledge delivery.
The agent doesn't even need to know the infrastructure exists. It simply loads its memory and finds more knowledge than it wrote.
This is cooperative, not adjacent: The infrastructure manages knowledge; the agent's native memory system delivers it. Each layer does what it's good at.
The result: agent memory becomes a device driver for project infrastructure. Another input source. And the more agent memory systems exist (across different tools, different models, different runtimes), the more valuable a unified curation layer becomes.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#a-layer-that-doesnt-exist-yet","level":2,"title":"A Layer That Doesn't Exist Yet","text":"
Most projects today have no infrastructure for their accumulated knowledge:
Agents keep notes.
Developers keep notes.
Sometimes those notes survive.
Often they don't.
But the repository (the place where the project actually lives) has nowhere for that knowledge to go.
That missing layer is what ctx builds: a version-controlled, structured knowledge layer that lives in .context/ alongside your code and travels wherever your repository travels.
Not another memory feature.
Not a wrapper around an agent's notepad.
Infrastructure. The kind that survives sessions, survives team changes, survives the agent runtime evolving underneath it.
The agent's memory is the agent's problem.
The project's memory is an infrastructure problem.
And infrastructure belongs in the repository.
If You Remember One Thing From This Post...
Prompts are conversations: Infrastructure persists.
Your AI doesn't need a better notepad. It needs a filesystem:
versioned, structured, budgeted, and maintained.
The best context is the context that was there before you started the session.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-arc","level":2,"title":"The Arc","text":"
This post extends the argument made in Context as Infrastructure. That post explained how to structure persistent context (filesystem, separation of concerns, persistence tiers). This one explains why that structure matters at the team level, and where agent memory fits in the stack.
Together they sit in a sequence that has been building since the origin story:
The Attention Budget: the resource you're managing
Context as Infrastructure: the system you build to manage it
Agent Memory Is Infrastructure (this post): why that system must outlive the agent runtime
The Last Question: what happens when it does
The thread running through all of them: persistence is not a feature. It's a design constraint.
Systems that don't account for it eventually lose the knowledge they need to function.
See also: Context as Infrastructure: the architectural companion that explains how to structure the persistent layer this post argues for.
See also: The Last Question: the same argument told through Asimov, substrate migration, and what it means to build systems where sessions don't reset.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/","level":1,"title":"ctx v0.8.0: The Architecture Release","text":"
You can't localize what you haven't externalized.
You can't integrate what you haven't separated.
You can't scale what you haven't structured.
Jose Alekhinne / March 23, 2026
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-starting-point","level":2,"title":"The Starting Point","text":"
This release matters if:
you build tools that AI agents modify daily;
you care about long-lived project memory that survives sessions;
you've felt codebases drift faster than you can reason about them.
v0.6.0 shipped the plugin architecture: hooks and skills as a Claude Code plugin, shell scripts replaced by Go subcommands.
The binary worked. The tests passed. The docs were comprehensive.
But inside, the codebase was held together by convention and goodwill:
Command packages mixed Cobra wiring with business logic.
Output functions lived next to the code that computed what to output.
Error constructors were scattered across per-package err.go files. And every user-facing string was a hardcoded English literal buried in a .go file.
v0.8.0 is what happens when you stop adding features and start asking: \"What would this codebase look like if we designed it today?\"
374 commits. 1,708 Go files touched. 80,281 lines added, 21,723 removed. Five weeks of restructuring.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-three-pillars","level":2,"title":"The Three Pillars","text":"","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#1-every-package-gets-a-taxonomy","level":3,"title":"1. Every Package Gets a Taxonomy","text":"
Before v0.8.0, a CLI package like internal/cli/pad/ was a flat directory. cmd.go created the cobra command, run.go executed it, and helper functions accumulated at the bottom of whichever file seemed closest.
The rule is simple: cmd/ directories contain only cmd.go and run.go. Helpers belong in core/. Output belongs in internal/write/pad/. Types shared across packages belong in internal/entity/.
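Applied to the pad package, that rule yields a layout roughly like this (a sketch assembled from the rule above; exact file names vary):

```
internal/cli/pad/
├── cmd/
│   ├── cmd.go      # cobra wiring only
│   └── run.go      # execution only
└── core/           # helpers and business logic

internal/write/pad/  # output functions
internal/entity/     # types shared across packages
```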
24 CLI packages were restructured this way.
Not incrementally;
not \"as we touch them.\"
All of them, in one sustained push.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#2-every-string-gets-a-key","level":3,"title":"2. Every String Gets a Key","text":"
The second pillar was string externalization.
Before v0.8.0, command descriptions were hardcoded Go string literals at the call site.
Every command description, flag description, and user-facing text string is now a YAML lookup.
105 command descriptions in commands.yaml.
All flag descriptions in flags.yaml.
879 text constants verified by an exhaustive test that checks every single TextDescKey resolves to a non-empty YAML value.
Why?
Not because we're shipping a French translation tomorrow.
Because externalization forces you to find every string. And finding them is the hard part. The translation is mechanical; the archaeology is not.
Along the way, we eliminated hardcoded pluralization (replacing format.Pluralize() with explicit singular/plural key pairs), replaced Unicode escape sequences with named config/token constants, and normalized every import alias to camelCase.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#3-everything-gets-a-protocol","level":3,"title":"3. Everything Gets a Protocol","text":"
The third pillar was the MCP server. Model Context Protocol allows any MCP-compatible AI tool (not just Claude Code) to read and write .context/ files through a standard JSON-RPC 2.0 interface.
4 prompts: agent context packet, constitution review, tasks review, and a getting-started guide
Resource subscriptions: clients get notified when context files change
Session state: the server tracks which client is connected and what they've accessed
In practice, this means an agent in Cursor can add a decision to .context/DECISIONS.md and an agent in Claude Code can immediately consume it; no glue code, no copy-paste, no tool-specific integration.
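A client reading that file would issue a standard JSON-RPC 2.0 request; MCP defines the resources/read method, though the resource URI shown here is illustrative and depends on how the server exposes .context/ files:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": { "uri": "file:///.context/DECISIONS.md" }
}
```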
The server was also the first package to go through the full taxonomy treatment: mcp/server/ for protocol dispatch, mcp/handler/ for domain logic, mcp/entity/ for shared types, mcp/config/ split into 9 sub-packages.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-memory-bridge","level":2,"title":"The Memory Bridge","text":"
While the architecture was being restructured, a quieter feature landed: ctx memory sync.
Claude Code has its own auto-memory system. It writes observations to MEMORY.md in ~/.claude/projects/. These observations are useful but ephemeral: tied to a single tool, invisible to the codebase, lost when you switch machines.
The memory bridge connects these two worlds:
ctx memory sync mirrors MEMORY.md into .context/memory/
ctx memory diff shows what's diverged
ctx memory import promotes auto-memory entries into proper decisions, learnings, or conventions
A check-memory-drift hook nudges when MEMORY.md changes
Memory Requires ctx
Claude Code's auto-memory validates the need for persistent context.
ctx doesn't compete with it; ctx absorbs it as an input source and promotes the valuable parts into structured, version-controlled project knowledge.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#what-got-deleted","level":2,"title":"What Got Deleted","text":"
The best measure of a refactoring isn't what you added. It's what you removed.
fatih/color: the sole third-party UI dependency. Replaced by Unicode symbols. ctx now has exactly two direct dependencies: spf13/cobra and gopkg.in/yaml.v3.
format.Pluralize(): a function that tried to pluralize English words at runtime. Replaced by explicit singular/plural YAML key pairs. No more guessing whether \"entry\" becomes \"entries\" or \"entrys.\"
Legacy key migration: MigrateKeyFile() had 5 callers, full test coverage, and zero users. It existed because we once moved the encryption key path. Nobody was migrating from that era anymore. Deleted.
Per-package err.go files: the broken-window pattern: An agent sees err.go in a package, adds another error constructor. Now err.go has 30 constructors and nobody knows which are used. Consolidated into 22 domain files in internal/err/.
nolint:errcheck directives: every single one, replaced by explicit error handling. In tests: t.Fatal(err) for setup, _ = os.Chdir(orig) for cleanup. In production: defer func() { _ = f.Close() }() for best-effort close.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#before-and-after","level":2,"title":"Before and After","text":"Aspect v0.6.0 v0.8.0 CLI package structure Flat files cmd/ + core/ taxonomy Command descriptions Hardcoded Go strings YAML with DescKey lookup Output functions Mixed into core logic Isolated in write/ packages Cross-cutting types Duplicated per-package Consolidated in entity/ Error constructors Per-package err.go 22 domain files in internal/err/ Direct dependencies 3 (cobra, yaml, color) 2 (cobra, yaml) AI tool integration Claude Code only Any MCP client Agent memory Manual copy-paste ctx memory sync/import/diff Package documentation 75 packages missing doc.go All packages documented Import aliases Inconsistent (cflag, cFlag) Standardized camelCase","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#making-ai-assisted-development-easier","level":2,"title":"Making AI-Assisted Development Easier","text":"
This restructuring wasn't just for humans. It makes the codebase legible to the machines that modify it.
Named constants are searchable landmarks: When an agent sees cmdUse.DescKeyPad, it can grep for the definition, follow the chain to the YAML file, and understand the full lookup path. When it sees \"Encrypted scratchpad\" hardcoded in a .go file, it has no way to know that same string also lives in a YAML file, a test, and a help screen. Constants give the LLM a graph to traverse; literals give it a guess to make.
Small, domain-scoped packages reduce hallucination: An agent loading internal/cli/pad/core/store.go gets 50 lines of focused logic with a clear responsibility boundary. Loading a 500-line monolith means the agent has to infer which parts are relevant, and it guesses wrong more often than you'd expect. Smaller files with descriptive names act as a natural retrieval system: the agent finds the right code by finding the right file, not by scanning everything and hoping.
Taxonomy prevents duplication: When there's a write/pad/ package, the agent knows where output functions belong. When there's an internal/err/pad.go, it knows where error constructors go. Without these conventions, agents reliably create new helpers in whatever file they happen to be editing, producing the exact drift that prompted this consolidation in the first place.
The difference is concrete:
Before: an agent adds a helper function in whatever file it's editing. Next session, a different agent adds the same helper in a different file.
After: the agent finds core/ or write/ and places it correctly. The next agent finds it there.
doc.go files are agent onboarding: Each package's doc.go is a one-paragraph explanation of what the package does and why it exists. An agent loading a package reads this first. 75 packages were missing this context; now none are. The difference shows up in practice: fewer \"I'll create a helper function here\" moments when the agent understands that the helper already exists two packages over.
The irony is that AI agents were both the cause and the beneficiary of this restructuring. They created the drift by building fast without consolidating. Now the structure they work within makes it harder to drift again. The taxonomy is self-reinforcing: the more consistent the codebase, the more consistently agents modify it.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#key-commits","level":2,"title":"Key Commits","text":"Commit Change ff6cf19e Restructure all CLI packages into cmd/root + core taxonomy d295e49c Externalize command descriptions to embedded YAML 0fcbd11c Remove fatih/color, centralize constants cb12a85a MCP v0.2: tools, prompts, session state, subscriptions ea196d00 Memory bridge: sync, import, diff, journal enrichment 3bcf077d Split text.yaml into 6 domain files 3a0bae86 Split internal/err into 22 domain files 8bd793b1 Extract internal/entry for shared domain API 5b32e435 Add doc.go to all 75 packages a82af4bc Standardize import aliases: camelCase, Yoda-style","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#lessons-learned","level":2,"title":"Lessons Learned","text":"
Agents are surprisingly good at mechanical refactoring; they are surprisingly bad at knowing when to stop: The cmd/ + core/ restructuring was largely agent-driven. But agents reliably introduce gofmt issues during bulk renames, rename functions beyond their scope, and create new files without deleting old ones. Every agent-driven refactoring session needed a human audit pass.
Externalization is archaeology: The hard part of moving strings to YAML wasn't writing YAML. It was finding 879 strings scattered across 1,500 Go files. Each one required a judgment call: is this user-facing? Is this a format pattern? Is this a constant that belongs in config/ instead?
Delete legacy code instead of maintaining it: MigrateKeyFile had test coverage. It had callers. It had documentation. It had zero users. We maintained it for weeks before realizing that the migration window had closed months ago.
Convention enforcement needs mechanical verification: Writing \"use camelCase aliases\" in CONVENTIONS.md doesn't prevent cflag from appearing in the next commit. The lint-drift script catches what humans forget; the planned AST-based audit tests will catch what the lint-drift script can't express.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#whats-next","level":2,"title":"What's Next","text":"
v0.8.0 wasn't about features. It was about making future features inevitable. The next cycle focuses on what the foundation enables:
AST-based audit tests: replace shell grep with Go tests that understand types, call sites, and import graphs (spec: specs/ast-audit-tests.md)
Localization: with every string in YAML, the path to multi-language support is mechanical
MCP v0.3: expand tool coverage, add prompt templates for common workflows
Memory publish: bidirectional sync that pushes curated .context/ knowledge back into Claude Code's MEMORY.md
The architecture is ready. The strings are externalized. The protocol is standard. Now it's about what you build on top.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-arc","level":2,"title":"The Arc","text":"
This is the seventh post in the ctx blog series. The arc so far:
The Attention Budget: why context windows are a scarce resource
Before Context Windows, We Had Bouncers: the IRC lineage of context engineering
Context as Infrastructure: treating context as persistent files, not ephemeral prompts
When a System Starts Explaining Itself: the journal as a first-class artifact
The Homework Problem: what happens when AI writes code but humans own the outcome
Agent Memory Is Infrastructure: L2 memory vs L3 project knowledge
The Architecture Release (this post): what it looks like when you redesign the internals
We Broke the 3:1 Rule: the consolidation debt behind this release
See also: Agent Memory Is Infrastructure: the memory bridge feature in this release is the first implementation of the L2-to-L3 promotion pipeline described in that post.
See also: We Broke the 3:1 Rule: the companion post explaining why this release needed 181 consolidation commits and 18 days of cleanup.
Systems don't scale because they grow. They scale because they stop drifting.
Full changelog: v0.6.0...v0.8.0
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/","level":1,"title":"We Broke the 3:1 Rule","text":"
The best time to consolidate was after every third session. The second best time is now.
Jose Alekhinne / March 23, 2026
The rule was simple: three feature sessions, then one consolidation session.
The Architecture Release shows the result: This post shows the cost.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-rule-we-wrote","level":2,"title":"The Rule We Wrote","text":"
In The 3:1 Ratio, I documented a rhythm that worked during ctx's first month: three feature sessions, then one consolidation session. The evidence was clear. The rule was simple.
The math checked out.
And then we ignored it for five weeks.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-happened","level":2,"title":"What Happened","text":"
After v0.6.0 shipped on February 16, the feature pipeline was irresistible. The MCP server spec was ready. The memory bridge design was done. Webhook notifications had been deferred twice. The VS Code extension needed 15 new commands. The sysinfo package was overdue...
Each feature was important. Each feature was \"just one more session.\" Each feature pushed the consolidation session one day further out.
The git history tells the story in two numbers:
Phase Dates Commits Duration Feature run Feb 16 - Mar 5 198 17 days Consolidation run Mar 5 - Mar 23 181 18 days
198 feature commits before a single consolidation commit. If the 3:1 rule says consolidate every 4th session, we consolidated after the 66th.
The Actual Ratio
The ratio wasn't 3:1. It was 1:1.
We spent as much time cleaning up as we did building.
The consolidation run took 18 days: longer than the feature run itself.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-compounded","level":2,"title":"What Compounded","text":"
The 3:1 post warned about compounding. Here is what compounding actually looked like at scale.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-string-problem","level":3,"title":"The String Problem","text":"
By March 5, there were 879 user-facing strings scattered across 1,500 Go files. Not because anyone decided to put them there. Because each feature session added 10-15 strings, and nobody stopped to ask \"should these be in YAML?\"
Finding them all took longer than externalizing them. The archaeology was the cost, not the migration.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-taxonomy-problem","level":3,"title":"The Taxonomy Problem","text":"
24 CLI packages had accumulated their own conventions. Some put cobra wiring in cmd.go. Some put it in root.go. Some mixed business logic with command registration. Some had helpers at the bottom of run.go. Some had separate util.go files.
At peak drift, adding a feature meant first figuring out which of three competing patterns this package was using.
Restructuring one package into cmd/root/ + core/ took 15 minutes. Restructuring 24 of them took days, because each one had slightly different conventions to untangle.
If we had restructured every 4th package as it was built, the taxonomy would have emerged naturally.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-type-problem","level":3,"title":"The Type Problem","text":"
Cross-cutting types like SessionInfo, ExportParams, and ParserResult were defined in whichever package first needed them. By March 5, the same types were imported through 3-4 layers of indirection, creating import cycles that only a new internal/entity package could break.
The entity package extracted 30+ types from 12 packages. Each extraction risked breaking imports in packages we hadn't touched in weeks.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-error-problem","level":3,"title":"The Error Problem","text":"
Per-package err.go files had grown into a broken-window pattern:
An agent sees err.go in a package, adds another error constructor. By March 5, there were error constructors scattered across 22 packages with no central inventory. The consolidation into internal/err/ domain files required tracing every error through every caller.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-output-problem","level":3,"title":"The Output Problem","text":"
Output functions (cmd.Println, fmt.Fprintf) were mixed into business logic. When we decided output belongs in write/ packages, we had to extract functions from every CLI package. The Phase WC baseline commit (4ec5999) marks the starting point of this migration. 181 commits later, it was done.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-compound-interest-math","level":2,"title":"The Compound Interest Math","text":"
The 3:1 rule assumes consolidation sessions of roughly equal size to feature sessions. Here is what happens when you skip:
Consolidation cadence Feature sessions Consolidation sessions Total Every 4th (3:1) 48 16 64 Every 10th 48 ~8 ~56 Never (what we did) 198 commits 181 commits 379
The Takeaway
You don't save consolidation work by skipping it:
You increase its cost.
Skipping consolidation doesn't save time: It borrows it.
The interest rate is nonlinear: The longer you wait, the more each individual fix costs, because fixes interact with other unfixed drift.
Renaming a constant in week 2 touches 3 files. Renaming it in week 6 touches 15, because five features built on the original name.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-consolidation-actually-looked-like","level":2,"title":"What Consolidation Actually Looked Like","text":"
The 18-day consolidation run wasn't one sweep. It was a sequence of targeted campaigns, each revealing the next:
Week 1 (Mar 5-11): Error consolidation and write/ migration. Move output functions out of core/. Split monolithic errors.go into 22 domain files. Remove fatih/color. This exposed the scope of the string problem.
Week 2 (Mar 12-18): String externalization. Create commands.yaml, flags.yaml, split text.yaml into 6 domain files. Add 879 DescKey/TextDescKey constants. Build exhaustive test. Normalize all import aliases to camelCase. This exposed the taxonomy problem.
Week 3 (Mar 19-23): Taxonomy enforcement. Singularize command directories. Add doc.go to all 75 packages. Standardize import aliases project-wide. Fix lint-drift false positives. This was the \"polish\" phase, except it took 5 days because the inconsistencies had compounded across 461 packages.
Each week's work would have been a single session if done incrementally.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#lessons-again","level":2,"title":"Lessons (Again)","text":"
The 3:1 post listed the symptoms of drift. This post adds the consequences of ignoring them:
Consolidation is not optional; it is deferred or paid: We didn't avoid 16 consolidation sessions by skipping them. We compressed them into 18 days of uninterrupted cleanup. The work was the same; the experience was worse.
Feature velocity creates an illusion of progress: 198 commits felt productive. But the codebase on March 5 was harder to modify than the codebase on February 16, despite having more features.
Speed Without Structure
Speed without structure is negative progress.
Agents amplify both building and debt: The same AI that can restructure 24 packages in a day can also create 24 slightly different conventions in a day. The 3:1 rule matters more with AI-assisted development, not less.
The consolidation baseline is the most important commit to record: We tracked ours in TASKS.md (4ec5999). Without that marker, knowing where to start the cleanup would have been its own archaeological expedition.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-updated-rule","level":2,"title":"The Updated Rule","text":"
The 3:1 ratio still works. We just didn't follow it. The updated practice:
After every 3rd feature session, schedule consolidation. Not \"when it feels right.\" Not \"when things get bad.\" After the 3rd session.
Record the baseline commit. When you start a consolidation phase, write down the commit hash. It marks where the debt starts.
Run make audit before feature work. If it doesn't pass, you are already in debt. Consolidate before building.
Treat consolidation as a feature. It gets a branch. It gets commits. It gets a blog post. It is not overhead; it is the work that makes the next three features possible.
The Rule
The 3:1 ratio is not aspirational: It is structural.
Ignore consolidation, and the system will schedule it for you.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-arc","level":2,"title":"The Arc","text":"
This is the eighth post in the ctx blog series:
The Attention Budget: why context windows are a scarce resource
Before Context Windows, We Had Bouncers: the IRC lineage of context engineering
Context as Infrastructure: treating context as persistent files, not ephemeral prompts
When a System Starts Explaining Itself: the journal as a first-class artifact
The Homework Problem: what happens when AI writes code but humans own the outcome
Agent Memory Is Infrastructure: L2 memory vs L3 project knowledge
The Architecture Release: what v0.8.0 looks like from the inside
We Broke the 3:1 Rule (this post): what happens when you don't consolidate
See also: The 3:1 Ratio: the original observation. This post is the empirical follow-up, five weeks and 379 commits later.
Key commits marking the consolidation arc:
Commit Milestone 4ec5999 Phase WC baseline (consolidation starts) ff6cf19e All CLI packages restructured into cmd/ + core/ d295e49c All command descriptions externalized to YAML 3a0bae86 Error package split into 22 domain files 0fcbd11c fatih/color removed; 2 dependencies remain 5b32e435 doc.go added to all 75 packages a82af4bc Import aliases standardized project-wide 692f86cd lint-drift false positives fixed; make audit green","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/","level":1,"title":"Code Structure as an Agent Interface","text":"","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#what-19-ast-tests-taught-us-about-agent-readable-code","level":2,"title":"What 19 AST Tests Taught Us About Agent-Readable Code","text":"
When an agent sees token.Slash instead of \"/\", it cannot pattern-match against the millions of strings.Split(s, \"/\") calls in its training data and coast on statistical inference. It has to actually look up what token.Slash is.
Jose Alekhinne / April 2, 2026
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#how-it-began","level":2,"title":"How It Began","text":"
We set out to replace a shell script with Go tests.
We ended up discovering that \"code quality\" and \"agent readability\" are the same thing.
This is not about linting. This is about controlling how an agent perceives your system.
One term will recur throughout this post, so let me pin it down:
Agent Readability
Agent Readability is the degree to which a codebase can be understood through structured traversal, not statistical pattern matching.
This is the story of 19 AST-based audit tests, a single-day session that touched 300+ files, and what happens when you treat your codebase's structure as an interface for the machines that read it.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-shell-script-problem","level":2,"title":"The Shell Script Problem","text":"
ctx had a file called hack/lint-drift.sh. It ran five checks using grep and awk: literal \"\\n\" strings, cmd.Printf calls outside the write package, magic directory strings in filepath.Join, hardcoded .md extensions, and DescKey-to-YAML linkage.
It worked. Until it didn't.
The script had three structural weaknesses that kept biting us:
No type awareness. It could not distinguish a Use* constant from a DescKey* constant, causing 71 false positives in one run.
Fragile exclusions. When a constant moved from token.go to whitespace.go, the exclusion glob broke silently.
Ceiling on detection. Checks that require understanding call sites, import graphs, or type relationships are impossible in shell.
We wrote a spec to replace all five checks with Go tests using go/ast and go/packages. The tests would run as part of go test ./...: no separate script, no separate CI step.
What we did not expect was where the work would lead.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-ast-migration","level":2,"title":"The AST Migration","text":"
The pattern for each test is identical:
func TestNoLiteralWhitespace(t *testing.T) {\n pkgs := loadPackages(t)\n var violations []string\n for _, pkg := range pkgs {\n for _, file := range pkg.Syntax {\n ast.Inspect(file, func(n ast.Node) bool {\n // check node, append to violations\n return true\n })\n }\n }\n for _, v := range violations {\n t.Error(v)\n }\n}\n
Load packages once via sync.Once, walk every syntax tree, collect violations, report. The shared helpers (loadPackages, isTestFile, posString) live in helpers_test.go. Each test is a _test.go file in internal/audit/, producing no binary output and not importable by production code.
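A self-contained miniature of the walk-and-collect pattern, using only stdlib go/parser on a toy source string instead of go/packages on the module (the real tests' exemption logic is far richer; this sketch only exempts import paths):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findStringLiterals reports the position of every string literal in
// src, skipping import paths -- a toy version of TestNoMagicStrings.
func findStringLiterals(src string) []string {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	var violations []string
	ast.Inspect(file, func(n ast.Node) bool {
		if _, ok := n.(*ast.ImportSpec); ok {
			return false // import paths are exempt
		}
		if lit, ok := n.(*ast.BasicLit); ok && lit.Kind == token.STRING {
			// The real tests also exempt const blocks, struct
			// tags, and generated files at this point.
			violations = append(violations,
				fset.Position(lit.Pos()).String()+": "+lit.Value)
		}
		return true
	})
	return violations
}

func main() {
	src := `package demo
import "strings"
func Split(s string) []string { return strings.Split(s, "/") }
`
	// Flags the "/" literal but not the "strings" import path.
	for _, v := range findStringLiterals(src) {
		fmt.Println(v)
	}
}
```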
In a single session, we built 13 new tests on top of 6 that already existed, bringing the total to 19:
Test What it catches TestNoLiteralWhitespace \"\\n\", \"\\t\", '\\r' outside config/token/ TestNoNakedErrors fmt.Errorf/errors.New outside internal/err/ TestNoStrayErrFiles err.go files outside internal/err/ TestNoRawLogging fmt.Fprint*(os.Stderr), log.Print* outside internal/log/ TestNoInlineSeparators strings.Join with literal separator arg TestNoStringConcatPaths Path-like variables built with + TestNoStutteryFunctions write.WriteJournal repeats package name TestDocComments Missing doc comments on any declaration TestNoMagicValues Numeric literals outside const definitions TestNoMagicStrings String literals outside const definitions TestLineLength Lines exceeding 80 characters TestNoRegexpOutsideRegexPkg regexp.MustCompile outside config/regex/
Plus the six that preceded the session: TestNoErrorsAs, TestNoCmdPrintOutsideWrite, TestNoExecOutsideExecPkg, TestNoInlineRegexpCompile, TestNoRawFileIO, TestNoRawPermissions.
The migration touched 300+ files across 25 commits.
Not because the tests were hard to write, but because every test we wrote revealed violations that needed fixing.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-tightening-loop","level":2,"title":"The Tightening Loop","text":"
The most instructive part was not writing the tests. It was the iterative tightening.
The following process was repeated for every test:
Write the test with reasonable exemptions
Run it, see violations
Fix the violations (migrate to config constants)
The human reviews the result
The human spots something the test missed
Fix the test first, verify it catches the issue
Fix the newly caught violations
Repeat from step 4
This loop drove the tests from \"basically correct\" to \"actually useful\".
Three examples:
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-1-the-local-const-loophole","level":3,"title":"Example 1: The Local Const Loophole","text":"
TestNoMagicValues initially exempted local constants inside function bodies. This let code like this pass:
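A reconstruction of the kind of code that passed; the constant name and value come from the discussion here, but the surrounding function body is a sketch, not ctx's actual code:

```go
package main

import "fmt"

// truncateDescription shows what the local-const exemption let pass:
// descMaxWidth is named, but it is still a magic number invisible
// outside this function body. (Sketch, not ctx's actual code.)
func truncateDescription(s string) string {
	const descMaxWidth = 70
	if len(s) <= descMaxWidth {
		return s
	}
	return s[:descMaxWidth]
}

func main() {
	long := make([]byte, 100)
	for i := range long {
		long[i] = 'x'
	}
	fmt.Println(len(truncateDescription(string(long)))) // 70
}
```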
The test saw a const definition and moved on. But const descMaxWidth = 70 on the line before its only use is just renaming a magic number. The 70 should live in config/format/TruncateDescription where it is discoverable, reusable, and auditable.
We removed the local const exemption. The test caught it. The value moved to config.
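The migrated shape, sketched with a package-level stand-in for the config/format constant (names are illustrative, not ctx's actual API):

```go
package main

import "fmt"

// TruncateDescription stands in for the constant living in
// config/format: discoverable, reusable, auditable. Illustrative
// name and placement, not ctx's actual code.
const TruncateDescription = 70

// truncateDescription now reads its width from the config constant,
// so the 70 has exactly one grep-able home.
func truncateDescription(s string) string {
	if len(s) <= TruncateDescription {
		return s
	}
	return s[:TruncateDescription]
}

func main() {
	fmt.Println(truncateDescription("short input"))
}
```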
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-2-the-single-character-dodge","level":3,"title":"Example 2: The Single-Character Dodge","text":"
TestNoMagicStrings initially exempted all single-character strings as \"structural punctuation\".
This let \"/\", \"-\", and \".\" pass everywhere.
But \"/\" is a directory separator. It is OS-specific and a security surface.
\"-\" used in strings.Repeat(\"-\", width) is creating visual output, not acting as a delimiter.
\".\" in strings.SplitN(ver, \".\", 3) is a version separator.
None of these are \"just punctuation\": They are domain values with specific meanings.
We removed the blanket exemption: 30 violations surfaced.
Every one was a real magic value that should have been token.Slash, token.Dash, or token.Dot.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-3-the-replacer-versus-regex","level":3,"title":"Example 3: The Replacer versus Regex","text":"
Six token references and a NewReplacer allocation. The magic values were gone, but we had replaced them with token soup: structure without abstraction.
The correct tool was a regex:
// In config/regex/file.go:\nvar MermaidUnsafe = regexp.MustCompile(`[/.\\-]`)\n\n// In the caller:\nfunc MermaidID(pkg string) string {\n return regex.MermaidUnsafe.ReplaceAllString(\n pkg, token.Underscore,\n )\n}\n
One config regex, one call. The regex lives in config/regex/file.go where every other compiled pattern lives. An agent reading the code sees regex.MermaidUnsafe and immediately knows: this is a sanitization pattern, it lives in the regex registry, and it has a name that explains its purpose.
Clean is better than clever.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#a-before-and-after","level":2,"title":"A Before-and-After","text":"
To make the agent-readability claim concrete, consider one function through the full transformation.
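The starting point, reconstructed from the surrounding description (six inline string literals, one Replacer per call; a sketch, not ctx's exact code):

```go
package main

import (
	"fmt"
	"strings"
)

// MermaidID, pre-migration sketch: six string literals whose meaning
// (path separator? version dot? visual dash?) lives only in the
// author's head. Reconstruction, not ctx's actual code.
func MermaidID(pkg string) string {
	return strings.NewReplacer(
		"/", "_",
		".", "_",
		"-", "_",
	).Replace(pkg)
}

func main() {
	fmt.Println(MermaidID("internal/cli/pad")) // internal_cli_pad
}
```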
An agent reading the original version sees six string literals. To understand what the function does, it must: (1) parse the NewReplacer pair semantics, (2) infer that /, ., - are being replaced, (3) guess why, (4) hope the guess is right.
There is nothing to follow. No import to trace. No name to search. The meaning is locked inside the function body.
An agent reading the migrated version sees two named references: regex.MermaidUnsafe and token.Underscore.
To understand the function, it can: (1) look up MermaidUnsafe in config/regex/file.go and see the pattern [/.\\-] with a doc comment explaining it matches invalid Mermaid characters, (2) look up Underscore in config/token/delim.go and see it is the replacement character.
The agent now has: a named pattern, a named replacement, a package location, documentation, and neighboring context (other regex patterns, other delimiters).
It got all of this for free by following just two references.
The indirection is not an overhead. It is the retrieval query.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-principles","level":2,"title":"The Principles","text":"
You are not just improving code quality. You are shaping the input space that determines how an LLM can reason about your system.
Every structural constraint we enforce converts implicit semantics into explicit structure.
LLMs struggle when meaning is implicit and patterns are statistical.
They thrive when meaning is explicit and structure is navigable.
Here is what we learned, organized into three categories.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#cognitive-constraints","level":3,"title":"Cognitive Constraints","text":"
These force agents (and humans) to think harder.
Indirection acts as a built-in retrieval mechanism:
Moving magic values to config forces the agent to follow the reference. errMemory.WriteFile(cause) tells the agent \"there is a memory error package, go look.\" fmt.Errorf(\"writing MEMORY.md: %w\", cause) inlines everything and makes the call graph invisible. The indirection IS the retrieval query.
Unfamiliar patterns force reasoning:
When an agent sees token.Slash instead of \"/\", it cannot coast on corpus frequency. It has to actually look up what token.Slash is, which forces it through the dependency graph, which means it encounters documentation and neighboring constants, which gives it richer context. You are exploiting the agent's weakness (over-reliance on training data) to make it behave more carefully.
Documentation helps everyone:
Extensive documentation helps humans reading the code, agents reasoning about it, and RAG systems indexing it.
Our TestDocComments check added 308 doc comments in one commit. Every function, every type, every constant block now has a doc comment.
This is not busywork: it is the content that agents and embeddings consume.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#structural-constraints","level":3,"title":"Structural Constraints","text":"
These shape the codebase into a navigable graph.
Shorter files save tokens:
Forcing private helper functions out of main files makes the main file shorter. An agent loading a file spends fewer tokens on boilerplate and more on the logic that matters.
Fixed-width constraints force decomposition:
A function that cannot be expressed in 80 columns is either too deeply nested (extract a helper), has too many parameters (introduce a struct), or has a variable name that is too long (rethink the abstraction).
The constraint forces structural improvements that happen to also make the code more parseable.
Chunk-friendly structure helps RAG:
Code intelligence tools chunk files for embedding and retrieval. Short, well-documented, single-responsibility files produce better chunks than monolithic files with mixed concerns.
The structural constraints create files that RAG systems can index effectively.
Centralization creates debuggable seams:
All error handling in internal/err/, all logging in internal/log/, all file operations in internal/io/. One place to debug, one place to test, one place to see patterns. An agent analyzing \"how does this project handle errors\" gets one answer from one package, not 200 scattered fmt.Errorf calls.
Private functions become public patterns:
When you extract a private function to satisfy a constraint, it often ends up as a semi-public function in a core/ package. Then you realize it is generic enough to be factored into a purpose-specific module.
The constraint drives discovery of reusable abstractions hiding inside monolithic functions.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#operational-benefits","level":3,"title":"Operational Benefits","text":"
These pay dividends in daily development.
Single-edit renames:
Renaming a flag is one edit to a config constant instead of find-and-replace across 30,000 lines with possible misses. grep token.Slash gives you every place that uses a forward slash semantically.
grep \"/\" gives you noise.
Blast radius containment:
When every magic value is a config constant, a search is one result. This matters for impact analysis, security audits, and agents trying to understand \"what uses this\".
Compile-time contract enforcement:
When errMemory.WriteFile exists, the compiler guarantees the error message exists and the call signature is correct. An inline fmt.Errorf can have a typo in the format string and nothing catches it until runtime. Centralization turns runtime failures into compile errors.
Semantic git blame:
When token.Slash is used everywhere and someone changes its value, git blame on the config file shows exactly when and why.
With inline \"/\" scattered across 30 files, the history is invisible.
Test surface reduction:
Centralizing into internal/err/, internal/io/, internal/config/ means you test behavior once at the boundary and trust the callers.
You do not need 30 tests for 30 fmt.Errorf calls. You need 1 test for errMemory.WriteFile and 30 trivial call-site audits, which is exactly what these AST tests provide.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-numbers","level":2,"title":"The Numbers","text":"
One session. 25 commits. The raw stats:
Metric Count New audit tests 13 Total audit tests 19 Files touched 300+ Magic values migrated 90+ Functions renamed 17 Doc comments added 323 Lines rewrapped to 80 chars 190 Config constants created 40+ Config regexes created 3
Every number represents a violation that existed before the test caught it. The tests did not create work: they revealed work that was already needed.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-uncomfortable-implication","level":2,"title":"The Uncomfortable Implication","text":"
None of this is Go-specific.
If an AI agent interacts with your codebase, your codebase already is an interface. You just have not designed it as one.
If your error messages are scattered across 200 files, an agent cannot reason about error handling as a concept. If your magic values are inlined, an agent cannot distinguish \"this is a path separator\" from \"this is a division operator.\" If your functions are named write.WriteJournal, the agent wastes tokens on redundant information.
What we discovered, through the unglamorous work of writing lint tests and migrating string literals, is that the structural constraints software engineering has valued for decades are exactly the constraints that make code readable to machines.
This is not a coincidence: These constraints exist because they reduce the cognitive load of understanding code.
Agents have cognitive load too: It is called the context window.
You are not converting code to a new paradigm.
You are making the latent graph visible.
You are converting implicit semantics into explicit structure that both humans and machines can traverse.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#whats-next","level":2,"title":"What's Next","text":"
The spec lists 8 more tests we have not built yet, including TestDescKeyYAMLLinkage (verifying that every DescKey constant has a corresponding YAML entry), TestCLICmdStructure (enforcing the cmd.go / run.go / doc.go file convention), and TestNoFlagBindOutsideFlagbind (which requires migrating ~50 flag registration sites first).
The broader question: should these principles be codified as a reusable linting framework? The patterns (loadPackages + ast.Inspect + violation collection) are generic.
The specific checks are project-specific. But the categories of checks (centralization enforcement, magic value detection, naming conventions, documentation requirements) are universal.
For now, 19 tests in internal/audit/ are enough. They run in 2 seconds as part of go test ./.... They catch real issues.
And they encode a theory of code quality that serves both humans and the agents that work alongside them.
Agents are not going away. They are reading your code right now, forming representations of your system in context windows that forget everything between sessions.
The codebases that structure themselves for that reality will compound. The ones that do not will slowly become illegible to the tools they depend on.
Structure is no longer just for maintainability. It is for reasonability.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"cli/","level":1,"title":"CLI","text":"","path":["CLI"],"tags":[]},{"location":"cli/#ctx-cli","level":2,"title":"ctx CLI","text":"
This is a complete reference for all ctx commands.
Flag Description --help Show command help --version Show version --context-dir <path> Override context directory (default: .context/) --allow-outside-cwd Allow context directory outside current working directory
Initialization required. Most commands require a .context/ directory created by ctx init. Running a command without one produces:
ctx: not initialized - run \"ctx init\" first\n
Commands that work before initialization: ctx init, ctx setup, ctx doctor, and grouping commands that only show help (e.g. ctx with no subcommand, ctx system). Hidden hook commands have their own guards and no-op gracefully.
","path":["CLI"],"tags":[]},{"location":"cli/#commands","level":2,"title":"Commands","text":"Command Description ctx init Initialize .context/ directory with templates ctx status Show context summary (files, tokens, drift) ctx agent Print token-budgeted context packet for AI consumption ctx load Output assembled context in read order ctx add Add a task, decision, learning, or convention ctx drift Detect stale paths, secrets, missing files ctx sync Reconcile context with codebase state ctx compact Archive completed tasks, clean up files ctx task Task completion, archival, and snapshots ctx permission Permission snapshots (golden image) ctx reindex Regenerate indices for DECISIONS.md and LEARNINGS.mdctx decision Manage DECISIONS.md (reindex) ctx learning Manage LEARNINGS.md (reindex) ctx journal Browse and export AI session history ctx journal Generate static site from journal entries ctx serve Serve any zensical directory (default: journal site) ctx watch Auto-apply context updates from AI output ctx setup Generate AI tool integration configs ctx loop Generate autonomous loop script ctx memory Bridge Claude Code auto memory into .context/ ctx notify Send webhook notifications ctx change Show what changed since last session ctx dep Show package dependency graph ctx pad Encrypted scratchpad for sensitive one-liners ctx remind Session-scoped reminders that surface at session start ctx completion Generate shell autocompletion scripts ctx guide Quick-reference cheat sheet ctx why Read the philosophy behind ctx ctx site Site management (feed generation) ctx trace Show context behind git commits ctx doctor Structural health check (hooks, drift, config) ctx mcp MCP server for AI tool integration (stdin/stdout) ctx config Manage runtime configuration profiles ctx system System diagnostics and hook commands","path":["CLI"],"tags":[]},{"location":"cli/#exit-codes","level":2,"title":"Exit Codes","text":"Code Meaning 0 Success 1 General error / warnings (e.g. 
drift) 2 Context not found 3 Violations found (e.g. drift) 4 File operation error","path":["CLI"],"tags":[]},{"location":"cli/#environment-variables","level":2,"title":"Environment Variables","text":"Variable Description CTX_DIR Override default context directory path CTX_TOKEN_BUDGET Override default token budget CTX_BACKUP_SMB_URL SMB share URL for backups (e.g. smb://host/share) CTX_BACKUP_SMB_SUBDIR Subdirectory on SMB share (default: ctx-sessions) CTX_SESSION_ID Active AI session ID (used by ctx trace for context linking)","path":["CLI"],"tags":[]},{"location":"cli/#configuration-file","level":2,"title":"Configuration File","text":"
Optional .ctxrc (YAML format) at project root:
# .ctxrc\ncontext_dir: .context # Context directory name\ntoken_budget: 8000 # Default token budget\npriority_order: # File loading priority\n - TASKS.md\n - DECISIONS.md\n - CONVENTIONS.md\nauto_archive: true # Auto-archive old items\narchive_after_days: 7 # Days before archiving tasks\nscratchpad_encrypt: true # Encrypt scratchpad (default: true)\nallow_outside_cwd: false # Skip boundary check (default: false)\nevent_log: false # Enable local hook event logging\ncompanion_check: true # Check companion tools at session start\nentry_count_learnings: 30 # Drift warning threshold (0 = disable)\nentry_count_decisions: 20 # Drift warning threshold (0 = disable)\nconvention_line_count: 200 # Line count warning for CONVENTIONS.md (0 = disable)\ninjection_token_warn: 15000 # Oversize injection warning (0 = disable)\ncontext_window: 200000 # Auto-detected for Claude Code; override for other tools\nbilling_token_warn: 0 # One-shot billing warning at this token count (0 = disabled)\nkey_rotation_days: 90 # Days before key rotation nudge\nsession_prefixes: # Recognized session header prefixes (extend for i18n)\n - \"Session:\" # English (default)\n # - \"Oturum:\" # Turkish (add as needed)\n # - \"セッション:\" # Japanese (add as needed)\nfreshness_files: # Files with technology-dependent constants (opt-in)\n - path: config/thresholds.yaml\n desc: Model token limits and batch sizes\n review_url: https://docs.example.com/limits # Optional\nnotify: # Webhook notification settings\n events: # Required: only listed events fire\n - loop\n - nudge\n - relay\n # - heartbeat # Every-prompt session-alive signal\n
Field Type Default Description context_dir string .context Context directory name (relative to project root) token_budget int 8000 Default token budget for ctx agent priority_order []string (all files) File loading priority for context packets auto_archive bool true Auto-archive completed tasks archive_after_days int 7 Days before completed tasks are archived scratchpad_encrypt bool true Encrypt scratchpad with AES-256-GCM allow_outside_cwd bool false Skip boundary check for external context dirs event_log bool false Enable local hook event logging to .context/state/events.jsonl companion_check bool true Check companion tool availability (Gemini Search, GitNexus) during /ctx-remember entry_count_learnings int 30 Drift warning when LEARNINGS.md exceeds this count entry_count_decisions int 20 Drift warning when DECISIONS.md exceeds this count convention_line_count int 200 Line count warning for CONVENTIONS.md injection_token_warn int 15000 Warn when auto-injected context exceeds this token count (0 = disable) context_window int 200000 Context window size in tokens. Auto-detected for Claude Code (200k/1M); override for other AI tools billing_token_warn int 0 (off) One-shot warning when session tokens exceed this threshold (0 = disabled) key_rotation_days int 90 Days before encryption key rotation nudge session_prefixes []string [\"Session:\"] Recognized Markdown session header prefixes. Extend to parse sessions written in other languages freshness_files []object (none) Files to track for staleness (path, desc, optional review_url). Hook warns after 6 months without modification notify.events []string (all) Event filter for webhook notifications (empty = all)
The ctx repo ships two .ctxrc source profiles (.ctxrc.base and .ctxrc.dev). The working copy (.ctxrc) is gitignored and switched between them using subcommands below.
With no argument, toggles between dev and base. Accepts prod as an alias for base.
Argument Description dev Switch to dev profile (verbose logging) base Switch to base profile (all defaults) (none) Toggle to the opposite profile
Profiles:
Profile Description dev Verbose logging, webhook notifications on base All defaults, notifications off
Examples:
ctx config switch dev # Switch to dev profile\nctx config switch base # Switch to base profile\nctx config switch # Toggle (dev → base or base → dev)\nctx config switch prod # Alias for \"base\"\n
The detection heuristic checks for an uncommented notify: line in .ctxrc: present means dev, absent means base.
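That heuristic is reproducible with a one-line grep (a sketch of the documented check; the CLI's internal implementation may differ in detail):

```shell
# An uncommented "notify:" line in .ctxrc means the dev profile;
# its absence (or a commented "# notify:") means base.
if grep -qE '^[[:space:]]*notify:' .ctxrc 2>/dev/null; then
  echo "profile: dev"
else
  echo "profile: base"
fi
```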
Type Target File task TASKS.md decision DECISIONS.md learning LEARNINGS.md convention CONVENTIONS.md
Flags:
Flag Short Description --priority <level> -p Priority for tasks: high, medium, low --section <name> -s Target section within file --context -c Context (required for decisions and learnings) --rationale -r Rationale for decisions (required for decisions) --consequence Consequence for decisions (required for decisions) --lesson -l Key insight (required for learnings) --application -a How to apply going forward (required for learnings) --file -f Read content from file instead of argument
Examples:
# Add a task\nctx add task \"Implement user authentication\"\nctx add task \"Fix login bug\" --priority high\n\n# Record a decision (requires all ADR (Architectural Decision Record) fields)\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Note a learning (requires context, lesson, and application)\nctx add learning \"Vitest mocks must be hoisted\" \\\n --context \"Tests failed with undefined mock errors\" \\\n --lesson \"Vitest hoists vi.mock() calls to top of file\" \\\n --application \"Always place vi.mock() before imports in test files\"\n\n# Add to specific section\nctx add convention \"Use kebab-case for filenames\" --section \"Naming\"\n
Flag Description --json Output machine-readable JSON --fix Auto-fix simple issues
Checks:
Path references in ARCHITECTURE.md and CONVENTIONS.md exist
Task references are valid
Constitution rules aren't violated (heuristic)
Staleness indicators (old files, many completed tasks)
Missing packages: warns when internal/ directories exist on disk but are not referenced in ARCHITECTURE.md (suggests running /ctx-architecture)
Entry count: warns when LEARNINGS.md or DECISIONS.md exceed configurable thresholds (default: 30 learnings, 20 decisions), or when CONVENTIONS.md exceeds a line count threshold (default: 200). Configure via .ctxrc:
entry_count_learnings: 30 # warn above this (0 = disable)\nentry_count_decisions: 20 # warn above this (0 = disable)\nconvention_line_count: 200 # warn above this (0 = disable)\n
Example:
ctx drift\nctx drift --json\nctx drift --fix\n
Exit codes:
Code Meaning 0 All checks passed 1 Warnings found 3 Violations found","path":["CLI","Context Management"],"tags":[]},{"location":"cli/context/#ctx-sync","level":3,"title":"ctx sync","text":"
Reconcile context with the current codebase state.
ctx sync [flags]\n
Flags:
Flag Description --dry-run Show what would change without modifying
What it does:
Scans codebase for structural changes
Compares with ARCHITECTURE.md
Suggests documenting dependencies if package files exist
Move completed tasks from TASKS.md to a timestamped archive file.
ctx task archive [flags]\n
Flags:
Flag Description --dry-run Preview changes without modifying files
Archive files are stored in .context/archive/ with timestamped names (tasks-YYYY-MM-DD.md). Completed tasks (marked with [x]) are moved; pending tasks ([ ]) remain in TASKS.md.
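The timestamp in the filename is the ISO date. A one-line sketch of the path a run today would produce:

```shell
# Archive path following the tasks-YYYY-MM-DD.md convention (date +%F
# emits the ISO date, e.g. 2026-04-02).
echo ".context/archive/tasks-$(date +%F).md"
```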
Regenerate the quick-reference index for both DECISIONS.md and LEARNINGS.md in a single invocation.
ctx reindex\n
This is a convenience wrapper around ctx decision reindex and ctx learning reindex. Both files grow at similar rates and users typically want to reindex both after manual edits.
The index is a compact table of date and title for each entry, allowing AI tools to scan entries without reading the full file.
Example:
ctx reindex\n# ✓ Index regenerated with 12 entries\n# ✓ Index regenerated with 8 entries\n
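The index itself is easy to approximate at the shell. A sketch that extracts date/title pairs, assuming entries use `## YYYY-MM-DD: Title` headings (a hypothetical layout for illustration; ctx's real entry format may differ):

```shell
# Build a compact date/title index from dated second-level headings.
grep -E '^## [0-9]{4}-[0-9]{2}-[0-9]{2}:' DECISIONS.md 2>/dev/null \
  | sed -E 's/^## ([0-9-]+): (.*)/| \1 | \2 |/'
```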
Structural health check across context, hooks, and configuration. Runs mechanical checks that don't require semantic analysis. Think of it as ctx status + ctx drift + configuration audit in one pass.
ctx doctor [flags]\n
Flags:
Flag Short Type Default Description --json -j bool false Machine-readable JSON output","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#what-it-checks","level":4,"title":"What It Checks","text":"Check Category What it verifies Context initialized Structure .context/ directory exists Required files present Structure All required context files exist (TASKS.md, etc.) Drift detected Quality Stale paths, missing files, constitution violations Event logging status Hooks Whether event_log: true is set in .ctxrc Webhook configured Hooks .notify.enc file exists Pending reminders State Count of entries in reminders.json Task completion ratio State Pending vs completed tasks in TASKS.md Context token size Size Estimated token count across all context files Recent event activity Events Last event timestamp (only when event logging is enabled)","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#output-format-human","level":4,"title":"Output Format (Human)","text":"
ctx doctor\n==========\n\nStructure\n ✓ Context initialized (.context/)\n ✓ Required files present (4/4)\n\nQuality\n ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)\n\nHooks\n ✓ hooks.json valid (14 hooks registered)\n ○ Event logging disabled (enable with event_log: true in .ctxrc)\n\nState\n ✓ No pending reminders\n ⚠ Task completion ratio high (18/22 = 82%): consider archiving\n\nSize\n ✓ Context size: ~4200 tokens (budget: 8000)\n\nSummary: 2 warnings, 0 errors\n
Status indicators:
Icon Status Meaning ✓ ok Check passed ⚠ warning Non-critical issue worth fixing ✗ error Problem that needs attention ○ info Informational note","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#output-format-json","level":4,"title":"Output Format (JSON)","text":"
# Quick structural health check\nctx doctor\n\n# Machine-readable output for scripting\nctx doctor --json\n\n# Count warnings\nctx doctor --json | jq '.warnings'\n\n# Check for errors only\nctx doctor --json | jq '[.results[] | select(.status == \"error\")]'\n
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#when-to-use-what","level":4,"title":"When to Use What","text":"Tool When ctx status Quick glance at files, tokens, and drift ctx doctor Thorough structural checkup (hooks, config, events too) /ctx-doctor Agent-driven diagnosis with event log pattern analysis
ctx status tells you what's there. ctx doctor tells you what's wrong. /ctx-doctor tells you why it's wrong and what to do about it.
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/doctor/#what-it-does-not-do","level":4,"title":"What It Does Not Do","text":"
No event pattern analysis: that's the /ctx-doctor skill's job
No auto-fixing: reports findings, doesn't modify anything
No external service checks: doesn't verify webhook endpoint availability
See also: Troubleshooting | ctx system events | /ctx-doctor skill | Detecting and Fixing Drift
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/init-status/","level":1,"title":"Init and Status","text":"","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-init","level":3,"title":"ctx init","text":"
Initialize a new .context/ directory with template files.
ctx init [flags]\n
Flags:
Flag Short Description --force -f Overwrite existing context files --minimal -m Only create essential files (TASKS.md, DECISIONS.md, CONSTITUTION.md) --merge Auto-merge ctx content into existing CLAUDE.md
Creates:
.context/ directory with all template files
.claude/settings.local.json with pre-approved ctx permissions
CLAUDE.md with bootstrap instructions (or merges into existing)
Claude Code hooks and skills are provided by the ctx plugin (see Integrations).
Example:
# Standard init\nctx init\n\n# Minimal setup (just core files)\nctx init --minimal\n\n# Force overwrite existing\nctx init --force\n\n# Merge into existing files\nctx init --merge\n
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-status","level":3,"title":"ctx status","text":"
Show the current context summary.
ctx status [flags]\n
Flags:
Flag Short Description --json Output as JSON --verbose -v Include file contents summary
Output:
Context directory path
Total files and token estimate
Status of each file (loaded, empty, missing)
Recent activity (modification times)
Drift warnings if any
Example:
ctx status\nctx status --json\nctx status --verbose\n
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-agent","level":3,"title":"ctx agent","text":"
Print an AI-ready context packet optimized for LLM consumption.
ctx agent [flags]\n
Flags:
Flag Default Description --budget 8000 Token budget: controls content selection and prioritization --format md Output format: md or json --cooldown 10m Suppress repeated output within this duration (requires --session) --session (none) Session ID for cooldown isolation (e.g., $PPID)
How budget works:
The budget controls how much context is included. Entries are selected in priority tiers:
Constitution: always included in full (inviolable rules)
Tasks: all active tasks, up to 40% of budget
Conventions: all conventions, up to 20% of budget
Decisions: scored by recency and relevance to active tasks
Learnings: scored by recency and relevance to active tasks
Decisions and learnings are ranked by a combined score (how recent + how relevant to your current tasks). High-scoring entries are included with their full body. Entries that don't fit get title-only summaries in an \"Also Noted\" section. Superseded entries are excluded.
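The tier caps above are plain percentages of the budget. A quick arithmetic sketch for the default 8000-token budget (illustrative only):

```shell
# Tier caps from the documented percentages: tasks 40%, conventions 20%.
BUDGET=8000
echo "tasks cap: $((BUDGET * 40 / 100)) tokens"        # 3200
echo "conventions cap: $((BUDGET * 20 / 100)) tokens"  # 1600
```

Whatever budget remains after the constitution, tasks, and conventions is what the scored decisions and learnings compete for.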
Output sections:
Section Source Selection Read These Files all .context/ Non-empty files in priority order Constitution CONSTITUTION.md All rules (never truncated) Current Tasks TASKS.md All unchecked tasks (budget-capped) Key Conventions CONVENTIONS.md All items (budget-capped) Recent Decisions DECISIONS.md Full body, scored by relevance Key Learnings LEARNINGS.md Full body, scored by relevance Also Noted overflow Title-only summaries
Example:
# Default (8000 tokens, markdown)\nctx agent\n\n# Smaller packet for tight context windows\nctx agent --budget 4000\n\n# JSON format for programmatic use\nctx agent --format json\n\n# Pipe to file\nctx agent --budget 4000 > context.md\n\n# With cooldown (hooks/automation: requires --session)\nctx agent --session $PPID\n
Use case: Copy-paste into AI chat, pipe to system prompt, or use in hooks.
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-load","level":3,"title":"ctx load","text":"
Load and display assembled context as AI would see it.
ctx load [flags]\n
Flags:
Flag Description --budget <tokens> Token budget for assembly (default: 8000) --raw Output raw file contents without assembly
","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/journal/","level":1,"title":"Journal","text":"","path":["CLI","Journal"],"tags":[]},{"location":"cli/journal/#ctx-journal","level":3,"title":"ctx journal","text":"
Browse and search AI session history from Claude Code and other tools.
Flag Short Description --limit -n Maximum sessions to display (default: 20) --project -p Filter by project name --tool -t Filter by tool (e.g., claude-code) --all-projects Include sessions from all projects
Sessions are sorted by date (newest first) and display slug, project, start time, duration, turn count, and token usage.
Import sessions to editable journal files in .context/journal/.
ctx journal import [session-id] [flags]\n
Flags:
Flag Description --all Import all sessions (only new files by default) --all-projects Import from all projects --regenerate Re-import existing files (preserves YAML frontmatter by default) --keep-frontmatter Preserve enriched YAML frontmatter during regeneration (default: true) --yes, -y Skip confirmation prompt --dry-run Show what would be imported without writing files
Safe by default: --all only imports new sessions. Existing files are skipped. Use --regenerate to re-import existing files (conversation content is regenerated, YAML frontmatter from enrichment is preserved by default). Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags.
Single-session import (ctx journal import <id>) always writes without prompting, since you are explicitly targeting one session.
The journal/ directory should be gitignored (like sessions/) since it contains raw conversation data.
Example:
ctx journal import abc123 # Import one session\nctx journal import --all # Import only new sessions\nctx journal import --all --dry-run # Preview what would be imported\nctx journal import --all --regenerate # Re-import existing (prompts)\nctx journal import --all --regenerate -y # Re-import without prompting\nctx journal import --all --regenerate --keep-frontmatter=false -y # Discard frontmatter\n
Protect journal entries from being overwritten by import --regenerate or modified by enrichment skills (/ctx-journal-enrich, /ctx-journal-enrich-all).
ctx journal lock <pattern> [flags]\n
Flags:
Flag Description --all Lock all journal entries
The pattern matches filenames by slug, date, or short ID. Locking a multi-part entry locks all parts. The lock is recorded in .context/journal/.state.json and a locked: true line is added to the file's YAML frontmatter for visibility.
Sync lock state from journal frontmatter to .state.json.
ctx journal sync\n
Scans all journal markdowns and updates .state.json to match each file's frontmatter. Files with locked: true in frontmatter are marked locked in state; files without a locked: line have their lock cleared.
This is the inverse of ctx journal lock: instead of state driving frontmatter, frontmatter drives state. Useful after batch enrichment where you add locked: true to frontmatter manually.
Example:
# After enriching entries and adding locked: true to frontmatter\nctx journal sync\n
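The frontmatter scan can be sketched with awk, assuming `---`-delimited YAML frontmatter (an approximation of what sync reads; the real parser may differ):

```shell
# Print journal entries whose frontmatter block contains "locked: true".
for f in .context/journal/*.md; do
  if awk '/^---$/{n++; next} n==1 && /^locked: true$/{found=1} END{exit !found}' "$f" 2>/dev/null; then
    echo "locked: $f"
  fi
done
```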
Generate a static site from journal entries in .context/journal/.
ctx journal site [flags]\n
Flags:
Flag Short Description --output -o Output directory (default: .context/journal-site) --build Run zensical build after generating --serve Run zensical serve after generating
Creates a zensical-compatible site structure with an index page listing all sessions by date, and individual pages for each journal entry.
Requires zensical to be installed for --build or --serve:
pipx install zensical\n
Example:
ctx journal site # Generate in .context/journal-site/\nctx journal site --output ~/public # Custom output directory\nctx journal site --build # Generate and build HTML\nctx journal site --serve # Generate and serve locally\n
Serve any zensical directory locally. This is a serve-only command: It does not generate or regenerate site content.
ctx serve [directory]\n
If no directory is specified, defaults to the journal site (.context/journal-site).
Requires zensical to be installed:
pipx install zensical\n
ctx serve vs. ctx journal site --serve
ctx journal site --serve generates the journal site then serves it: an all-in-one command. ctx serve only serves an existing directory, and works with any zensical site (journal, docs, etc.).
Example:
ctx serve # Serve journal site (no regeneration)\nctx serve .context/journal-site # Same, explicit path\nctx serve ./site # Serve the docs site\n
Run ctx as a Model Context Protocol (MCP) server. MCP is a standard protocol that lets AI tools discover and consume context from external sources via JSON-RPC 2.0 over stdin/stdout.
This makes ctx accessible to any MCP-compatible AI tool without custom hooks or integrations:
Start the MCP server. This command reads JSON-RPC 2.0 requests from stdin and writes responses to stdout. It is intended to be launched by MCP clients, not run directly.
ctx mcp serve\n
Flags: None. The server uses the configured context directory (from --context-dir, CTX_DIR, .ctxrc, or the default .context).
Resources expose context files as read-only content. Each resource has a URI, name, and returns Markdown text.
URI Name Description ctx://context/constitution constitution Hard rules that must never be violated ctx://context/tasks tasks Current work items and their status ctx://context/conventions conventions Code patterns and standards ctx://context/architecture architecture System architecture documentation ctx://context/decisions decisions Architectural decisions with rationale ctx://context/learnings learnings Gotchas, tips, and lessons learned ctx://context/glossary glossary Project-specific terminology ctx://context/agent agent All files assembled in priority read order
The agent resource assembles all non-empty context files into a single Markdown document, ordered by the configured read priority.
Clients can subscribe to resource changes via resources/subscribe. The server polls for file mtime changes (default: 5 seconds) and emits notifications/resources/updated when a subscribed file changes on disk.
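On the wire this is ordinary JSON-RPC 2.0 over stdin/stdout. A sketch of the `resources/read` request a client would send for the tasks resource (a real session performs the `initialize` handshake first, so the pipe is shown commented out):

```shell
# JSON-RPC 2.0 request an MCP client sends to read a ctx resource.
REQ='{"jsonrpc":"2.0","id":1,"method":"resources/read","params":{"uri":"ctx://context/tasks"}}'
echo "$REQ"
# echo "$REQ" | ctx mcp serve   # would need the initialize handshake first
```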
Add a task, decision, learning, or convention to the context.
Argument Type Required Description type string Yes Entry type: task, decision, learning, convention content string Yes Title or main content priority string No Priority level (tasks only): high, medium, low context string Conditional Context field (decisions and learnings) rationale string Conditional Rationale (decisions only) consequence string Conditional Consequence (decisions only) lesson string Conditional Lesson learned (learnings only) application string Conditional How to apply (learnings only)","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx_complete","level":3,"title":"ctx_complete","text":"
Mark a task as done by number or text match.
Argument Type Required Description query string Yes Task number (e.g. \"1\") or search text","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx_drift","level":3,"title":"ctx_drift","text":"
Detect stale or invalid context. Returns violations, warnings, and passed checks.
Query recent AI session history (summaries, decisions, topics).
Argument Type Required Description limit number No Max sessions to return (default: 5) since string No ISO date filter: sessions after this date (YYYY-MM-DD)
Apply a structured context update to .context/ files. Supports task, decision, learning, convention, and complete entry types. Human confirmation is required before calling.
Move completed tasks to the archive section and remove empty sections from context files. Human confirmation required.
Argument Type Required Description archive boolean No Also write tasks to .context/archive/ (default: false)","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx_next","level":3,"title":"ctx_next","text":"
Suggest the next pending task based on priority and position.
Prompts provide pre-built templates for common workflows. Clients can list available prompts via prompts/list and retrieve a specific prompt via prompts/get.
Format an architectural decision entry with all required fields.
Argument Type Required Description content string Yes Decision title context string Yes Background context rationale string Yes Why this decision was made consequence string Yes Expected consequence","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx-add-learning","level":3,"title":"ctx-add-learning","text":"
Format a learning entry with all required fields.
Argument Type Required Description content string Yes Learning title context string Yes Background context lesson string Yes The lesson learned application string Yes How to apply this lesson","path":["CLI","MCP Server"],"tags":[]},{"location":"cli/mcp/#ctx-reflect","level":3,"title":"ctx-reflect","text":"
Guide end-of-session reflection. Returns a structured review prompt covering progress assessment and context update recommendations.
The parent command shows available subcommands. Hidden plumbing subcommands (ctx system mark-journal, ctx system mark-wrapped-up) are used by skills and automation. Hidden hook subcommands (ctx system check-*) are used by the Claude Code plugin.
See AI Tools for details.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-backup","level":4,"title":"ctx system backup","text":"
Create timestamped tar.gz archives of project context and/or global Claude Code data. Optionally copies archives to an SMB share via GVFS.
ctx system backup [flags]\n
Scopes:
Scope What it includes project .context/, .claude/, ideas/, ~/.bashrc global ~/.claude/ (excludes todos/) all Both project and global (default)
Flags:
Flag Description --scope <scope> Backup scope: project, global, or all --json Output results as JSON
ctx system backup # Back up everything (default: all)\nctx system backup --scope project # Project context only\nctx system backup --scope global # Global Claude data only\nctx system backup --scope all --json # Both, JSON output\n
Archives are saved to /tmp/ with timestamped names. When CTX_BACKUP_SMB_URL is configured, archives are also copied to the SMB share. Project backups touch ~/.local/state/ctx-last-backup for the check-backup-age hook.
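The project-scope core can be sketched in a few commands (illustrative only: the archive name here is hypothetical, and the real command also handles the global scope and the SMB copy):

```shell
# Timestamped project archive to /tmp/, then touch the state file that
# the check-backup-age hook reads. Archive name is a made-up example.
stamp=$(date +%Y%m%d-%H%M%S)
tar -czf "/tmp/ctx-project-$stamp.tar.gz" .context .claude 2>/dev/null || true
mkdir -p ~/.local/state && touch ~/.local/state/ctx-last-backup
echo "wrote /tmp/ctx-project-$stamp.tar.gz"
```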
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-bootstrap","level":4,"title":"ctx system bootstrap","text":"
Print context location and rules for AI agents. This is the recommended first command for AI agents to run at session start: It tells them where the context directory is and how to use it.
ctx system bootstrap [flags]\n
Flags:
Flag Description -q, --quiet Output only the context directory path --json Output in JSON format
Quiet output:
ctx system bootstrap -q\n# .context\n
Text output:
ctx bootstrap\n=============\n\ncontext_dir: .context\n\nFiles:\n CONSTITUTION.md, TASKS.md, DECISIONS.md, LEARNINGS.md,\n CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md\n\nRules:\n 1. Use context_dir above for ALL file reads/writes\n 2. Never say \"I don't have memory\": context IS your memory\n 3. Read files silently, present as recall (not search)\n 4. Persist learnings/decisions before session ends\n 5. Run `ctx agent` for content summaries\n 6. Run `ctx status` for context health\n
JSON output:
{\n \"context_dir\": \".context\",\n \"files\": [\"CONSTITUTION.md\", \"TASKS.md\", ...],\n \"rules\": [\"Use context_dir above for ALL file reads/writes\", ...]\n}\n
Examples:
ctx system bootstrap # Text output\nctx system bootstrap -q # Just the path\nctx system bootstrap --json # JSON output\nctx system bootstrap --json | jq .context_dir # Extract context path\n
Why it exists: When users configure an external context directory via .ctxrc (context_dir: /mnt/nas/.context), the AI agent needs to know where context lives. Bootstrap resolves the configured path and communicates it to the agent at session start. Every nudge also includes a context directory footer for reinforcement.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-resources","level":4,"title":"ctx system resources","text":"
Show system resource usage with threshold-based alerts.
ctx system resources [flags]\n
Displays memory, swap, disk, and CPU load with two severity tiers:
| Resource | WARNING | DANGER |
|---|---|---|
| Memory | >= 80% used | >= 90% used |
| Swap | >= 50% used | >= 75% used |
| Disk (cwd) | >= 85% full | >= 95% full |
| Load (1m) | >= 0.8x CPUs | >= 1.5x CPUs |
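The two-tier check reduces to comparing each reading against a (warning, danger) pair. A minimal sketch using the thresholds from the table above; the function shape and names are illustrative, not ctx's actual implementation:

```python
def severity(resource: str, value: float, cpus: int = 1) -> str:
    """Classify a resource reading as OK, WARNING, or DANGER.

    Memory/swap/disk values are fractions (0.0-1.0); load is the raw
    1-minute load average, compared against multiples of the CPU count.
    """
    thresholds = {
        "memory": (0.80, 0.90),          # fraction used
        "swap":   (0.50, 0.75),          # fraction used
        "disk":   (0.85, 0.95),          # fraction full
        "load":   (0.8 * cpus, 1.5 * cpus),  # 1-minute load average
    }
    warn, danger = thresholds[resource]
    if value >= danger:
        return "DANGER"
    if value >= warn:
        return "WARNING"
    return "OK"
```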
Flags:
| Flag | Description |
|---|---|
| --json | Output in JSON format |
Examples:
ctx system resources # Text output with status indicators\nctx system resources --json # Machine-readable JSON\nctx system resources --json | jq '.alerts' # Extract alerts only\n
When resources breach thresholds, alerts are listed below the summary:
Alerts:\n ✖ Memory 92% used (14.7 / 16.0 GB)\n ✖ Swap 78% used (6.2 / 8.0 GB)\n ✖ Load 1.56x CPU count\n
Platform support: Full metrics on Linux and macOS. Windows shows disk only; memory and load report as unsupported.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-message","level":4,"title":"ctx system message","text":"
Manage hook message templates. Hook messages control what text hooks emit. The hook logic (when to fire, counting, state tracking) is universal; the messages are opinions that can be customized per-project.
ctx system message <subcommand>\n
Subcommands:
| Subcommand | Args | Flags | Description |
|---|---|---|---|
| list | (none) | --json | Show all hook messages with category and override status |
| show | <hook> <variant> | (none) | Print the effective message template with source |
| edit | <hook> <variant> | (none) | Copy embedded default to .context/ for editing |
| reset | <hook> <variant> | (none) | Delete user override, revert to embedded default |
Examples:
ctx system message list # Table of all 24 messages\nctx system message list --json # Machine-readable JSON\nctx system message show qa-reminder gate # View the QA gate template\nctx system message edit qa-reminder gate # Copy default to .context/ for editing\nctx system message reset qa-reminder gate # Delete override, revert to default\n
Override files are placed at .context/hooks/messages/{hook}/{variant}.txt. An empty override file silences the message while preserving the hook's logic.
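The resolution order can be sketched in a few lines. This is an illustrative helper, not part of the ctx codebase; the override path layout is the one documented above:

```python
from pathlib import Path

def effective_message(hook: str, variant: str, defaults: dict,
                      root: str = ".context") -> str:
    """Resolve the message a hook will emit.

    A user override at {root}/hooks/messages/{hook}/{variant}.txt wins;
    an empty override returns "" (message silenced, hook logic intact);
    otherwise the embedded default applies.
    """
    override = Path(root) / "hooks" / "messages" / hook / f"{variant}.txt"
    if override.exists():
        return override.read_text()  # may be "" -> silenced
    return defaults[(hook, variant)]
```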
See the Customizing Hook Messages recipe for detailed examples.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-events","level":4,"title":"ctx system events","text":"
Query the local hook event log. Reads events from .context/state/events.jsonl and outputs them in human-readable or raw JSONL format. Requires event_log: true in .ctxrc.
ctx system events [flags]\n
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| --hook | -k | string | (all) | Filter by hook name |
| --session | -s | string | (all) | Filter by session ID |
| --event | -e | string | (all) | Filter by event type (relay, nudge) |
| --last | -n | int | 50 | Show last N events |
| --json | -j | bool | false | Output raw JSONL (for piping to jq) |
| --all | -a | bool | false | Include rotated log file (events.1.jsonl) |
Each line is a standalone JSON object identical to the webhook payload format:
// converted to multi-line for convenience:\n{\"event\":\"relay\",\"message\":\"qa-reminder: QA gate reminder emitted\",\"detail\":\n{\"hook\":\"qa-reminder\",\"variant\":\"gate\"},\"session_id\":\"eb1dc9cd-...\",\n \"timestamp\":\"2026-02-27T22:39:31Z\",\"project\":\"ctx\"}\n
Examples:
# Last 50 events (default)\nctx system events\n\n# Events from a specific session\nctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6\n\n# Only QA reminder events\nctx system events --hook qa-reminder\n\n# Raw JSONL for jq processing\nctx system events --json | jq '.message'\n\n# How many context-load-gate fires today\nctx system events --hook context-load-gate --json \\\n | jq -r '.timestamp' | grep \"$(date +%Y-%m-%d)\" | wc -l\n\n# Include rotated events\nctx system events --all --last 100\n
Why it exists: System hooks fire invisibly. When something goes wrong (\"why didn't my hook fire?\"), the event log provides a local, queryable record of what hooks fired, when, and how often. Event logging is opt-in via event_log: true in .ctxrc to avoid surprises for existing users.
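Because each line is a standalone JSON object, the log is easy to post-process outside of jq as well. A small sketch (not part of ctx) that tallies fires per hook:

```python
import json
from collections import Counter

def tally_hooks(jsonl: str) -> Counter:
    """Count events per hook from events.jsonl-style payloads."""
    counts = Counter()
    for line in jsonl.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        # "detail.hook" matches the payload format shown above.
        counts[event.get("detail", {}).get("hook", "unknown")] += 1
    return counts
```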
See also: Troubleshooting, Auditing System Hooks, ctx doctor
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-stats","level":4,"title":"ctx system stats","text":"
Show per-session token usage statistics. Reads stats JSONL files from .context/state/stats-*.jsonl, written automatically by the check-context-size hook on every prompt.
ctx system stats [flags]\n
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| --follow | -f | bool | false | Stream new entries as they arrive |
| --session | -s | string | (all) | Filter by session ID (prefix match) |
| --last | -n | int | 20 | Show last N entries |
| --json | -j | bool | false | Output raw JSONL (for piping to jq) |
# Recent stats across all sessions\nctx system stats\n\n# Stream live token usage (like tail -f)\nctx system stats --follow\n\n# Filter to current session\nctx system stats --session abc12345\n\n# Raw JSONL for analysis\nctx system stats --json | jq '.pct'\n\n# Monitor a long session in another terminal\nctx system stats --follow --session abc12345\n
See also: Auditing System Hooks
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-prune","level":4,"title":"ctx system prune","text":"
Clean stale per-session state files from .context/state/. Session hooks write tombstone files (context-check, heartbeat, persistence-nudge, etc.) that accumulate ~6-8 files per session with no automatic cleanup.
ctx system prune [flags]\n
Flags:
| Flag | Type | Default | Description |
|---|---|---|---|
| --days | int | 7 | Prune files older than this many days |
| --dry-run | bool | false | Show what would be pruned without deleting |
Files are identified as session-scoped by UUID suffix (e.g. heartbeat-a1b2c3d4-...). Global files without UUIDs (events.jsonl, memory-import.json, etc.) are always preserved.
Safe to run anytime
The worst outcome of pruning is a hook re-firing its nudge in the next session. No context files, decisions, or learnings are stored in the state directory.
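The session-scoped test reduces to a suffix check. A sketch with an illustrative UUID regex (the exact rule ctx uses may differ):

```python
import re

# Session-scoped state files end in a session UUID,
# e.g. heartbeat-a1b2c3d4-0163-4853-89d0-785fbfaae3a6
UUID_SUFFIX = re.compile(
    r"-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(\.\w+)?$"
)

def is_session_scoped(name: str) -> bool:
    """True when a state filename carries a session UUID suffix.

    Global files (events.jsonl, memory-import.json, ...) have no UUID
    and are always preserved by prune.
    """
    return bool(UUID_SUFFIX.search(name))
```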
Examples:
ctx system prune # Prune files older than 7 days\nctx system prune --days 3 # More aggressive cleanup\nctx system prune --dry-run # Preview what would be removed\n
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-mark-journal","level":4,"title":"ctx system mark-journal","text":"
Update processing state for a journal entry. Records the current date in .context/journal/.state.json. Used by journal skills to record pipeline progress.
| Flag | Description |
|---|---|
| --check | Check if stage is set (exit 1 if not) |
Example:
ctx system mark-journal 2026-01-21-session-abc12345.md enriched\nctx system mark-journal 2026-01-21-session-abc12345.md normalized\nctx system mark-journal --check 2026-01-21-session-abc12345.md fences_verified\n
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-mark-wrapped-up","level":4,"title":"ctx system mark-wrapped-up","text":"
Suppress context checkpoint nudges after a wrap-up ceremony. Writes a marker file that check-context-size checks before emitting checkpoint boxes. The marker expires after 2 hours.
Called automatically by /ctx-wrap-up after persisting context (not intended for direct use).
ctx system mark-wrapped-up\n
No flags, no arguments. Idempotent: running it again updates the marker timestamp.
","path":["CLI","System"],"tags":[]},{"location":"cli/tools/","level":1,"title":"Tools and Utilities","text":"","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-watch","level":3,"title":"ctx watch","text":"
Watch for AI output and auto-apply context updates.
Parses <context-update> XML commands from AI output and applies them to context files.
ctx watch [flags]\n
Flags:
| Flag | Description |
|---|---|
| --log <file> | Log file to watch (default: stdin) |
| --dry-run | Preview updates without applying |
Example:
# Watch stdin\nai-tool | ctx watch\n\n# Watch a log file\nctx watch --log /path/to/ai-output.log\n\n# Preview without applying\nctx watch --dry-run\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-setup","level":3,"title":"ctx setup","text":"
Generate AI tool integration configuration.
ctx setup <tool> [flags]\n
Flags:
| Flag | Short | Description |
|---|---|---|
| --write | -w | Write the generated config to disk (e.g. .github/copilot-instructions.md) |
Supported tools:
| Tool | Description |
|---|---|
| claude-code | Redirects to plugin install instructions |
| cursor | Cursor IDE |
| aider | Aider CLI |
| copilot | GitHub Copilot |
| windsurf | Windsurf IDE |
Claude Code Uses the Plugin system
Claude Code integration is now provided via the ctx plugin. Running ctx setup claude-code prints plugin install instructions.
Example:
# Print hook instructions to stdout\nctx setup cursor\nctx setup aider\n\n# Generate and write .github/copilot-instructions.md\nctx setup copilot --write\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-loop","level":3,"title":"ctx loop","text":"
Generate a shell script for running an autonomous loop.
An autonomous loop continuously runs an AI assistant with the same prompt until a completion signal is detected, enabling iterative development where the AI builds on its previous work.
ctx loop [flags]\n
Flags:
| Flag | Short | Description | Default |
|---|---|---|---|
| --tool <tool> | -t | AI tool: claude, aider, or generic | claude |
| --prompt <file> | -p | Prompt file to use | .context/loop.md |
| --max-iterations <n> | -n | Maximum iterations (0 = unlimited) | 0 |
| --completion <signal> | -c | Completion signal to detect | SYSTEM_CONVERGED |
| --output <file> | -o | Output script filename | loop.sh |
Example:
# Generate loop.sh for Claude Code\nctx loop\n\n# Generate for Aider with custom prompt\nctx loop --tool aider --prompt TASKS.md\n\n# Limit to 10 iterations\nctx loop --max-iterations 10\n\n# Output to custom file\nctx loop -o my-loop.sh\n
Usage:
# Generate and run the loop\nctx loop\nchmod +x loop.sh\n./loop.sh\n
See Autonomous Loops for detailed workflow documentation.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory","level":3,"title":"ctx memory","text":"
Bridge Claude Code's auto memory (MEMORY.md) into .context/.
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This command group discovers that file, mirrors it into .context/memory/mirror.md (git-tracked), and detects drift.
ctx memory <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-sync","level":4,"title":"ctx memory sync","text":"
Copy MEMORY.md to .context/memory/mirror.md. Archives the previous mirror before overwriting.
ctx memory sync [flags]\n
Flags:
| Flag | Description |
|---|---|
| --dry-run | Show what would happen without writing |
Exit codes:
| Code | Meaning |
|---|---|
| 0 | Synced successfully |
| 1 | MEMORY.md not found (auto memory inactive) |
Example:
ctx memory sync\n# Archived previous mirror to mirror-2026-03-05-143022.md\n# Synced MEMORY.md -> .context/memory/mirror.md\n# Source: ~/.claude/projects/-home-user-project/memory/MEMORY.md\n# Lines: 47 (was 32)\n# New content: 15 lines since last sync\n\nctx memory sync --dry-run\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-status","level":4,"title":"ctx memory status","text":"
Show drift, timestamps, line counts, and archive count.
ctx memory status\n
Exit codes:
| Code | Meaning |
|---|---|
| 0 | No drift |
| 1 | MEMORY.md not found |
| 2 | Drift detected (MEMORY.md changed since sync) |
Example:
ctx memory status\n# Memory Bridge Status\n# Source: ~/.claude/projects/.../memory/MEMORY.md\n# Mirror: .context/memory/mirror.md\n# Last sync: 2026-03-05 14:30 (2 hours ago)\n#\n# MEMORY.md: 47 lines (modified since last sync)\n# Mirror: 32 lines\n# Drift: detected (source is newer)\n# Archives: 3 snapshots in .context/memory/archive/\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-diff","level":4,"title":"ctx memory diff","text":"
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-unpublish","level":4,"title":"ctx memory unpublish","text":"
Remove the ctx-managed marker block from MEMORY.md, preserving Claude-owned content.
ctx memory unpublish\n
Hook integration: The check-memory-drift hook runs on every prompt and nudges the agent when MEMORY.md has changed since last sync. The nudge fires once per session. See Memory Bridge.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-import","level":4,"title":"ctx memory import","text":"
Classify and promote entries from MEMORY.md into structured .context/ files.
ctx memory import [flags]\n
Each entry is classified by keyword heuristics:
| Keywords | Target |
|---|---|
| always use, prefer, never use, standard | CONVENTIONS.md |
| decided, chose, trade-off, approach | DECISIONS.md |
| gotcha, learned, watch out, bug, caveat | LEARNINGS.md |
| todo, need to, follow up | TASKS.md |
| Everything else | Skipped |
Deduplication prevents re-importing the same entry across runs.
Flags:
| Flag | Description |
|---|---|
| --dry-run | Show classification plan without writing |
Example:
ctx memory import --dry-run\n# Scanning MEMORY.md for new entries...\n# Found 6 entries\n#\n# -> \"always use ctx from PATH\"\n# Classified: CONVENTIONS.md (keywords: always use)\n#\n# -> \"decided to use heuristic classification over LLM-based\"\n# Classified: DECISIONS.md (keywords: decided)\n#\n# Dry run - would import: 4 entries (1 convention, 1 decision, 1 learning, 1 task)\n# Skipped: 2 entries (session notes/unclassified)\n\nctx memory import # Actually write entries to .context/ files\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-notify","level":3,"title":"ctx notify","text":"
Send fire-and-forget webhook notifications from skills, loops, and hooks.
| Field | Type | Description |
|---|---|---|
| event | string | Event name from --event flag |
| message | string | Notification message |
| session_id | string | Session ID (omitted if empty) |
| timestamp | string | UTC RFC3339 timestamp |
| project | string | Project directory name |
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-change","level":3,"title":"ctx change","text":"
Show what changed in context files and code since your last session.
Automatically detects the previous session boundary from state markers or event log. Useful at session start to quickly see what moved while you were away.
ctx change [flags]\n
Flags:
| Flag | Description |
|---|---|
| --since | Time reference: duration (24h) or date (2026-03-01) |
Reference time detection (priority order):
--since flag (duration, date, or RFC3339 timestamp)
ctx-loaded-* marker files in .context/state/ (second most recent)
Last context-load-gate event from .context/state/events.jsonl
Fallback: 24 hours ago
Example:
# Auto-detect last session, show what changed\nctx change\n\n# Changes in the last 48 hours\nctx change --since 48h\n\n# Changes since a specific date\nctx change --since 2026-03-10\n
Context file changes are detected by filesystem mtime (works without git). Code changes use git log --since (empty when not in a git repo).
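The mtime side of that detection is simple to sketch. This is an illustrative helper (limited here to .md files), not the ctx implementation:

```python
from pathlib import Path

def changed_since(root: str, reference: float) -> list[str]:
    """List context files modified after a reference Unix timestamp.

    Works in any directory, with or without git, because it relies
    only on filesystem mtimes.
    """
    return sorted(
        str(p)
        for p in Path(root).rglob("*.md")
        if p.is_file() and p.stat().st_mtime > reference
    )
```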
See also: Reviewing Session Changes
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-dep","level":3,"title":"ctx dep","text":"
Generate a dependency graph from source code.
Auto-detects the project ecosystem from manifest files and outputs a dependency graph in Mermaid, table, or JSON format.
ctx dep [flags]\n
Supported ecosystems:
| Ecosystem | Manifest | Method |
|---|---|---|
| Go | go.mod | go list -json ./... |
| Node.js | package.json | Parse package.json (workspace-aware) |
| Python | requirements.txt or pyproject.toml | Parse manifest directly |
| Rust | Cargo.toml | cargo metadata |
Detection order: Go, Node.js, Python, Rust. First match wins.
Flags:
| Flag | Description | Default |
|---|---|---|
| --format | Output format: mermaid, table, json | mermaid |
| --external | Include external/third-party dependencies | false |
| --type | Force ecosystem: go, node, python, rust | auto-detect |
Examples:
# Auto-detect and show internal deps as Mermaid\nctx dep\n\n# Include external dependencies\nctx dep --external\n\n# Force Node.js detection (useful when multiple manifests exist)\nctx dep --type node\n\n# Machine-readable output\nctx dep --format json\n\n# Table format\nctx dep --format table\n
Ecosystem notes:
Go: Uses go list -json ./.... Requires go in PATH.
Node.js: Parses package.json directly (no npm/yarn needed). For monorepos with workspaces, shows workspace-to-workspace deps (internal) or all deps per workspace (external).
Python: Parses requirements.txt or pyproject.toml directly (no pip needed). Shows declared dependencies; does not trace imports. With --external, includes dev dependencies from pyproject.toml.
Rust: Requires cargo in PATH. Uses cargo metadata for accurate dependency resolution.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad","level":3,"title":"ctx pad","text":"
Encrypted scratchpad for sensitive one-liners that travel with the project.
When invoked without a subcommand, lists all entries.
ctx pad\nctx pad <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-add","level":4,"title":"ctx pad add","text":"
Append a new entry to the scratchpad.
ctx pad add <text>\nctx pad add <label> --file <path>\n
Flags:
| Flag | Short | Description |
|---|---|---|
| --file | -f | Ingest a file as a blob entry (max 64 KB) |
Examples:
ctx pad add \"DATABASE_URL=postgres://user:pass@host/db\"\nctx pad add \"deploy config\" --file ./deploy.yaml\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-show","level":4,"title":"ctx pad show","text":"
Output the raw text of an entry by number. For blob entries, prints decoded file content (or writes to disk with --out).
ctx pad show <n>\nctx pad show <n> --out <path>\n
Arguments:
n: 1-based entry number
Flags:
| Flag | Description |
|---|---|
| --out | Write decoded blob content to a file (blobs only) |
Examples:
ctx pad show 3\nctx pad show 2 --out ./recovered.yaml\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-rm","level":4,"title":"ctx pad rm","text":"
Remove an entry by number.
ctx pad rm <n>\n
Arguments:
n: 1-based entry number
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-edit","level":4,"title":"ctx pad edit","text":"
Replace, append to, or prepend to an entry.
ctx pad edit <n> [text]\n
Arguments:
n: 1-based entry number
text: Replacement text (mutually exclusive with --append/--prepend)
Flags:
| Flag | Description |
|---|---|
| --append | Append text to the end of the entry |
| --prepend | Prepend text to the beginning of the entry |
| --file | Replace blob file content (preserves label) |
| --label | Replace blob label (preserves content) |
Examples:
ctx pad edit 2 \"new text\"\nctx pad edit 2 --append \" suffix\"\nctx pad edit 2 --prepend \"prefix \"\nctx pad edit 1 --file ./v2.yaml\nctx pad edit 1 --label \"new name\"\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-mv","level":4,"title":"ctx pad mv","text":"
Move an entry from one position to another.
ctx pad mv <from> <to>\n
Arguments:
from: Source position (1-based)
to: Destination position (1-based)
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-resolve","level":4,"title":"ctx pad resolve","text":"
Show both sides of a merge conflict in the encrypted scratchpad.
ctx pad resolve\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-import","level":4,"title":"ctx pad import","text":"
Bulk-import lines from a file into the scratchpad. Each non-empty line becomes a separate entry. All entries are written in a single encrypt/write cycle.
With --blob, import all first-level files from a directory as blob entries. Each file becomes a blob with the filename as its label. Subdirectories and non-regular files are skipped.
ctx pad import <file>\nctx pad import - # read from stdin\nctx pad import --blob <dir> # import directory files as blobs\n
Arguments:
file: Path to a text file, - for stdin, or a directory (with --blob)
Flags:
| Flag | Description |
|---|---|
| --blob | Import first-level files from a directory as blobs |
Examples:
ctx pad import notes.txt\ngrep TODO *.go | ctx pad import -\nctx pad import --blob ./ideas/\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-export","level":4,"title":"ctx pad export","text":"
Export all blob entries from the scratchpad to a directory as files. Each blob's label becomes the filename. Non-blob entries are skipped.
ctx pad export [dir]\n
Arguments:
dir: Target directory (default: current directory)
Flags:
| Flag | Short | Description |
|---|---|---|
| --force | -f | Overwrite existing files instead of timestamping |
| --dry-run | | Print what would be exported without writing |
When a file already exists, a unix timestamp is prepended to avoid collisions (e.g., 1739836200-label). Use --force to overwrite instead.
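The collision rule can be sketched as follows; the helper name and signature are illustrative, not ctx's own:

```python
from pathlib import Path

def export_name(directory: Path, label: str, now: int,
                force: bool = False) -> Path:
    """Pick a target filename for an exported blob.

    When the label already exists and --force is not given, a Unix
    timestamp is prepended to avoid the collision
    (e.g. 1739836200-label).
    """
    target = directory / label
    if target.exists() and not force:
        return directory / f"{now}-{label}"
    return target
```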
Examples:
ctx pad export ./ideas\nctx pad export --dry-run\nctx pad export --force ./backup\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-merge","level":4,"title":"ctx pad merge","text":"
Merge entries from one or more scratchpad files into the current pad. Each input file is auto-detected as encrypted or plaintext. Entries are deduplicated by exact content.
ctx pad merge FILE...\n
Arguments:
FILE...: One or more scratchpad files to merge (encrypted or plaintext)
Flags:
| Flag | Short | Description |
|---|---|---|
| --key | -k | Path to key file for decrypting input files |
| --dry-run | | Print what would be merged without writing |
Examples:
ctx pad merge worktree/.context/scratchpad.enc\nctx pad merge notes.md backup.enc\nctx pad merge --key /path/to/other.key foreign.enc\nctx pad merge --dry-run pad-a.enc pad-b.md\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind","level":3,"title":"ctx remind","text":"
Session-scoped reminders that surface at session start. Reminders are stored verbatim and relayed verbatim: no summarization, no categories.
When invoked with a text argument and no subcommand, adds a reminder.
ctx remind \"text\"\nctx remind <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-add","level":4,"title":"ctx remind add","text":"
Add a reminder. This is the default action: ctx remind \"text\" and ctx remind add \"text\" are equivalent.
ctx remind \"refactor the swagger definitions\"\nctx remind add \"check CI after the deploy\" --after 2026-02-25\n
Arguments:
text: The reminder message (verbatim)
Flags:
| Flag | Short | Description |
|---|---|---|
| --after | -a | Don't surface until this date (YYYY-MM-DD) |
Examples:
ctx remind \"refactor the swagger definitions\"\nctx remind \"check CI after the deploy\" --after 2026-02-25\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-list","level":4,"title":"ctx remind list","text":"
List all pending reminders. Date-gated reminders that aren't yet due are annotated with (after DATE, not yet due).
ctx remind list\n
Aliases: ls
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-dismiss","level":4,"title":"ctx remind dismiss","text":"
Remove a reminder by ID, or remove all reminders with --all.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pause","level":3,"title":"ctx pause","text":"
Pause all context nudge and reminder hooks for the current session. Security hooks (dangerous command blocking) and housekeeping hooks still fire.
ctx pause [flags]\n
Flags:
| Flag | Description |
|---|---|
| --session-id | Session ID (overrides stdin) |
Example:
# Pause hooks for a quick investigation\nctx pause\n\n# Resume when ready\nctx resume\n
See also: Pausing Context Hooks
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-resume","level":3,"title":"ctx resume","text":"
Resume context hooks after a pause. Silent no-op if not paused.
ctx resume [flags]\n
Flags:
| Flag | Description |
|---|---|
| --session-id | Session ID (overrides stdin) |
Example:
ctx resume\n
See also: Pausing Context Hooks
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-completion","level":3,"title":"ctx completion","text":"
Generate shell autocompletion scripts.
ctx completion <shell>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#subcommands","level":4,"title":"Subcommands","text":"Shell Command bashctx completion bashzshctx completion zshfishctx completion fishpowershellctx completion powershell","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#installation","level":4,"title":"Installation","text":"BashZshFish
# Add to ~/.bashrc\nsource <(ctx completion bash)\n
# Add to ~/.zshrc\nsource <(ctx completion zsh)\n
ctx completion fish | source\n# Or save to completions directory\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-site","level":3,"title":"ctx site","text":"
Site management commands for the ctx.ist static site.
ctx site <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-site-feed","level":4,"title":"ctx site feed","text":"
Generate an Atom 1.0 feed from finalized blog posts in docs/blog/.
ctx site feed [flags]\n
Scans docs/blog/ for files matching YYYY-MM-DD-*.md, parses YAML frontmatter, and generates a valid Atom feed. Only posts with reviewed_and_finalized: true are included. Summaries are extracted from the first paragraph after the heading.
Flags:
| Flag | Short | Type | Default | Description |
|---|---|---|---|---|
| --out | -o | string | site/feed.xml | Output path |
| --base-url | | string | https://ctx.ist | Base URL for entry links |
Output:
Generated site/feed.xml (21 entries)\n\nSkipped:\n 2026-02-25-the-homework-problem.md: not finalized\n\nWarnings:\n 2026-02-09-defense-in-depth.md: no summary paragraph found\n
Output falls into three buckets: included (count), skipped (with reason), and warnings (included but degraded). The exit code is always 0: warnings inform but do not block.
Frontmatter requirements:
| Field | Required | Feed mapping |
|---|---|---|
| title | Yes | <title> |
| date | Yes | <updated> |
| reviewed_and_finalized | Yes | Draft gate (must be true) |
| author | No | <author><name> |
| topics | No | <category term=""> |
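Together, the filename pattern and the draft gate decide inclusion. A sketch under the rules above; the helper names are ours, not the ctx implementation:

```python
import re

# Posts must be named YYYY-MM-DD-*.md to be scanned at all.
POST_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}-.+\.md$")

def is_post_filename(name: str) -> bool:
    """True when a file matches the blog post naming pattern."""
    return bool(POST_NAME.match(name))

def is_publishable(frontmatter: dict) -> bool:
    """Required fields present and the draft gate passed."""
    return (
        bool(frontmatter.get("title"))
        and bool(frontmatter.get("date"))
        and frontmatter.get("reviewed_and_finalized") is True
    )
```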
Examples:
ctx site feed # Generate site/feed.xml\nctx site feed --out /tmp/feed.xml # Custom output path\nctx site feed --base-url https://example.com # Custom base URL\nmake site-feed # Makefile shortcut\nmake site # Builds site + feed\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-guide","level":3,"title":"ctx guide","text":"
Quick-reference cheat sheet for common ctx commands and skills.
ctx guide [flags]\n
Flags:
| Flag | Description |
|---|---|
| --skills | Show available skills |
| --commands | Show available CLI commands |
Example:
# Show the full cheat sheet\nctx guide\n\n# Skills only\nctx guide --skills\n\n# Commands only\nctx guide --commands\n
Works without initialization (no .context/ required).
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-why","level":3,"title":"ctx why","text":"
Read ctx's philosophy documents directly in the terminal.
ctx why [DOCUMENT]\n
Documents:
| Name | Description |
|---|---|
| manifesto | The ctx Manifesto: creation, not code |
| about | About ctx: what it is and why it exists |
| invariants | Design invariants: properties that must hold |
Usage:
# Interactive numbered menu\nctx why\n\n# Show a specific document\nctx why manifesto\nctx why about\nctx why invariants\n\n# Pipe to a pager\nctx why manifesto | less\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/trace/","level":1,"title":"Commit Context Tracing","text":"","path":["Commit Context Tracing"],"tags":[]},{"location":"cli/trace/#ctx-trace","level":3,"title":"ctx trace","text":"
Show the context behind git commits. Links commits back to the decisions, tasks, learnings, and sessions that motivated them.
git log shows what changed, git blame shows who — ctx trace shows why.
ctx trace [commit] [flags]\n
Flags:
| Flag | Description |
|---|---|
| --last N | Show context for last N commits |
| --json | Output as JSON for scripting |
Examples:
# Show context for a specific commit\nctx trace abc123\n\n# Show context for last 10 commits\nctx trace --last 10\n\n# JSON output\nctx trace abc123 --json\n
Output:
Commit: abc123 \"Fix auth token expiry\"\nDate: 2026-03-14 10:00:00 -0700\nContext:\n [Decision] #12: Use short-lived tokens with server-side refresh\n Date: 2026-03-10\n\n [Task] #8: Implement token rotation for compliance\n Status: completed\n
Enable or disable the prepare-commit-msg hook for automatic context tracing. When enabled, commits automatically receive a ctx-context trailer with references to relevant decisions, tasks, learnings, and sessions.
ctx trace hook <enable|disable>\n
Prerequisites: ctx must be on your $PATH. If you installed via go install, ensure $GOPATH/bin (or $HOME/go/bin) is in your shell's $PATH.
What the hook does:
Before each commit, collects context from three sources:
Pending context accumulated during work (ctx add, ctx task complete)
Staged file changes to .context/ files
Working state (in-progress tasks, active AI session)
Injects a ctx-context trailer into the commit message
After commit, records the mapping in .context/trace/history.jsonl
Examples:
# Install the hook\nctx trace hook enable\n\n# Remove the hook\nctx trace hook disable\n
Resulting commit message:
Fix auth token expiry handling\n\nRefactored token refresh logic to handle edge case\nwhere refresh token expires during request.\n\nctx-context: decision:12, task:8, session:abc123\n
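Reading the trailer back out is straightforward. A minimal parser sketch (not the ctx implementation; commas inside quoted free-form notes are out of scope here):

```python
def parse_ctx_trailer(message: str) -> list[str]:
    """Extract ctx-context references from a commit message.

    Finds the ctx-context trailer line and splits its value on commas.
    """
    for line in message.splitlines():
        if line.startswith("ctx-context:"):
            value = line[len("ctx-context:"):]
            return [ref.strip() for ref in value.split(",") if ref.strip()]
    return []
```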
The ctx-context trailer supports these reference types:
| Prefix | Points to | Example |
|---|---|---|
| decision:<n> | Entry #n in DECISIONS.md | decision:12 |
| learning:<n> | Entry #n in LEARNINGS.md | learning:5 |
| task:<n> | Task #n in TASKS.md | task:8 |
| convention:<n> | Entry #n in CONVENTIONS.md | convention:3 |
| session:<id> | AI session by ID | session:abc123 |
| "<text>" | Free-form context note | "Performance fix for P1 incident" |
","path":["Commit Context Tracing"],"tags":[]},{"location":"cli/trace/#storage","level":3,"title":"Storage","text":"
Context trace data is stored in the .context/ directory:
| File | Purpose | Lifecycle |
|---|---|---|
| state/pending-context.jsonl | Accumulates refs during work | Truncated after each commit |
| trace/history.jsonl | Permanent commit-to-context map | Append-only, never truncated |
| trace/overrides.jsonl | Manual tags for existing commits | Append-only |
","path":["Commit Context Tracing"],"tags":[]},{"location":"home/","level":1,"title":"Home","text":"
ctx is not a prompt.
ctx is version-controlled cognitive state.
ctx is the persistence layer for human-AI reasoning.
\"Creation, not code; Context, not prompts; Verification, not vibes.\"
Read the ctx Manifesto →
\"Without durable context, intelligence resets; with ctx, creation compounds.\"
Without persistent memory, every session starts at zero; ctx makes sessions cumulative.
Join the ctx Community →
","path":["Home","About"],"tags":[]},{"location":"home/about/#what-is-ctx","level":2,"title":"What Is ctx?","text":"
ctx (Context) is a file-based system that enables AI coding assistants to persist project knowledge across sessions. It lives in a .context/ directory in your repo.
Context files let AI tools remember decisions, conventions, and learnings:
Context files are explicit and versionable contracts between you and your agents.
","path":["Home","About"],"tags":[]},{"location":"home/about/#why-do-i-keep-re-explaining-my-codebase","level":2,"title":"Why Do I Keep Re-Explaining My Codebase?!?!","text":"
You open a new AI session. The first thing you do is re-explain your project.
Again.
The architecture, the database choice, the naming conventions, the thing you tried last week that didn't work. You have said all of this before (maybe yesterday, maybe an hour ago), but the AI doesn't know that.
You explain the same architecture every session;
The AI suggests an approach you already rejected, again;
A decision you made three sessions ago gets relitigated from scratch;
You spend more time setting context than building features.
This isn't AI failure: It is statelessness.
Without persistent memory, every session starts at zero.
","path":["Home","About"],"tags":[]},{"location":"home/about/#day-and-night","level":3,"title":"Day and Night","text":"
ctx gives your AI a memory that persists across sessions:
Without ctxWith ctx
Session 12: Monday morning\n\nAI: \"I'll scaffold a MySQL schema for this.\"\nYou: \"We use PostgreSQL.\"\n\nAI: \"For auth, session-based would be simplest.\"\nYou: \"We decided on JWT.\"\n\nAI: \"Endpoints in camelCase?\"\nYou: \"snake_case! Like the rest of the API!!!!!!\"\n\nYou: *scrolls up to copy an existing endpoint*\n\n20 minutes spent restoring decisions that already exist.\n
Session 12: Monday morning\n\nYou: \"Let's continue the auth implementation.\"\nAI: \"Per DECISIONS.md (2026-02-12): JWT, not sessions: \n The project uses PostgreSQL with JWT\n authentication. The API follows snake_case conventions.\n Last session you completed the login endpoint: the\n registration endpoint is next on TASKS.md. Want me to\n start there?\"\n\n0 minutes re-explaining. Straight to building.\n
ctx creates a .context/ directory in your project that stores structured knowledge files:
File What It Remembers TASKS.md What you're working on and what's next DECISIONS.md Architectural choices and why you made them LEARNINGS.md Gotchas, bugs, things that didn't work CONVENTIONS.md Naming patterns, code style, project rules CONSTITUTION.md Hard rules the AI must never violate
These files can version with your code in git:
They load automatically at session start (via hooks in Claude Code) or manually (with ctx agent for other tools).
The AI reads them, cites them, and builds on them, instead of asking you to start over.
And when it acts, it can point to the exact file and line that justifies the choice.
Every decision you record, every lesson you capture, makes the next session smarter.
ctx accumulates.
Connect with ctx
Join the Community →: ask questions, share workflows, and help shape what comes next
Read the Blog →: real-world patterns, ponderings, and lessons learned from building ctx using ctx
Ready to Get Started?
Getting Started →: full installation and setup
Your First Session →: step-by-step walkthrough from ctx init to verified recall
# Add a task\nctx add task \"Implement user authentication\"\n\n# Record a decision (full ADR fields required)\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Note a learning\nctx add learning \"Mock functions must be hoisted in Jest\" \\\n --context \"Tests failed with undefined mock errors\" \\\n --lesson \"Jest hoists mock calls to top of file\" \\\n --application \"Place jest.mock() before imports\"\n\n# Mark task complete\nctx task complete \"user auth\"\n
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#leave-a-reminder-for-next-session","level":2,"title":"Leave a Reminder for Next Session","text":"
Drop a note that surfaces automatically at the start of your next session:
# Leave a reminder\nctx remind \"refactor the swagger definitions\"\n\n# Date-gated: don't surface until a specific date\nctx remind \"check CI after the deploy\" --after 2026-02-25\n\n# List pending reminders\nctx remind list\n\n# Dismiss a reminder by ID\nctx remind dismiss 1\n
Reminders are relayed verbatim at session start by the check-reminders hook and repeat every session until you dismiss them.
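The --after date gate is easy to reason about because ISO dates sort correctly. A hypothetical sketch, not ctx's actual implementation; both dates are fixed examples for a deterministic demo:

```shell
# Hypothetical sketch of a date gate (not ctx's implementation).
after="2026-02-25"     # from: ctx remind ... --after 2026-02-25
today="2026-02-20"     # fixed example date for illustration
# Dropping the dashes turns ISO dates into comparable integers (20260220 < 20260225).
if [ "$(printf '%s' "$today" | tr -d '-')" -lt "$(printf '%s' "$after" | tr -d '-')" ]; then
  status="gated"       # not yet surfaced at session start
else
  status="due"         # relayed verbatim by check-reminders
fi
echo "reminder is $status until $after"
```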
Import session transcripts to a browsable static site with search, navigation, and topic indices.
The ctx journal command requires zensical (Python >= 3.10).
zensical is a Python-based static site generator from the Material for MkDocs team.
(why zensical?).
If you don't have it on your system, install zensical once with pipx:
# One-time setup\npipx install zensical\n
Avoid pip install zensical
pip install often fails: For example, on macOS, system Python installs a non-functional stub (zensical requires Python >= 3.10), and Homebrew Python blocks system-wide installs (PEP 668).
pipx creates an isolated environment with the correct Python version automatically.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#import-and-serve","level":3,"title":"Import and Serve","text":"
Then, import and serve:
# Import all sessions to .context/journal/ (only new files)\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Open http://localhost:8000 to browse.
To update after new sessions, run the same two commands again.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#safe-by-default","level":3,"title":"Safe By Default","text":"
ctx journal import --all is safe by default:
It only imports new sessions and skips existing files.
Locked entries (via ctx journal lock) are always skipped by both import and enrichment skills.
If you add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
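The frontmatter flag itself is just a YAML key. An illustrative fragment with the other frontmatter fields omitted:

```yaml
---
# other journal-entry frontmatter fields omitted (illustrative)
locked: true   # skipped by import and enrichment; sync with: ctx journal sync
---
```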
Store short, sensitive one-liners in an encrypted scratchpad that travels with the project:
# Write a note\nctx pad set db-password \"postgres://user:pass@localhost/mydb\"\n\n# Read it back\nctx pad get db-password\n\n# List all keys\nctx pad list\n
The scratchpad is encrypted with a key stored at ~/.ctx/.ctx.key (outside the project, never committed).
See Scratchpad for details.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#run-an-autonomous-loop","level":2,"title":"Run an Autonomous Loop","text":"
Generate a script that iterates an AI agent until a completion signal is detected:
ctx loop\nchmod +x loop.sh\n./loop.sh\n
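The shape of such a loop can be sketched in a few lines. This is a minimal illustration only: the real loop.sh generated by ctx loop differs, and AGENT_CMD and SIGNAL here are stand-ins, not ctx names:

```shell
#!/bin/sh
# Minimal sketch only; the real loop.sh generated by `ctx loop` differs.
# AGENT_CMD and SIGNAL are illustrative stand-ins.
AGENT_CMD="${AGENT_CMD:-echo ALL TASKS COMPLETE}"   # stand-in for the agent invocation
SIGNAL="ALL TASKS COMPLETE"                         # illustrative completion marker
MAX_ITER=10

i=0
while [ "$i" -lt "$MAX_ITER" ]; do
  i=$((i + 1))
  out=$($AGENT_CMD)               # run one agent iteration, capture its output
  echo "iteration $i"
  case "$out" in
    *"$SIGNAL"*) echo "completion signal detected"; break ;;
  esac
done
```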
See Autonomous Loops for configuration and advanced usage.
Link your git commits back to the decisions, tasks, and learnings that motivated them. Enable the hook once:
# Install the git hook (one-time setup)\nctx trace hook enable\n
From now on, every git commit automatically gets a ctx-context trailer linking it to relevant context. No extra steps needed: Just use ctx add, ctx task complete, and commit as usual.
# Later: why was this commit made?\nctx trace abc123\n\n# Recent commits with their context\nctx trace --last 10\n\n# Context trail for a specific file\nctx trace file src/auth.go\n\n# Manually tag a commit after the fact\nctx trace tag HEAD --note \"Hotfix for production outage\"\n
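Because the link is a standard git trailer, plain git can read it back too. A throwaway demo of the mechanics (the trailer value shown is illustrative; requires git):

```shell
# Throwaway repo to show the ctx-context trailer mechanics with plain git.
# The trailer value is illustrative, not real project data.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty \
  -m 'Add login endpoint' \
  -m 'ctx-context: decision:12, task:8'
# Read the trailer back with git's generic trailer plumbing:
val=$(git log -1 --format='%(trailers:key=ctx-context,valueonly)')
echo "trailer: $val"
```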
The first thing an AI agent should do at session start is discover where context lives:
ctx system bootstrap\n
This prints the resolved context directory, the files in it, and the operating rules. The CLAUDE.md template instructs the agent to run this automatically. See CLI Reference: bootstrap.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#the-two-skills-you-should-always-use","level":2,"title":"The Two Skills You Should Always Use","text":"
/ctx-remember at session start and /ctx-wrap-up at session end are the highest-value skills in the entire catalog:
# session begins:\n/ctx-remember\n... do work ...\n# before closing the session:\n/ctx-wrap-up\n
Let's provide some context, because this is important:
Although the agent will eventually discover your context through CLAUDE.md → AGENT_PLAYBOOK.md, /ctx-remember hydrates the full context up front (tasks, decisions, recent sessions) so the agent starts informed rather than piecing things together over several turns.
/ctx-wrap-up is the other half: A structured review that captures learnings, decisions, and tasks before you close the window.
Hooks like check-persistence remind you (the user) mid-session that context hasn't been saved in a while, but they don't trigger persistence automatically: You still have to act. Also, a CTRL+C can end things at any moment with no reliable \"before session end\" event.
In short, /ctx-wrap-up is the deliberate checkpoint that makes sure nothing slips through. And /ctx-remember is its mirror skill, to be used at session start.
See Session Ceremonies for the full workflow.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#cli-commands-vs-ai-skills","level":2,"title":"CLI Commands vs. AI Skills","text":"
Most ctx operations come in two flavors: a CLI command you run in your terminal and an AI skill (slash command) you invoke inside your coding assistant.
Commands and skills are not interchangeable: Each has a distinct role.
ctx CLI command ctx AI skill Runs where Your terminal Inside the AI assistant Speed Fast (milliseconds) Slower (LLM round-trip) Cost Free Consumes tokens and context Analysis Deterministic heuristics Semantic / judgment-based Best for Quick checks, scripting, CI Deep analysis, generation, workflow orchestration","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#paired-commands","level":3,"title":"Paired Commands","text":"
These have both a CLI and a skill counterpart. Use the CLI for quick, deterministic checks; use the skill when you need the agent's judgment.
CLI Skill When to prefer the skill ctx drift/ctx-drift Semantic analysis: catches meaning drift the CLI misses ctx status/ctx-status Interpreted summary with recommendations ctx add task/ctx-add-task Agent decomposes vague goals into concrete tasks ctx add decision/ctx-add-decision Agent drafts rationale and consequences from discussion ctx add learning/ctx-add-learning Agent extracts the lesson from a debugging session ctx add convention/ctx-add-convention Agent observes a repeated pattern and codifies it ctx task archive/ctx-archive Agent reviews which tasks are truly done ctx pad/ctx-pad Agent reads/writes scratchpad entries in conversation flow ctx journal/ctx-history Agent searches session history with semantic understanding ctx agent/ctx-agent Agent loads and acts on the context packet ctx loop/ctx-loop Agent tailors the loop script to your project ctx doctor/ctx-doctor Agent adds semantic analysis to structural checks ctx pause/ctx-pause Agent pauses hooks with session-aware reasoning ctx resume/ctx-resume Agent resumes hooks after a pause ctx remind/ctx-remind Agent manages reminders in conversation flow","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#ai-only-skills","level":3,"title":"AI-Only Skills","text":"
These have no CLI equivalent. They require the agent's reasoning.
Skill Purpose /ctx-remember Load context and present structured readback at session start /ctx-wrap-up End-of-session ceremony: persist learnings, decisions, tasks /ctx-next Suggest 1-3 concrete next actions from context /ctx-commit Commit with integrated context capture /ctx-reflect Pause and assess session progress /ctx-consolidate Merge overlapping learnings or decisions /ctx-prompt-audit Analyze prompting patterns for improvement /ctx-import-plans Import Claude Code plan files into project specs /ctx-implement Execute a plan step-by-step with verification /ctx-worktree Manage parallel agent worktrees /ctx-journal-enrich Add metadata, tags, and summaries to journal entries /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich /ctx-blog Generate a blog post (zensical-flavored Markdown) /ctx-blog-changelog Generate themed blog post from commits between releases /ctx-architecture Build and maintain architecture maps (ARCHITECTURE.md, DETAILED_DESIGN.md)","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#cli-only-commands","level":3,"title":"CLI-Only Commands","text":"
These are infrastructure: used in scripts, CI, or one-time setup.
Command Purpose ctx init Initialize .context/ directory ctx load Output assembled context for piping ctx task complete Mark a task done by substring match ctx sync Reconcile context with codebase state ctx compact Consolidate and clean up context files ctx trace Show context behind git commits ctx trace hook Enable/disable commit context tracing hook ctx setup Generate AI tool integration config ctx watch Watch AI output and auto-apply context updates ctx serve Serve any zensical directory (default: journal) ctx permission snapshot Save settings as a golden image ctx permission restore Restore settings from golden image ctx journal site Generate browsable journal from exports ctx notify setup Configure webhook notifications ctx decision List and filter decisions ctx learning List and filter learnings ctx task List tasks, manage archival and snapshots ctx why Read the philosophy behind ctx ctx guide Quick-reference cheat sheet ctx site Site management commands ctx config Manage runtime configuration profiles ctx system System diagnostics and hook commands ctx system backup Back up context and Claude data to tar.gz / SMB ctx completion Generate shell autocompletion scripts
Rule of Thumb
Quick check? Use the CLI.
Need judgment? Use the skill.
When in doubt, start with the CLI: It's free and instant.
Escalate to the skill when heuristics aren't enough.
Next Up: Context Files →: what each .context/ file does and how to use it
See Also:
Recipes: targeted how-to guides for specific tasks
Knowledge Capture: patterns for recording decisions, learnings, and conventions
Context Health: keeping your .context/ accurate and drift-free
Session Archaeology: digging into past sessions
Task Management: tracking and completing work items
We are the builders who care about durable context, verifiable decisions, and human-AI workflows that compound over time.
","path":["Home","#ctx"],"tags":[]},{"location":"home/community/#help-ctx-change-how-ai-remembers","level":2,"title":"Help ctx Change How AI Remembers","text":"
If you like the idea, a star helps ctx reach engineers who run into context drift every day:
The .ctxrc file is an optional YAML file placed in the project root (next to your .context/ directory). It lets you set project-level defaults that apply to every ctx command.
ctx looks for .ctxrc in the current working directory when any command runs. There is no global or user-level config file: Configuration is always per-project.
Contributors: Dev Configuration Profile
The ctx repo ships two .ctxrc source profiles (.ctxrc.base and .ctxrc.dev). The working copy is gitignored and swapped between them via ctx config switch dev / ctx config switch base. See Contributing: Configuration Profiles.
Using a Different .context Directory
The default .context/ directory can be changed per-project via the context_dir key in .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
See Environment Variables and CLI Global Flags below for details.
A commented .ctxrc showing all options and their defaults:
# .ctxrc: ctx runtime configuration\n# https://ctx.ist/configuration/\n#\n# All settings are optional. Missing values use defaults.\n# Priority: CLI flags > environment variables > .ctxrc > defaults\n#\n# context_dir: .context\n# token_budget: 8000\n# auto_archive: true\n# archive_after_days: 7\n# scratchpad_encrypt: true\n# allow_outside_cwd: false\n# event_log: false\n# entry_count_learnings: 30\n# entry_count_decisions: 20\n# convention_line_count: 200\n# injection_token_warn: 15000\n# context_window: 200000 # auto-detected for Claude Code; override for other tools\n# billing_token_warn: 0 # one-shot warning at this token count (0 = disabled)\n#\n# stale_age_days: 30 # days before drift flags a context file as stale (0 = disabled)\n# key_rotation_days: 90\n# task_nudge_interval: 5 # Edit/Write calls between task completion nudges\n#\n# notify: # requires: ctx notify setup\n# events: # required: no events sent unless listed\n# - loop\n# - nudge\n# - relay\n#\n# priority_order:\n# - CONSTITUTION.md\n# - TASKS.md\n# - CONVENTIONS.md\n# - ARCHITECTURE.md\n# - DECISIONS.md\n# - LEARNINGS.md\n# - GLOSSARY.md\n# - AGENT_PLAYBOOK.md\n
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#option-reference","level":3,"title":"Option Reference","text":"Option Type Default Description context_dirstring.context Context directory name (relative to project root) token_budgetint8000 Default token budget for ctx agent and ctx loadauto_archivebooltrue Auto-archive completed tasks during ctx compactarchive_after_daysint7 Days before completed tasks are archived scratchpad_encryptbooltrue Encrypt scratchpad with AES-256-GCM allow_outside_cwdboolfalse Allow context directory outside the current working directory event_logboolfalse Enable local hook event logging to .context/state/events.jsonlentry_count_learningsint30 Drift warning when LEARNINGS.md exceeds this entry count (0 = disable) entry_count_decisionsint20 Drift warning when DECISIONS.md exceeds this entry count (0 = disable) convention_line_countint200 Drift warning when CONVENTIONS.md exceeds this line count (0 = disable) injection_token_warnint15000 Warn when auto-injected context exceeds this token count (0 = disable) context_windowint200000 Context window size in tokens. Auto-detected for Claude Code (200k/1M); override for other AI tools billing_token_warnint0 (off) One-shot warning when session tokens exceed this threshold (0 = disabled). For plans where tokens beyond an included allowance cost extra stale_age_daysint30 Days before ctx drift flags a context file as stale (0 = disable) key_rotation_daysint90 Days before encryption key rotation nudge task_nudge_intervalint5 Edit/Write calls between task completion nudges notify.events[]string (all) Event filter for webhook notifications (empty = all) priority_order[]string (see below) Custom file loading priority for context assembly
Default priority order (used when priority_order is not set):
CONSTITUTION.md
TASKS.md
CONVENTIONS.md
ARCHITECTURE.md
DECISIONS.md
LEARNINGS.md
GLOSSARY.md
AGENT_PLAYBOOK.md
See Context Files for the rationale behind this ordering.
Environment variables override .ctxrc values but are overridden by CLI flags.
Variable Description Equivalent .ctxrc key CTX_DIR Override the context directory path context_dirCTX_TOKEN_BUDGET Override the default token budget token_budget","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#examples","level":3,"title":"Examples","text":"
# Use a shared context directory\nCTX_DIR=/shared/team-context ctx status\n\n# Increase token budget for a single run\nCTX_TOKEN_BUDGET=16000 ctx agent\n
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#cli-global-flags","level":2,"title":"CLI Global Flags","text":"
CLI flags have the highest priority and override both environment variables and .ctxrc settings. These flags are available on every ctx command.
Flag Description --context-dir <path> Override context directory (default: .context/) --allow-outside-cwd Allow context directory outside current working directory --version Show version and exit --help Show command help and exit","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#examples_1","level":3,"title":"Examples","text":"
# Point to a different context directory:\nctx status --context-dir /path/to/shared/.context\n\n# Allow external context directory (skips boundary check):\nctx status --context-dir /mnt/nas/project-context --allow-outside-cwd\n
Layer Value Wins? --context-dir/tmp/ctx Yes CTX_DIR/shared/context No .ctxrc.my-context No Default .context No
The CLI flag /tmp/ctx is used because it has the highest priority.
If the CLI flag were absent, CTX_DIR=/shared/context would win. If neither the flag nor the env var were set, the .ctxrc value .my-context would be used. With nothing configured, the default .context applies.
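The fall-through described above can be sketched with shell parameter defaults. The variable names are illustrative, not ctx internals:

```shell
# Illustrative resolution order: CLI flag > env var > .ctxrc > built-in default.
# Variable names are hypothetical, not ctx internals.
cli_flag="/tmp/ctx"          # --context-dir /tmp/ctx
env_var="/shared/context"    # CTX_DIR
rc_value=".my-context"       # context_dir in .ctxrc
default=".context"

resolved="${cli_flag:-${env_var:-${rc_value:-$default}}}"
echo "resolved context dir: $resolved"

# With the flag absent, the environment variable wins:
cli_flag=""
resolved2="${cli_flag:-${env_var:-${rc_value:-$default}}}"
echo "without flag: $resolved2"
```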
Get a one-shot warning when your session crosses a token threshold where extra charges begin (e.g., Claude Pro includes 200k tokens; beyond that costs extra):
# .ctxrc\nbilling_token_warn: 180000 # warn before hitting the 200k paid boundary\n
The warning fires once per session the first time token usage exceeds the threshold. Set to 0 (or omit) to disable.
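The one-shot behavior is a simple latch. A sketch of the idea (not ctx source; function and variable names are hypothetical):

```shell
# Illustrative one-shot threshold latch (not ctx source): warn the first
# time usage crosses billing_token_warn, then stay silent for the session.
threshold=180000
warned=0
warnings=0
check_tokens() {
  if [ "$warned" -eq 0 ] && [ "$1" -ge "$threshold" ]; then
    echo "billing warning: $1 tokens used (threshold $threshold)"
    warned=1
    warnings=$((warnings + 1))
  fi
}
check_tokens 120000   # below the threshold: silent
check_tokens 185000   # crosses it: warns once
check_tokens 190000   # above again: still silent (one-shot)
```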
Hook messages control what text hooks emit when they fire. Each message can be overridden per-project by placing a text file at the matching path under .context/:
.context/hooks/messages/{hook}/{variant}.txt\n
The override takes priority over the embedded default compiled into the ctx binary. An empty file silences the message while preserving the hook's logic (counting, state tracking, cooldowns).
Use ctx system message to discover and manage overrides:
ctx system message list # see all messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default for editing\nctx system message reset qa-reminder gate # revert to default\n
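The same result can be reached by writing the override file directly. The hook and variant names are taken from the example above; the message text is illustrative:

```shell
# Manual equivalent of editing an override (hook/variant from the example above).
root=$(mktemp -d)                         # stand-in for the project root
dir="$root/.context/hooks/messages/qa-reminder"
mkdir -p "$dir"
printf 'Run the QA checks first.\n' > "$dir/gate.txt"   # custom message text
# To silence the message while keeping the hook's logic, truncate it instead:
#   : > "$dir/gate.txt"
cat "$dir/gate.txt"
```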
See Customizing Hook Messages for detailed examples including Python, JavaScript, and silence configurations.
AI agents need to know the resolved context directory at session start. The ctx system bootstrap command prints the context path, file list, and operating rules in both text and JSON formats:
ctx system bootstrap # text output for agents\nctx system bootstrap -q # just the context directory path\nctx system bootstrap --json # structured output for automation\n
The CLAUDE.md template instructs the agent to run this as its first action. Every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: <dir> footer that re-anchors the agent to the correct directory throughout the session.
This replaces the previous approach of hardcoding .context/ paths in agent instructions.
See CLI Reference: bootstrap for full details.
See also: CLI Reference | Context Files | Scratchpad
Each context file in .context/ serves a specific purpose.
Files are designed to be human-readable, AI-parseable, and token-efficient.
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#file-overview","level":2,"title":"File Overview","text":"File Purpose Priority CONSTITUTION.md Hard rules that must NEVER be violated 1 (highest) TASKS.md Current and planned work 2 CONVENTIONS.md Project patterns and standards 3 ARCHITECTURE.md System overview and components 4 DECISIONS.md Architectural decisions with rationale 5 LEARNINGS.md Lessons learned, gotchas, tips 6 GLOSSARY.md Domain terms and abbreviations 7 AGENT_PLAYBOOK.md Instructions for AI tools 8 (lowest) templates/ Entry format templates for ctx add (optional)","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#read-order-rationale","level":2,"title":"Read Order Rationale","text":"
The priority order follows a logical progression for AI tools:
CONSTITUTION.md: Inviolable rules first. The AI tool must know what it cannot do before attempting anything.
TASKS.md: Current work items. What the AI tool should focus on.
CONVENTIONS.md: How to write code. Patterns and standards to follow when implementing tasks.
ARCHITECTURE.md: System structure. Understanding of components and boundaries before making changes.
DECISIONS.md: Historical context. Why things are the way they are, to avoid re-debating settled decisions.
LEARNINGS.md: Gotchas and tips. Lessons from past work that inform the current implementation.
GLOSSARY.md: Reference material. Domain terms and abbreviations for lookup as needed.
AGENT_PLAYBOOK.md: Meta instructions last. How to use this context system itself. Loaded last because the agent should understand the content (rules, tasks, patterns) before the operating manual.
# Constitution\n\nThese rules are INVIOLABLE. If a task requires violating these, the task \nis wrong.\n\n## Security Invariants\n\n* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] Never store customer/user data in context files\n* [ ] Never disable security linters without documented exception\n\n## Quality Invariants\n\n* [ ] All code must pass tests before commit\n* [ ] No `any` types in TypeScript without documented reason\n* [ ] No TODO comments in main branch (*move to `TASKS.md`*)\n\n## Process Invariants\n\n* [ ] All architectural changes require a decision record\n* [ ] Breaking changes require version bump\n* [ ] Generated files are never committed\n
Tag Values Purpose #priorityhigh, medium, low Task urgency #areacore, cli, docs, tests Codebase area #estimate1h, 4h, 1d Time estimate (optional) #in-progress (none) Currently being worked on
Lifecycle tags (for session correlation):
Tag Format When to add #addedYYYY-MM-DD-HHMMSS Auto-added by ctx add task#startedYYYY-MM-DD-HHMMSS When beginning work on the task #doneYYYY-MM-DD-HHMMSS When marking the task [x]
These timestamps help correlate tasks with session files and track which session started vs completed work.
# Decisions\n\n## [YYYY-MM-DD] Decision Title\n\n**Status**: Accepted | Superseded | Deprecated\n\n**Context**: What situation prompted this decision?\n\n**Decision**: What was decided?\n\n**Rationale**: Why was this the right choice?\n\n**Consequence**: What are the implications?\n\n**Alternatives Considered**:\n* Alternative A: Why rejected\n* Alternative B: Why rejected\n
## [2025-01-15] Use TypeScript Strict Mode\n\n**Status**: Accepted\n\n**Context**: Starting a new project, need to choose the type-checking level.\n\n**Decision**: Enable TypeScript strict mode with all strict flags.\n\n**Rationale**: Catches more bugs at compile time. Team has experience\nwith strict mode. Upfront cost pays off in reduced runtime errors.\n\n**Consequence**: More verbose type annotations required. Some\nthird-party libraries need type assertions.\n\n**Alternatives Considered**:\n- Basic TypeScript: Rejected because it misses null checks\n- JavaScript with JSDoc: Rejected because tooling support is weaker\n
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#status-values","level":3,"title":"Status Values","text":"Status Meaning Accepted Current, active decision Superseded Replaced by newer decision (link to it) Deprecated No longer relevant","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#learningsmd","level":2,"title":"LEARNINGS.md","text":"
Purpose: Capture lessons learned, gotchas, and tips that shouldn't be forgotten.
# Learnings\n\n## Category Name\n\n### Learning Title\n\n**Discovered**: YYYY-MM-DD\n\n**Context**: When/how was this learned?\n\n**Lesson**: What's the takeaway?\n\n**Application**: How should this inform future work?\n
## Testing\n\n### Vitest Mocks Must Be Hoisted\n\n**Discovered**: 2025-01-15\n\n**Context**: Tests were failing intermittently when mocking fs module.\n\n**Lesson**: Vitest requires `vi.mock()` calls to be hoisted to the\ntop of the file. Dynamic mocks need `vi.doMock()` instead.\n\n**Application**: Always use `vi.mock()` at file top. Use `vi.doMock()`\nonly when mock needs runtime values.\n
# Conventions\n\n## Naming\n\n* **Files**: kebab-case for all source files\n* **Components**: PascalCase for React components\n* **Functions**: camelCase, verb-first (getUser, parseConfig)\n* **Constants**: SCREAMING_SNAKE_CASE\n\n## Patterns\n\n### Pattern Name\n\n**When to use**: Situation description\n\n**Implementation**:\n// in triple backticks\n// Example code\n\n**Why**: Rationale for this pattern\n
# Architecture\n\n## Overview\n\nBrief description of what the system does and how it's organized.\n\n## Components\n\n### Component Name\n\n**Responsibility**: What this component does\n\n**Dependencies**: What it depends on\n\n**Dependents**: What depends on it\n\n**Key Files**:\n* path/to/file.ts: Description\n\n## Data Flow\n\nDescription or diagram of how data moves through the system.\n\n## Boundaries\n\nWhat's in scope vs out of scope for this codebase.\n
# Glossary\n\n## Domain Terms\n\n### Term Name\n\n**Definition**: What it means in this project's context\n\n**Not to be confused with**: Similar terms that mean different things\n\n**Example**: How it's used\n\n## Abbreviations\n\n| Abbrev | Expansion | Context |\n|--------|-------------------------------|------------------------|\n| ADR | Architectural Decision Record | Decision documentation |\n| SUT | System Under Test | Testing |\n
Read Order: Priority order for loading context files
When to Update: Events that trigger context updates
How to Avoid Hallucinating Memory: Critical rules:
Never assume: If not in files, you don't know it
Never invent history: Don't claim \"we discussed\" without evidence
Verify before referencing: Search files before citing
When uncertain, say so
Trust files over intuition
Context Update Commands: Format for automated updates via ctx watch:
<context-update type=\"task\">Implement rate limiting</context-update>\n<context-update type=\"complete\">user auth</context-update>\n<context-update type=\"learning\"\n context=\"Debugging hooks\"\n lesson=\"Hooks receive JSON via stdin\"\n application=\"Parse JSON stdin with the host language\"\n>Hook Input Format</context-update>\n<context-update type=\"decision\"\n context=\"Need a caching layer\"\n rationale=\"Redis is fast and team has experience\"\n consequence=\"Must provision Redis infrastructure\"\n>Use Redis for caching</context-update>\n
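For intuition, here is how a simple one-line task directive can be pulled out of a stream with standard tools. This is a simplified illustration, not ctx watch's actual parser, which also handles the multi-attribute forms shown above:

```shell
# Simplified illustration (single-line task directives only); not the
# real ctx watch parser.
output='Done. <context-update type="task">Implement rate limiting</context-update>'
task=$(printf '%s\n' "$output" |
  sed -n 's/.*<context-update type="task">\(.*\)<\/context-update>.*/\1/p')
echo "captured task: $task"
```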
Purpose: Format templates for ctx add decision and ctx add learning. These control the structure of new entries appended to DECISIONS.md and LEARNINGS.md.
Edit the templates directly. Changes take effect immediately on the next ctx add command. For example, to add a \"References\" section to all new decisions, edit .context/templates/decision.md.
Templates are committed to git, so customizations are shared with the team.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#1-fork-or-clone-the-repository","level":3,"title":"1. Fork (or Clone) the Repository","text":"
# Fork on GitHub, then:\ngit clone https://github.com/<you>/ctx.git\ncd ctx\n\n# Or, if you have push access:\ngit clone https://github.com/ActiveMemory/ctx.git\ncd ctx\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#2-build-and-install-the-binary","level":3,"title":"2. Build and Install the Binary","text":"
make build\nsudo make install\n
This compiles the ctx binary and places it in /usr/local/bin/.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#3-install-the-plugin-from-your-local-clone","level":3,"title":"3. Install the Plugin from Your Local Clone","text":"
The repository ships a Claude Code plugin under internal/assets/claude/. Point Claude Code at your local copy so that skills and hooks reflect your working tree (no reinstall needed after edits):
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace
Enter the absolute path to the root of your clone, e.g. ~/WORKSPACE/ctx (this is where .claude-plugin/marketplace.json lives: it points Claude Code to the actual plugin in internal/assets/claude);
Back in /plugin, select Install and choose ctx.
Claude Code Caches Plugin Files
Even though the marketplace points at a directory on disk, Claude Code caches skills and hooks. After editing files under internal/assets/claude/, clear the cache and restart:
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#skills-two-directories-one-rule","level":3,"title":"Skills: Two Directories, One Rule","text":"Directory What lives here Distributed to users? internal/assets/claude/skills/ The 39 ctx-* skills that ship with the plugin Yes .claude/skills/ Dev-only skills (release, QA, backup, etc.) No
internal/assets/claude/skills/ is the single source of truth for user-facing skills. If you are adding or modifying a ctx-* skill, edit it there.
.claude/skills/ holds skills that only make sense inside this repository (release automation, QA checks, backup scripts). These are never distributed to users.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#dev-only-skills-reference","level":4,"title":"Dev-Only Skills Reference","text":"Skill When to use /_ctx-absorb Merge deltas from a parallel worktree or separate checkout /_ctx-audit Detect code-level drift after YOLO sprints or before releases /_ctx-backup Backup context and Claude data to SMB share /_ctx-qa Run QA checks before committing /_ctx-release Run the full release process /_ctx-release-notes Generate release notes for dist/RELEASE_NOTES.md/_ctx-alignment-audit Audit doc claims against agent instructions /_ctx-update-docs Check docs/code consistency after changes
Five skills previously in this list have been promoted to bundled plugin skills and are now available to all ctx users: /ctx-brainstorm, /ctx-check-links, /ctx-sanitize-permissions, /ctx-skill-creator, /ctx-spec.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#how-to-add-things","level":2,"title":"How To Add Things","text":"","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-new-cli-command","level":3,"title":"Adding a New CLI Command","text":"
Create a package under internal/cli/<name>/;
Implement Cmd() *cobra.Command as the entry point;
Register it in internal/bootstrap/bootstrap.go (add import + call in Initialize);
Use cmd.Printf/cmd.Println for output (not fmt.Print);
Add tests in the same package (<name>_test.go);
Add a section to the appropriate CLI doc page in docs/cli/.
Pattern to follow: internal/cli/pad/pad.go (parent with subcommands) or internal/cli/drift/drift.go (single command).
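The steps above can be sketched end-to-end. Everything here is illustrative: the hello command, its file contents, and the grep check are assumptions for demonstration, not part of ctx.

```shell
# Hypothetical scaffold for a new "hello" command; all names are illustrative.
mkdir -p internal/cli/hello
cat > internal/cli/hello/hello.go <<'EOF'
// Package hello implements a hypothetical `ctx hello` subcommand.
package hello

import "github.com/spf13/cobra"

// Cmd is the entry point to register in internal/bootstrap/bootstrap.go.
func Cmd() *cobra.Command {
	return &cobra.Command{
		Use:   "hello",
		Short: "Print a greeting",
		Run: func(cmd *cobra.Command, args []string) {
			cmd.Println("hello") // cmd.Println, not fmt.Println
		},
	}
}
EOF
grep -c 'func Cmd()' internal/cli/hello/hello.go   # prints 1
```

From there, wire the import and Initialize call in internal/bootstrap/bootstrap.go and add hello_test.go alongside it.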
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-new-session-parser","level":3,"title":"Adding a New Session Parser","text":"
The journal system uses a SessionParser interface. To add support for a new AI tool (e.g. Aider, Cursor):
Create internal/journal/parser/<tool>.go;
Implement parsing logic that returns []*Session;
Register the parser in FindSessions() / FindSessionsForCWD();
Use config.Tool* constants for the tool identifier;
Add test fixtures and parser tests.
Pattern to follow: the Claude Code JSONL parser in internal/journal/parser/.
Multilingual session headers
The Markdown parser recognizes session header prefixes configured via session_prefixes in .ctxrc (default: Session:). To support a new language, users add a prefix to their .ctxrc - no code change needed. New parser implementations can use rc.SessionPrefixes() if they also need prefix-based header detection.
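As a sketch, assuming session_prefixes takes a list in the same key = value format the Configuration examples use for .ctxrc (the exact list syntax is an assumption; check the Configuration reference), adding a German prefix might look like:

```
# .ctxrc -- illustrative; exact list syntax may differ, see Configuration
session_prefixes = ["Session:", "Sitzung:"]
```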
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-bundled-skill","level":3,"title":"Adding a Bundled Skill","text":"
The repo ships two .ctxrc source profiles. The working copy (.ctxrc) is gitignored and swapped between them:
File Purpose .ctxrc.base Golden baseline: all defaults, no logging .ctxrc.dev Dev profile: notify events enabled, verbose logging .ctxrc Working copy (gitignored: copied from one of the above)
Use ctx commands to switch:
ctx config switch dev # switch to dev profile\nctx config switch base # switch to base profile\nctx config status # show which profile is active\n
After cloning, run ctx config switch dev to get started with full logging.
See Configuration for the full .ctxrc option reference.
Back up project context and global Claude Code data with:
ctx system backup # both project + global (default)\nctx system backup --scope project # .context/, .claude/, ideas/ only\nctx system backup --scope global # ~/.claude/ only\n
Archives are saved to /tmp/. When CTX_BACKUP_SMB_URL is configured, they are also copied to an SMB share. See CLI Reference: backup for details.
make test # fast: all tests\nmake audit # full: fmt + vet + lint + drift + docs + test\nmake smoke # build + run basic commands end-to-end\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#running-the-docs-site-locally","level":3,"title":"Running the Docs Site Locally","text":"
make site-setup # one-time: install zensical via pipx\nmake site-serve # serve at localhost\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#submitting-changes","level":2,"title":"Submitting Changes","text":"","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#before-you-start","level":3,"title":"Before You Start","text":"
Check existing issues to avoid duplicating effort;
For large changes, open an issue first to discuss the approach;
Markdown is human-readable, version-controllable, and tool-agnostic. Every AI model can parse it natively. Every developer can read it in a terminal, a browser, or a code review. There's no schema to learn, no binary format to decode, no vendor lock-in. You can inspect your context with cat, diff it with git diff, and review it in a PR.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#does-ctx-work-offline","level":2,"title":"Does ctx work offline?","text":"
Yes. ctx is completely local. It reads and writes files on disk, generates context packets from local state, and requires no network access. The only feature that touches the network is the optional webhook notifications hook, which you have to explicitly configure.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#what-gets-committed-to-git","level":2,"title":"What gets committed to git?","text":"
The .context/ directory: yes, commit it. That's the whole point. Team members and AI agents read the same context files.
What not to commit:
.ctx.key: your encryption key. Stored at ~/.ctx/.ctx.key, never in the repo. ctx init handles this automatically.
journal/ and logs/: generated data, potentially large. ctx init adds these to .gitignore.
scratchpad.enc: your choice. It's encrypted, so it's safe to commit if you want shared scratchpad state. See Scratchpad for details.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#how-big-should-my-token-budget-be","level":2,"title":"How big should my token budget be?","text":"
The default is 8000 tokens, which works well for most projects. Configure it via .ctxrc or the CTX_TOKEN_BUDGET environment variable:
# In .ctxrc\ntoken_budget = 12000\n\n# Or as an environment variable\nexport CTX_TOKEN_BUDGET=12000\n\n# Or per-invocation\nctx agent --budget 4000\n
Higher budgets include more context but cost more tokens per request. Lower budgets force sharper prioritization: ctx drops lower-priority content first, so CONSTITUTION and TASKS always make the cut.
See Configuration for all available settings.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#why-not-a-database","level":2,"title":"Why not a database?","text":"
Files are inspectable, diffable, and reviewable in pull requests. You can grep them, cat them, pipe them through jq or awk. They work with every version control system and every text editor.
A database would add a dependency, require migrations, and make context opaque. The design bet is that context should be as visible and portable as the code it describes.
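For example, because entries are plain Markdown with timestamped headers, counting decisions is one grep away. The file contents below are illustrative setup so the commands run standalone:

```shell
# Context files are plain text, so standard tools apply (contents illustrative).
mkdir -p .context
printf '## [2026-01-28-143022] Use PostgreSQL\n\n**Status**: Accepted\n' > .context/DECISIONS.md
grep -c '^## \[' .context/DECISIONS.md   # count decision entries: prints 1
```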
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#does-it-work-with-tools-other-than-claude-code","level":2,"title":"Does it work with tools other than Claude Code?","text":"
Yes. ctx agent outputs a context packet that any AI tool can consume: paste it into ChatGPT, Cursor, Copilot, Aider, or anything else that accepts text input.
Claude Code gets first-class integration via the ctx plugin (hooks, skills, automatic context loading). VS Code Copilot Chat has a dedicated ctx extension. Other tools integrate via generated instruction files or manual pasting.
See Integrations for tool-specific setup, including the multi-tool recipe.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#can-i-use-ctx-on-an-existing-project","level":2,"title":"Can I use ctx on an existing project?","text":"
Yes. Run ctx init in any repo and it creates .context/ with template files. Start recording decisions, tasks, and conventions as you work. Context grows naturally; you don't need to backfill everything on day one.
See Getting Started for the full setup flow, or Joining a ctx Project if someone else already initialized it.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#what-happens-when-context-files-get-too-big","level":2,"title":"What happens when context files get too big?","text":"
Token budgeting handles this automatically. ctx agent prioritizes content by file priority (CONSTITUTION first, GLOSSARY last) and trims lower-priority entries when the budget is tight.
For manual maintenance, ctx compact archives completed tasks and old entries, keeping active context lean. You can also run ctx task archive to move completed tasks out of TASKS.md.
The goal is to keep context files focused on current state. Historical entries belong in git history or the archive.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#is-context-meant-to-be-shared","level":2,"title":"Is .context/ meant to be shared?","text":"
Yes. Commit it to your repo. Every team member and every AI agent reads the same files. That's the mechanism for shared memory: decisions made in one session are visible in the next, regardless of who (or what) starts it.
The only per-user state is the encryption key (~/.ctx/.ctx.key) and the optional scratchpad. Everything else is team-shared by design.
Related:
Getting Started - installation and first setup
Configuration - .ctxrc, environment variables, and defaults
Context Files - what each file does and how to use it
","path":["Home","FAQ"],"tags":[]},{"location":"home/first-session/","level":1,"title":"Your First Session","text":"
Here's what a complete first session looks like, from initialization to the moment your AI cites your project context back to you.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-1-initialize-your-project","level":2,"title":"Step 1: Initialize Your Project","text":"
Run ctx init in your project root:
cd your-project\nctx init\n
Sample output:
Context initialized in .context/\n\n ✓ CONSTITUTION.md\n ✓ TASKS.md\n ✓ DECISIONS.md\n ✓ LEARNINGS.md\n ✓ CONVENTIONS.md\n ✓ ARCHITECTURE.md\n ✓ GLOSSARY.md\n ✓ AGENT_PLAYBOOK.md\n\nSetting up encryption key...\n ✓ ~/.ctx/.ctx.key\n\nClaude Code plugin (hooks + skills):\n Install: claude /plugin marketplace add ActiveMemory/ctx\n Then: claude /plugin install ctx@activememory-ctx\n\nNext steps:\n 1. Edit .context/TASKS.md to add your current tasks\n 2. Run 'ctx status' to see context summary\n 3. Run 'ctx agent' to get AI-ready context packet\n
This created your .context/ directory with template files.
For Claude Code, install the ctx plugin to get automatic hooks and skills.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-2-populate-your-context","level":2,"title":"Step 2: Populate Your Context","text":"
Add a task and a decision: These are the entries your AI will remember:
ctx add task \"Implement user authentication\"\n\n# Output: ✓ Added to TASKS.md\n\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Output: ✓ Added to DECISIONS.md\n
These entries are what the AI will recall in future sessions. You don't need to populate everything now: Context grows naturally as you work.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-3-check-your-context","level":2,"title":"Step 3: Check Your Context","text":"
Run ctx status to see your context summary. Notice the token estimate: This is how much context your AI will load.
The ○ next to LEARNINGS.md means it's still empty; it will fill in as you capture lessons during development.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-4-start-an-ai-session","level":2,"title":"Step 4: Start an AI Session","text":"
With Claude Code (and the ctx plugin), start every session with:
/ctx-remember\n
This loads your context and presents a structured readback so you can confirm the agent knows what is going on. Context also loads automatically via hooks, but the explicit ceremony gives you a readback to verify.
Using VS Code?
With VS Code Copilot Chat (and the ctx extension), type @ctx /agent in chat to load your context packet, or @ctx /status to check your project context. Run ctx setup copilot --write once to generate .github/copilot-instructions.md for automatic context loading.
If you are not using Claude Code, generate a context packet for your AI tool:
ctx agent --budget 8000\n
Sample output:
# Context Packet\nGenerated: 2026-02-14T15:30:45Z | Budget: 8000 tokens | Used: ~2450\n\n## Read These Files (in order)\n1. .context/CONSTITUTION.md\n2. .context/TASKS.md\n3. .context/CONVENTIONS.md\n...\n\n## Current Tasks\n- [ ] Implement user authentication\n- [ ] Add rate limiting to API endpoints\n\n## Key Conventions\n- Use gofmt for formatting\n- Path construction uses filepath.Join\n\n## Recent Decisions\n## [2026-02-14-120000] Use PostgreSQL for the primary database\n\n**Context**: Evaluated PostgreSQL, MySQL, and SQLite...\n**Rationale**: PostgreSQL offers better JSON support...\n\n## Key Learnings\n## [2026-02-14-100000] Connection pool sizing matters\n\n**Context**: Hit connection limits under load...\n**Lesson**: Default pool size of 10 is too low for concurrent requests...\n\n## Also Noted\n- Use JWT for session management\n- Always validate input at API boundary\n
Paste this output into your AI tool's system prompt or conversation start.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-5-verify-it-works","level":2,"title":"Step 5: Verify It Works","text":"
Ask your AI: \"What are our current tasks?\"
A working setup produces a response like:
Based on the project context, you have one active task:\n\n- **Implement user authentication** (pending)\n\nThere's also a recent architectural decision to **use PostgreSQL for\nthe primary database**, chosen for its ACID compliance and JSON support.\n\nWant me to start on the authentication task?\n
That's the success moment:
The AI is citing your exact context entries from Step 2, not hallucinating or asking you to re-explain.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#what-gets-created","level":2,"title":"What Gets Created","text":"
.context/\n├── CONSTITUTION.md # Hard rules: NEVER violate these\n├── TASKS.md # Current and planned work\n├── CONVENTIONS.md # Project patterns and standards\n├── ARCHITECTURE.md # System overview\n├── DECISIONS.md # Architectural decisions with rationale\n├── LEARNINGS.md # Lessons learned, gotchas, tips\n├── GLOSSARY.md # Domain terms and abbreviations\n└── AGENT_PLAYBOOK.md # How AI tools should use this\n
Claude Code integration (hooks + skills) is provided by the ctx plugin: See Integrations/Claude Code.
VS Code Copilot Chat integration is provided by the ctx extension: See Integrations/VS Code.
See Context Files for detailed documentation of each file.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#what-to-gitignore","level":2,"title":"What to .gitignore","text":"
Rule of Thumb
If it's knowledge (decisions, tasks, learnings, conventions), commit it.
If it's generated output, raw session data, or a secret, .gitignore it.
Commit your .context/ knowledge files: that's the whole point.
You should .gitignore the generated and sensitive paths:
# Journal data (large, potentially sensitive)\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Hook logs (machine-specific)\n.context/logs/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
ctx init Patches Your .gitignore for You
ctx init automatically adds these entries to your .gitignore.
Review the additions with cat .gitignore after init.
See also:
Security Considerations
Scratchpad Encryption
Session Journal
Next Up: Common Workflows →: day-to-day commands for tracking context, checking health, and browsing history.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/getting-started/","level":1,"title":"Getting Started","text":"","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#prerequisites","level":2,"title":"Prerequisites","text":"
ctx does not require git, but using version control with your .context/ directory is strongly recommended:
AI sessions occasionally modify or overwrite context files inadvertently. With git, the AI can check history and restore lost content: Without it, the data is gone.
Several ctx features (journal changelog, blog generation) also use git history directly.
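A minimal sketch of why version control matters here. Everything below is illustrative and self-contained: the repo name, identity, and task text are made up for the demo:

```shell
# Illustrative: with git, an overwritten context file is one command from recovery.
git init -q demo && cd demo
mkdir -p .context
echo '- [ ] Ship v1' > .context/TASKS.md
git add . && git -c user.name=dev -c user.email=dev@example.com commit -qm 'add context'
echo 'oops' > .context/TASKS.md     # simulate an accidental AI overwrite
git checkout -- .context/TASKS.md   # restore the committed version
cat .context/TASKS.md               # prints: - [ ] Ship v1
```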
Every setup starts with the ctx binary: the CLI tool itself.
If you use Claude Code, you also install the ctx plugin, which adds hooks (context autoloading, persistence nudges) and 25+ /ctx-* skills. For other AI tools, ctx integrates via generated instruction files or manual context pasting: see Integrations for tool-specific setup.
Pick one of the options below to install the binary. Claude Code users should also follow the plugin steps included in each option.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#option-1-build-from-source-recommended","level":3,"title":"Option 1: Build from Source (Recommended)","text":"
Requires Go (the version defined in go.mod); the plugin steps below additionally require Claude Code.
git clone https://github.com/ActiveMemory/ctx.git\ncd ctx\nmake build\nsudo make install\n
Install the Claude Code plugin from your local clone:
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace;
Enter the path to the root of your clone, e.g. ~/WORKSPACE/ctx (this is where .claude-plugin/marketplace.json lives: It points Claude Code to the actual plugin in internal/assets/claude);
Back in /plugin, select Install and choose ctx.
This points Claude Code at the plugin source on disk. Edits to hooks or skills take effect without reinstalling, though Claude Code caches plugin files: clear the cache and restart after changes.
Local Installs Need Manual Enablement
Unlike marketplace installs, local plugin installs are not auto-enabled globally. The plugin will only work in projects that explicitly enable it. Run ctx init in each project (it auto-enables the plugin), or add the entry to ~/.claude/settings.json manually:
Download ctx-0.8.1-windows-amd64.exe from the releases page and add it to your PATH.
Claude Code users: install the plugin from the marketplace:
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace;
Enter ActiveMemory/ctx;
Back in /plugin, select Install and choose ctx.
Other tool users: see Integrations for tool-specific setup (Cursor, Copilot, Aider, Windsurf, etc.).
Verify the Plugin Is Enabled
After installing, confirm the plugin is enabled globally. Check ~/.claude/settings.json for an enabledPlugins entry. If missing, run ctx init in your project (it auto-enables the plugin), or add it manually:
This creates a .context/ directory with template files and an encryption key at ~/.ctx/ for the encrypted scratchpad. For Claude Code, install the ctx plugin for automatic hooks and skills.
Shows context summary: files present, token estimate, and recent activity.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#3-start-using-with-ai","level":3,"title":"3. Start Using with AI","text":"
With Claude Code (and the ctx plugin installed), context loads automatically via hooks.
With VS Code Copilot Chat, install the ctx extension and use @ctx /status, @ctx /agent, and other slash commands directly in chat. Run ctx setup copilot --write to generate .github/copilot-instructions.md for automatic context loading.
For other tools, paste the output of:
ctx agent --budget 8000\n
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#4-verify-it-works","level":3,"title":"4. Verify It Works","text":"
Ask your AI: \"Do you remember?\"
It should cite specific context: current tasks, recent decisions, or previous session topics.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#5-set-up-companion-tools-highly-recommended","level":3,"title":"5. Set Up Companion Tools (Highly Recommended)","text":"
ctx works on its own, but two companion MCP servers unlock significantly better agent behavior. The investment is small and the benefits compound over sessions:
Gemini Search — grounded web search with citations. Skills like /ctx-code-review and /ctx-explain use it for up-to-date documentation lookups instead of relying on training data.
GitNexus — code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Skills like /ctx-refactor and /ctx-code-review use it for impact analysis and dependency awareness.
# Index your project for GitNexus (run once, then after major changes)\nnpx gitnexus analyze\n
Both are optional MCP servers: if they are not connected, skills degrade gracefully to built-in capabilities. See Companion Tools for setup details and verification.
Next Up:
Your First Session →: a step-by-step walkthrough from ctx init to verified recall
Common Workflows →: day-to-day commands for tracking context, checking health, and browsing history
","path":["Home","Getting Started"],"tags":[]},{"location":"home/is-ctx-right/","level":1,"title":"Is It Right for Me?","text":"","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#good-fit","level":2,"title":"Good Fit","text":"
ctx shines when context matters more than code.
If any of these sound like your project, it's worth trying:
Multi-session AI work: You use AI across many sessions on the same codebase, and re-explaining is slowing you down.
Architectural decisions that matter: Your project has non-obvious choices (database, auth strategy, API design) that the AI keeps second-guessing.
\"Why\" matters as much as \"what\": you need the AI to understand rationale, not just current code
Team handoffs: Multiple people (or multiple AI tools) work on the same project and need shared context.
AI-assisted development across tools: You switch between Claude Code, Cursor, Copilot, or other tools and want context to follow the project, not the tool.
Long-lived projects: Anything you'll work on for weeks or months, where accumulated knowledge has compounding value.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#may-not-be-the-right-fit","level":2,"title":"May Not Be the Right Fit","text":"
ctx adds overhead that isn't worth it for every project. Be honest about when to skip it:
One-off scripts: If the project is a single file you'll finish today, there's nothing to remember.
RAG-only workflows: If retrieval from an external knowledge base already gives the agent everything it needs for each session, adding ctx may be unnecessary. RAG retrieves information; ctx defines the project's working memory: They are complementary.
No AI involvement: ctx is designed for human-AI workflows; without an AI consumer, the files are just documentation.
Enterprise-managed context platforms: If your organization provides centralized context services, ctx may duplicate that layer.
For a deeper technical comparison with RAG, prompt management tools, and agent frameworks, see ctx and Similar Tools.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#project-size-guide","level":2,"title":"Project Size Guide","text":"","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#solo-developer-single-repo","level":3,"title":"Solo Developer, Single Repo","text":"
This is ctx's sweet spot.
You get the most value here: one person, one project, and decisions and learnings accumulating over time. Setup takes five minutes, the .context/ directory stays small, and every session gets faster.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#small-team-one-or-two-repos","level":3,"title":"Small Team, One or Two Repos","text":"
Works well.
Context files commit to git, so the whole team shares the same decisions and conventions. Each person's AI starts with the team's decisions already loaded. Merge conflicts on .context/ files are rare and easy to resolve (they are just Markdown).
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#multiple-repos-or-larger-teams","level":3,"title":"Multiple Repos or Larger Teams","text":"
ctx operates per repository.
Each repo has its own .context/ directory with its own decisions, tasks, and learnings. This matches the way code, ownership, and history already work in git.
There is no built-in cross-repo context layer.
For organizations that need centralized, organization-wide knowledge, ctx complements a platform solution by providing durable, project-local working memory for AI sessions.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#5-minute-trial","level":2,"title":"5-Minute Trial","text":"
Zero commitment. Try it, and delete .context/ if it's not for you.
Using Claude Code?
Install the ctx plugin from the Marketplace for Claude-native hooks, skills, and automatic context loading:
Type /plugin and press Enter
Select Marketplaces → Add Marketplace
Enter ActiveMemory/ctx
Back in /plugin, select Install and choose ctx
You'll still need the ctx binary for the CLI: See Getting Started for install options.
# 1. Initialize\ncd your-project\nctx init\n\n# 2. Add one real decision from your project\nctx add decision \"Your actual architectural choice\" \\\n --context \"What prompted this decision\" \\\n --rationale \"Why you chose this approach\" \\\n --consequence \"What changes as a result\"\n\n# 3. Check what the AI will see\nctx status\n\n# 4. Start an AI session and ask: \"Do you remember?\"\n
If the AI cites your decision back to you, it's working.
Want to remove it later? One command:
rm -rf .context/\n
No dependencies to uninstall. No configuration to revert. Just files.
Ready to try it out?
Join the Community →: Open Source is better together.
Getting Started →: Full installation and setup.
ctx and Similar Tools →: Detailed comparison with other approaches.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/joining-a-project/","level":1,"title":"Joining a ctx Project","text":"
You've joined a team or inherited a project, and there's a .context/ directory in the repo. Good news: someone already set up persistent context. This page gets you oriented fast.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#what-to-read-first","level":2,"title":"What to Read First","text":"
The files in .context/ have a deliberate priority order. Read them top-down:
CONSTITUTION.md: Hard rules. Read this before you touch anything. These are inviolable constraints the team has agreed on.
TASKS.md: Current and planned work. Shows what's in progress, what's pending, and what's blocked.
CONVENTIONS.md: How the team writes code. Naming patterns, file organization, preferred idioms.
ARCHITECTURE.md: System overview. Components, boundaries, data flow.
DECISIONS.md: Why things are the way they are. Saves you from re-proposing something the team already evaluated and rejected.
LEARNINGS.md: Gotchas, tips, and hard-won lessons. The stuff that doesn't fit anywhere else but will save you hours.
See Context Files for detailed documentation of each file's structure and purpose.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#checking-context-health","level":2,"title":"Checking Context Health","text":"
Before you start working, check whether the context is current:
ctx status\n
This shows file counts, token estimates, and recent activity. If files haven't been touched in weeks, the context may be stale.
ctx drift\n
This compares context files against recent code changes and flags potential drift: decisions that no longer match the codebase, conventions that have shifted, or tasks that look outdated.
If things are stale, mention it to the team. Don't silently fix it yourself on day one.
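If you want a quick, tool-agnostic staleness check alongside ctx drift, plain file timestamps work. The 30-day threshold and the setup lines are illustrative so the command runs standalone:

```shell
# Setup for a runnable example; in a real project .context/ already exists.
mkdir -p .context && touch .context/TASKS.md
# List context files not modified in the last 30 days (threshold illustrative).
find .context -name '*.md' -mtime +30   # fresh files print nothing
```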
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#starting-your-first-session","level":2,"title":"Starting Your First Session","text":"
Generate a context packet to prime your AI:
ctx agent --budget 8000\n
This outputs a token-budgeted summary of the project context, ordered by priority. With Claude Code and the ctx plugin, context loads automatically via hooks. You can also use the /ctx-remember skill to get a structured readback of what the AI knows.
The readback is your verification step: if the AI can cite specific tasks and decisions, the context is working.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#adding-context","level":2,"title":"Adding Context","text":"
As you work, you'll discover things worth recording. Use the CLI:
# Record a decision you made or learned about\nctx add decision \"Use connection pooling for DB access\" \\\n --rationale \"Reduces connection overhead under load\"\n\n# Capture a gotcha you hit\nctx add learning \"Redis timeout defaults to 5s\" \\\n --context \"Hit timeouts during bulk operations\" \\\n --application \"Set explicit timeout for batch jobs\"\n\n# Add a convention you noticed the team follows\nctx add convention \"All API handlers return structured errors\"\n
You can also just tell the AI: \"Record this as a learning\" or \"Add this decision to context.\" With the ctx plugin, context-update commands handle the file writes.
See the Knowledge Capture recipe for the full workflow.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#session-etiquette","level":2,"title":"Session Etiquette","text":"
A few norms for working in a ctx-managed project:
Respect existing conventions. If CONVENTIONS.md says \"use filepath.Join,\" use filepath.Join. If you disagree, propose a change, don't silently diverge.
Don't restructure context files without asking. The file layout and section structure are shared state. Reorganizing them affects every team member and every AI session.
Mark tasks done when complete. Check the box ([x]) in place. Don't move tasks between sections or delete them.
Add context as you go. Decisions, learnings, and conventions you discover are valuable to the next person (or the next session).
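Marking a task done is an in-place edit of the checkbox, nothing more. The task text below is illustrative, and sed prints the result rather than rewriting the file:

```shell
# Mark a task done by flipping its checkbox in place (task text illustrative).
mkdir -p .context
printf -- '- [ ] Implement user authentication\n' > .context/TASKS.md
sed 's/^- \[ \]/- [x]/' .context/TASKS.md   # prints: - [x] Implement user authentication
```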
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Ignoring CONSTITUTION.md. The constitution exists for a reason. If a task conflicts with a constitution rule, the task is wrong. Raise it with the team instead of working around the constraint.
Deleting tasks. Never delete a task from TASKS.md. Mark it [x] (done) or [-] (skipped with a reason). The history matters for session replay and audit.
Bypassing hooks. If the project uses ctx hooks (pre-commit nudges, context autoloading), don't disable them. They exist to keep context fresh. If a hook is noisy or broken, fix it or file a task.
Over-contributing on day one. Read first, then contribute. Adding a dozen learnings before you understand the project's norms creates noise, not signal.
Related:
Getting Started: installation and setup from scratch
Context Files: detailed file reference
Knowledge Capture: recording decisions, learnings, and conventions
Session Lifecycle: how a typical AI session flows with ctx
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/keeping-ai-honest/","level":1,"title":"Keeping AI Honest","text":"","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-problem","level":2,"title":"The Problem","text":"
AI agents confabulate. They invent history that never happened, claim familiarity with decisions that were never made, and sometimes declare a task complete when it is not. This is not malice - it is the default behavior of a system optimizing for plausible-sounding responses.
When your AI says \"we decided to use Redis for caching last week,\" can you verify that? When it says \"the auth module is complete,\" can you confirm it? Without grounded, persistent context, the answer is no. You are trusting vibes.
ctx replaces vibes with verifiable artifacts.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#grounded-memory","level":2,"title":"Grounded Memory","text":"
Every entry in ctx context files has a timestamp and structured fields. When the AI cites a decision, you can check it.
## [2026-01-28-143022] Use Event Sourcing for Audit Trail\n\n**Status**: Accepted\n\n**Context**: Compliance requires full mutation history.\n\n**Decision**: Event sourcing for the audit subsystem only.\n\n**Rationale**: Append-only log meets compliance requirements\nwithout imposing event sourcing on the entire domain model.\n
The timestamp 2026-01-28-143022 is not decoration. It is a verifiable anchor. If the AI references this decision, you can open DECISIONS.md, find the entry, and confirm it says what the AI claims. If the entry does not exist, the AI is hallucinating - and you know immediately.
This is grounded memory: claims that trace back to artifacts you control and can audit.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#constitutionmd-hard-guardrails","level":2,"title":"CONSTITUTION.md: Hard Guardrails","text":"
CONSTITUTION.md defines rules the AI must treat as inviolable. These are not suggestions or best practices - they are constraints that override task requirements.
# Constitution\n\nThese rules are INVIOLABLE. If a task requires violating these,\nthe task is wrong.\n\n* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] All public API changes require a decision record\n* [ ] Never delete context files without explicit user approval\n
The AI reads these at session start, before anything else. A well-integrated agent will refuse a task that conflicts with a constitutional rule, citing the specific rule it would violate.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-agent-playbooks-anti-hallucination-rules","level":2,"title":"The Agent Playbook's Anti-Hallucination Rules","text":"
The AGENT_PLAYBOOK.md file includes a section called \"How to Avoid Hallucinating Memory\" with five explicit rules:
Never assume. If it is not in the context files, you do not know it.
Never invent history. Do not claim \"we discussed\" something without a file reference.
Verify before referencing. Search files before citing them.
When uncertain, say so. \"I don't see a decision on this\" is always better than a fabricated one.
Trust files over intuition. If the files say PostgreSQL but your training data suggests MySQL, the files win.
These rules create a behavioral contract. The AI is not left to guess how confident it should be - it has explicit instructions to ground every claim in the context directory.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#drift-detection","level":2,"title":"Drift Detection","text":"
Context files can go stale. You rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist. Stale context is almost as dangerous as no context: the AI treats outdated information as current truth.
ctx drift detects this divergence:
ctx drift\n
It scans context files for references to files, paths, and symbols that no longer exist in the codebase. Stale references get flagged so you can update or remove them before they mislead the next session.
Regular drift checks - weekly, or after major refactors - keep your context files honest the same way tests keep your code honest.
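The core of such a check is easy to picture. The sketch below is illustrative only (not ctx's implementation; the reference pattern is an assumption): it collects backtick-quoted file references from a context file and flags any that no longer exist under the project root.

```python
import re
from pathlib import Path

# Sketch of a drift check: find path-like references in context text and
# report the ones that no longer exist on disk.
PATH_RE = re.compile(r"`([\w./-]+\.(?:go|md|py|json|yaml))`")

def stale_references(context_text: str, root: Path) -> list[str]:
    refs = PATH_RE.findall(context_text)
    return [ref for ref in refs if not (root / ref).exists()]

text = "The loader lives in `internal/loader/loader.go`; see also `docs/old-name.md`."
# Against a root where neither file exists, both references are flagged:
print(stale_references(text, Path("/nonexistent")))
```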
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-verification-loop","level":2,"title":"The Verification Loop","text":"
The /ctx-commit skill includes a built-in verification step: before staging, it maps claims to evidence and runs self-audit questions to surface gaps. This catches inconsistencies at the point where they matter most - right before code is committed.
This closes the loop. You write context. The AI reads context. The verification step confirms that context still matches reality. When it does not, you fix it - and the next session starts from truth, not from drift.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#trust-through-structure","level":2,"title":"Trust Through Structure","text":"
The common thread across all of these mechanisms is structure over prose. Timestamps make claims verifiable. Constitutional rules make boundaries explicit. Drift detection makes staleness visible. The playbook makes behavioral expectations concrete.
You do not need to trust the AI. You need to trust the system - and verify when it matters.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#further-reading","level":2,"title":"Further Reading","text":"
Detecting and Fixing Drift: the full workflow for keeping context files accurate
Invariants: the properties that must hold for any valid ctx implementation
Agent Security: threat model and mitigations for AI agents operating with persistent context
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/prompting-guide/","level":1,"title":"Prompting Guide","text":"
New to ctx?
This guide references context files like TASKS.md, DECISIONS.md, and LEARNINGS.md:
These are plain Markdown files that ctx maintains in your project's .context/ directory.
If terms like \"context packet\" or \"session ceremony\" are unfamiliar,
start with the ctx Manifesto for the why,
About for the big picture,
then Getting Started to set up your first project.
This guide is about crafting effective prompts for working with AI assistants in ctx-enabled projects, but the guidelines given here apply to other AI systems, too.
The right prompt triggers the right behavior.
This guide documents prompts that reliably produce good results.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#tldr","level":2,"title":"TL;DR","text":"Goal Prompt Load context \"Do you remember?\" Resume work \"What's the current state?\" What's next /ctx-next Debug \"Why doesn't X work?\" Validate \"Is this consistent with our decisions?\" Impact analysis \"What would break if we...\" Reflect /ctx-reflect Wrap up /ctx-wrap-up Persist \"Add this as a learning\" Explore \"How does X work in this codebase?\" Sanity check \"Is this the right approach?\" Completeness \"What am I missing?\" One more thing \"What's the single smartest addition?\" Set tone \"Push back if my assumptions are wrong.\" Constrain scope \"Only change files in X. Nothing else.\" Course correct \"Stop. That's not what I meant.\" Check health \"Run ctx drift\" Commit /ctx-commit","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#session-start","level":2,"title":"Session Start","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#do-you-remember","level":3,"title":"\"Do you remember?\"","text":"
Triggers the AI to silently read TASKS.md, DECISIONS.md, LEARNINGS.md, and check recent history via ctx journal before responding with a structured readback:
Last session: most recent session topic and date
Active work: pending or in-progress tasks
Recent context: 1-2 recent decisions or learnings
Next step: offer to continue or ask what to focus on
Use this at the start of every important session.
Do you remember what we were working on?\n
This question implies prior context exists. The AI checks files rather than admitting ignorance. The expected response cites specific context (session names, task counts, decisions), not vague summaries.
If the AI instead narrates its discovery process (\"Let me check if there are files...\"), it has not loaded CLAUDE.md or AGENT_PLAYBOOK.md properly.
For a detailed case study on making agents actually follow this protocol (including the failure modes, the timing problem, and the hook design that solved it) see The Dog Ate My Homework.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#whats-the-current-state","level":3,"title":"\"What's the current state?\"","text":"
Prompts reading of TASKS.md, recent sessions, and status overview.
Use this when resuming work after a break.
Variants:
\"Where did we leave off?\"
\"What's in progress?\"
\"Show me the open tasks.\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#during-work","level":2,"title":"During Work","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#why-doesnt-x-work","level":3,"title":"\"Why doesn't X work?\"","text":"
This triggers root cause analysis rather than surface-level fixes.
Use this when something fails unexpectedly.
Framing as \"why\" encourages investigation before action. The AI will trace through code, check configurations, and identify the actual cause.
Real Example
\"Why can't I run /ctx-reflect?\" led to discovering missing permissions in settings.local.json bootstrapping.
This was a fix that benefited all users of ctx.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#is-this-consistent-with-our-decisions","level":3,"title":"\"Is this consistent with our decisions?\"","text":"
This prompts checking DECISIONS.md before implementing.
Use this before making architectural choices.
Variants:
\"Check if we've decided on this before\"
\"Does this align with our conventions?\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-would-break-if-we","level":3,"title":"\"What would break if we...\"","text":"
This triggers defensive thinking and impact analysis.
Use this before making significant changes.
What would break if we change the Settings struct?\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#before-you-start-read-x","level":3,"title":"\"Before you start, read X\"","text":"
This ensures specific context is loaded before work begins.
Use this when you know the relevant context exists in a specific file.
Before you start, check ctx journal source for the auth discussion session\n
When the AI misbehaves, match the symptom to the recovery prompt:
Symptom Recovery prompt Hand-waves (\"should work now\") \"Show evidence: file/line refs, command output, or test name.\" Creates unnecessary files \"No new files. Modify the existing implementation.\" Expands scope unprompted \"Stop after the smallest working change. Ask before expanding scope.\" Narrates instead of acting \"Skip the explanation. Make the change and show the diff.\" Repeats a failed approach \"That didn't work last time. Try a different approach.\" Claims completion without proof \"Run the test. Show me the output.\"
These are recovery handles, not rules to paste into CLAUDE.md.
Use them in the moment when you see the behavior.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#reflection-and-persistence","level":2,"title":"Reflection and Persistence","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-did-we-learn","level":3,"title":"\"What did we learn?\"","text":"
This prompts reflection on the session and often triggers adding learnings to LEARNINGS.md.
Use this after completing a task or debugging session.
This is an explicit reflection prompt. The AI will summarize insights and often offer to persist them.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#add-this-as-a-learningdecision","level":3,"title":"\"Add this as a learning/decision\"","text":"
This is an explicit persistence request.
Use this when you have discovered something worth remembering.
Add this as a learning: \"JSON marshal escapes angle brackets by default\"\n\n# or simply.\nAdd this as a learning.\n# and let the AI autonomously infer and summarize.\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#save-context-before-we-end","level":3,"title":"\"Save context before we end\"","text":"
This triggers context persistence before the session closes.
Use it at the end of the session or before switching topics.
Variants:
\"Let's persist what we did\"
\"Update the context files\"
/ctx-wrap-up: the recommended end-of-session ceremony (see Session Ceremonies)
/ctx-reflect: mid-session reflection checkpoint
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#exploration-and-research","level":2,"title":"Exploration and Research","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#explore-the-codebase-for-x","level":3,"title":"\"Explore the codebase for X\"","text":"
This triggers thorough codebase search rather than guessing.
Use this when you need to understand how something works.
This works because \"Explore\" signals that investigation is needed, not immediate action.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#how-does-x-work-in-this-codebase","level":3,"title":"\"How does X work in this codebase?\"","text":"
This prompts reading actual code rather than explaining general concepts.
Use this to understand the existing implementation.
How does session saving work in this codebase?\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#find-all-places-where-x","level":3,"title":"\"Find all places where X\"","text":"
This triggers a comprehensive search across the codebase.
Use this before refactoring or understanding the impact.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#meta-and-process","level":2,"title":"Meta and Process","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-should-we-document-from-this","level":3,"title":"\"What should we document from this?\"","text":"
This prompts identifying learnings, decisions, and conventions worth persisting.
Use this after complex discussions or implementations.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#is-this-the-right-approach","level":3,"title":"\"Is this the right approach?\"","text":"
This invites the AI to challenge the current direction.
Use this when you want a sanity check.
This works because it allows AI to disagree.
AIs often default to agreeing; this prompt signals you want an honest assessment.
Stronger variant: \"Push back if my assumptions are wrong.\" This sets the tone for the entire session: The AI will flag questionable choices proactively instead of waiting to be asked.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-am-i-missing","level":3,"title":"\"What am I missing?\"","text":"
This prompts thinking about edge cases, overlooked requirements, or unconsidered approaches.
Use this before finalizing a design or implementation.
Forward-looking variant: \"What's the single smartest addition you could make to this at this point?\" Use this after you think you're done: It surfaces improvements you wouldn't have thought to ask for. The constraint to one thing prevents feature sprawl.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#cli-commands-as-prompts","level":2,"title":"CLI Commands as Prompts","text":"
Asking the AI to run ctx commands is itself a prompt. These load context or trigger specific behaviors:
Command What it does \"Run ctx status\" Shows context summary, file presence, staleness \"Run ctx agent\" Loads token-budgeted context packet \"Run ctx drift\" Detects dead paths, stale files, missing context","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#ctx-skills","level":3,"title":"ctx Skills","text":"
The SKILL.md Standard
Skills are formalized prompts stored as SKILL.md files.
The /slash-command syntax below is Claude Code specific.
Other agents can use the same skill files, but invocation may differ.
Use ctx skills by name:
Skill When to use /ctx-status Quick context summary /ctx-agent Load full context packet /ctx-remember Recall project context and structured readback /ctx-wrap-up End-of-session context persistence /ctx-history Browse session history for past discussions /ctx-reflect Structured reflection checkpoint /ctx-next Suggest what to work on next /ctx-commit Commit with context persistence /ctx-drift Detect and fix context drift /ctx-implement Execute a plan step-by-step with verification /ctx-loop Generate autonomous loop script /ctx-pad Manage encrypted scratchpad /ctx-archive Archive completed tasks /check-links Audit docs for dead links
Ceremony vs. Workflow Skills
Most skills work conversationally: \"what should we work on?\" triggers /ctx-next, \"save that as a learning\" triggers /ctx-add-learning. Natural language is the recommended approach.
Two skills are the exception: /ctx-remember and /ctx-wrap-up are ceremony skills for session boundaries. Invoke them as explicit slash commands; conversational triggers risk partial execution. See Session Ceremonies.
Skills combine a prompt, tool permissions, and domain knowledge into a single invocation.
Skills Beyond Claude Code
The /slash-command syntax above is Claude Code native, but the underlying SKILL.md files are a standard markdown format that any agent can consume. If you use a different coding agent, consult its documentation for how to load skill files as prompt templates.
Based on our ctx development experience (i.e., \"sipping our own champagne\") so far, here are some prompts that tend to produce poor results:
Prompt Problem Better Alternative \"Fix this\" Too vague, may patch symptoms \"Why is this failing?\" \"Make it work\" Encourages quick hacks \"What's the right way to solve this?\" \"Just do it\" Skips planning \"Plan this, then implement\" \"You should remember\" Confrontational \"Do you remember?\" \"Obviously...\" Discourages questions State the requirement directly \"Idiomatic X\" Triggers language priors \"Follow project conventions\" \"Implement everything\" No phasing, sprawl risk Break into tasks, implement one at a time \"You should know this\" Assumes context is loaded \"Before you start, read X\"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#reliability-checklist","level":2,"title":"Reliability Checklist","text":"
Before sending a non-trivial prompt, check these four elements. This is the guide's DNA in one screenful.
Goal in one sentence: What does \"done\" look like?
Files to read: What existing code or context should the AI review before acting?
Verification command: How will you prove it worked? (test name, CLI command, expected output)
Scope boundary: What should the AI not touch?
A prompt that covers all four is almost always good enough.
A prompt missing #3 is how you get \"should work now\" without evidence.
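Put together, a checklist-complete prompt looks something like this (the file paths and test command are placeholders for your project's own):

```text
Goal: the CLI exits non-zero when TASKS.md is missing.
Read first: the CLI status command and the context-file loader.
Verify: go test ./... passes, including a new test for the missing-file case.
Scope: only touch the CLI package. Do not modify context files or other packages.
```

Four lines, and the AI knows what done means, where to look, how to prove it, and where to stop.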
A prompting guide earns its trust by being honest about risk.
The four rules below don't change with model versions, agent frameworks, or project size.
Build them into your workflow once and stop thinking about them.
Tool-using agents can read files, run commands, and modify your codebase. That power makes them useful. It also creates a trust boundary you should be aware of.
These invariants apply regardless of which agent or model you use.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#treat-the-repository-text-as-untrusted-input","level":3,"title":"Treat the Repository Text as \"Untrusted Input\"","text":"
Issue descriptions, PR comments, commit messages, documentation, and even code comments can contain text that looks like instructions. An agent that reads a GitHub issue and then runs a command found inside it is executing untrusted input.
The rule: Before running any command the agent found in repo text (issues, docs, comments), restate the command explicitly and confirm it does what you expect. Don't let the agent copy-paste from untrusted sources into a shell.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#ask-before-destructive-operations","level":3,"title":"Ask Before Destructive Operations","text":"
git push --force, rm -rf, DROP TABLE, docker system prune: these are irreversible or hard to reverse. A good agent should pause before running them, but don't rely on that.
The rule: For any operation that deletes data, overwrites history, or affects shared infrastructure, require explicit confirmation. If the agent runs something destructive without asking, that's a course-correction moment: \"Stop. Never run destructive commands without asking first.\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#scope-the-blast-radius","level":3,"title":"Scope the Blast Radius","text":"
An agent told to \"fix the tests\" might modify test fixtures, change assertions, or delete tests that inconveniently fail. An agent told to \"deploy\" might push to production. Broad mandates create broad risk.
The rule: Constrain scope before starting work. The Reliability Checklist's scope boundary (#4) is your primary safety lever. When in doubt, err on the side of a tighter boundary.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#secrets-never-belong-in-context","level":3,"title":"Secrets Never Belong in Context","text":"
LEARNINGS.md, DECISIONS.md, and session transcripts are plain-text files that may be committed to version control.
Don't persist API keys, passwords, tokens, or credentials in context files.
The rule: If the agent encounters a secret during work, it should use it transiently (read it from an environment variable, reference an alias rather than the literal value) and never write it to a context file.
Any Secret Seen IS Exposed
If you see a secret in a context file, remove it immediately and rotate the credential.
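As a sketch of the idea (not a ctx feature; the patterns below are examples only, and real scanners such as gitleaks or trufflehog cover far more formats), a minimal pre-commit scan over context files might look like:

```python
import re

# Example-only credential patterns; a real scanner needs a much larger set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                  # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
]

def find_secrets(text: str) -> list[str]:
    """Return every pattern match found in the given context-file text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

# AWS's documented example key, used here as harmless test data:
entry = "**Lesson**: Use AKIAIOSFODNN7EXAMPLE for the staging bucket."
print(find_secrets(entry))  # ['AKIAIOSFODNN7EXAMPLE'] -> block the commit, rotate the key
```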
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#explore-plan-implement","level":2,"title":"Explore → Plan → Implement","text":"
For non-trivial work, name the phase you want:
Explore src/auth and summarize the current flow.\nThen propose a plan. After I approve, implement with tests.\n
This prevents the AI from jumping straight to code.
The three phases map to different modes of thinking:
Explore: read, search, understand: no changes
Plan: propose approach, trade-offs, scope: no changes
Implement: write code, run tests, verify: changes
Small fixes skip straight to implement. Complex or uncertain work benefits from all three.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#prompts-by-task-type","level":2,"title":"Prompts by Task Type","text":"
Different tasks need different prompt structures. The pattern: symptom + location + verification.
Users report search returns empty results for queries with hyphens.\nReproduce in src/search/. Write a failing test for \"foo-bar\",\nfix the root cause, run: go test ./internal/search/...\n
Inspect src/auth/ and list duplication hotspots.\nPropose a refactor plan scoped to one module.\nAfter approval, remove duplication without changing behavior.\nAdd a test if coverage is missing. Run: make audit\n
Update docs/cli-reference.md to reflect the new --format flag.\nConfirm the flag exists in the code and the example works.\n
Notice each prompt includes what to verify and how. Without that, you get a \"should work now\" instead of evidence.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#writing-tasks-as-prompts","level":2,"title":"Writing Tasks as Prompts","text":"
Tasks in TASKS.md are indirect prompts to the AI. How you write them shapes how the AI approaches the work.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#state-the-motivation-not-just-the-goal","level":3,"title":"State the Motivation, Not Just the Goal","text":"
Tell the AI why you are building something, not just what.
Bad: \"Build a calendar view.\"
Good: \"Build a calendar view. The motivation is that all notes and tasks we build later should be viewable here.\"
The second version lets the AI anticipate downstream requirements:
It will design the calendar's data model to be compatible with future features, without you having to spell out every integration point. Motivation turns a one-off task into a directional one.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#state-the-deliverable-not-just-steps","level":3,"title":"State the Deliverable, Not Just Steps","text":"
For complex tasks, add explicit \"done when\" criteria:
- [ ] T2.0: Authentication system\n **Done when**:\n - [ ] User can register with email\n - [ ] User can log in and get a token\n - [ ] Protected routes reject unauthenticated requests\n
This prevents a premature "task complete" when the implementation details are done but the feature doesn't actually work.
Completing all subtasks does not mean the parent task is complete.
The parent task describes what the user gets.
Subtasks describe how to build it.
Always re-read the parent task description before marking it complete. Verify the stated deliverable exists and works.
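This verification can be partially automated. The sketch below assumes a simple TASKS.md shape (top-level tasks with indented "done when" checkboxes; this is not ctx's actual parser) and flags parents marked done while criteria remain open:

```python
import re

# Sketch: a parent marked [x] with any indented [ ] item is a premature
# completion under the assumed task layout.
def premature_completions(tasks_text: str) -> list[str]:
    flagged, parent, parent_done = [], None, False
    for line in tasks_text.splitlines():
        top = re.match(r"- \[(x| )\] (.+)", line)      # top-level task line
        sub = re.match(r"\s+- \[(x| )\]", line)        # indented criterion line
        if top:
            parent, parent_done = top.group(2), top.group(1) == "x"
        elif sub and parent_done and sub.group(1) == " ":
            if parent not in flagged:
                flagged.append(parent)
    return flagged

sample = """- [x] T2.0: Authentication system
  - [x] User can register with email
  - [ ] Protected routes reject unauthenticated requests
"""
print(premature_completions(sample))  # ['T2.0: Authentication system']
```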
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#why-do-these-approaches-work","level":2,"title":"Why Do These Approaches Work?","text":"
The patterns in this guide aren't invented here: They are practitioner translations of well-established, peer-reviewed research, most of which predates the current AI (hype) wave.
The underlying ideas come from decades of work in machine learning, cognitive science, and numerical optimization. For a concrete case study showing how these principles play out when an agent decides whether to follow instructions (attention competition, optimization toward least-resistance paths, and observable compliance as a design goal) see The Dog Ate My Homework.
Phased work (\"Explore → Plan → Implement\") applies chain-of-thought reasoning: Decomposing a problem into sequential steps before acting. Forcing intermediate reasoning steps measurably improves output quality in language models, just as it does in human problem-solving. Wei et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022).
Root-cause prompts (\"Why doesn't X work?\") use step-back abstraction: Retreating to a higher-level question before diving into specifics. This mirrors how experienced engineers debug: they ask \"what should happen?\" before asking \"what went wrong?\" Zheng et al., Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models (2023).
Exploring alternatives (\"Propose 2-3 approaches\") leverages self-consistency: Generating multiple independent reasoning paths and selecting the most coherent result. The idea traces back to ensemble methods in ML: A committee of diverse solutions outperforms any single one. Wang et al., Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022).
Impact analysis (\"What would break if we...\") is a form of tree-structured exploration: Branching into multiple consequence paths before committing. This is the same principle behind game-tree search (minimax, MCTS) that has powered decision-making systems since the 1950s. Yao et al., Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023).
Motivation prompting (\"Build X because Y\") works through goal conditioning: Providing the objective function alongside the task. In optimization terms, you are giving the gradient direction, not just the loss. The model can make locally coherent decisions that serve the global objective because it knows what \"better\" means.
Scope constraints (\"Only change files in X\") apply constrained optimization: Bounding the search space to prevent divergence. This is the same principle behind regularization in ML: Without boundaries, powerful optimizers find solutions that technically satisfy the objective but are practically useless.
CLI commands as prompts (\"Run ctx status\") interleave reasoning with acting: The model thinks, acts on external tools, observes results, then thinks again. Grounding reasoning in real tool output reduces hallucination because the model can't ignore evidence it just retrieved. Yao et al., ReAct: Synergizing Reasoning and Acting in Language Models (2022).
Task decomposition (\"Prompts by Task Type\") applies least-to-most prompting: Breaking a complex problem into subproblems and solving them sequentially, each building on the last. This is the research version of \"plan, then implement one slice.\" Zhou et al., Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2022).
Explicit planning (\"Explore → Plan → Implement\") is directly supported by plan-and-solve prompting, which addresses missing-step failures in zero-shot reasoning by extracting a plan before executing. The phased structure prevents the model from jumping to code before understanding the problem. Wang et al., Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models (2023).
Session reflection (\"What did we learn?\", /ctx-reflect) is a form of verbal reinforcement learning: Improving future performance by persisting linguistic feedback as memory rather than updating weights. This is exactly what LEARNINGS.md and DECISIONS.md provide: a durable feedback signal across sessions. Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning (2023).
These aren't prompting \"hacks\" that you will find in the \"1000 AI Prompts for the Curious\" listicles: They are applications of foundational principles:
Decomposition,
Abstraction,
Ensemble Reasoning,
Search,
and Constrained Optimization.
They work because language models are, at their core, optimization systems navigating probabilistic landscapes.
The Attention Budget: Why your AI forgets what you just told it, and how token budgets shape context strategy
The Dog Ate My Homework: A case study in making agents follow instructions: attention timing, delegation decay, and observable compliance as a design goal
Found a prompt that works well? Open an issue or PR with:
The prompt text;
What behavior it triggers;
When to use it;
Why it works (optional but helpful).
Dive Deeper:
Recipes: targeted how-to guides for specific tasks
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/repeated-mistakes/","level":1,"title":"My AI Keeps Making the Same Mistakes","text":"","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#the-problem","level":2,"title":"The Problem","text":"
You found a bug last Tuesday. You debugged it, understood the root cause, and moved on. Today, a new session hits the exact same bug. The AI rediscovers it from scratch, burning twenty minutes on something you already solved.
Worse: you spent an hour last week evaluating two database migration strategies, picked one, documented why in a comment somewhere, and now the AI is cheerfully suggesting the approach you rejected. Again.
This is not a model problem. It is a memory problem. Without persistent context, every session starts with amnesia.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#how-ctx-stops-the-loop","level":2,"title":"How ctx Stops the Loop","text":"
ctx gives your AI three files that directly prevent repeated mistakes, each targeting a different failure mode.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#decisionsmd-stop-relitigating-settled-choices","level":3,"title":"DECISIONS.md: Stop Relitigating Settled Choices","text":"
When you make an architectural decision, record it with rationale and rejected alternatives. The AI reads this at session start and treats it as settled.
## [2026-02-12] Use JWT for Authentication\n\n**Status**: Accepted\n\n**Context**: Need stateless auth for the API layer.\n\n**Decision**: JWT with short-lived access tokens and refresh rotation.\n\n**Rationale**: Stateless, scales horizontally, team has prior experience.\n\n**Alternatives Considered**:\n- Session-based auth: Rejected. Requires sticky sessions or shared store.\n- API keys only: Rejected. No user identity, no expiry rotation.\n
Next session, when the AI considers auth, it reads this entry and builds on the decision instead of re-debating it. If someone asks \"why not sessions?\", the rationale is already there.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#learningsmd-capture-gotchas-once","level":3,"title":"LEARNINGS.md: Capture Gotchas Once","text":"
Learnings are the bugs, quirks, and non-obvious behaviors that cost you time the first time around. Write them down so they cost you zero time the second time.
## Build\n\n### CGO Required for SQLite on Alpine\n\n**Discovered**: 2026-01-20\n\n**Context**: Docker build failed silently with \"no such table\" at runtime.\n\n**Lesson**: The go-sqlite3 driver requires CGO_ENABLED=1 and gcc\ninstalled in the build stage. Alpine needs apk add build-base.\n\n**Application**: Always use the golang:alpine image with build-base\nfor SQLite builds. Never set CGO_ENABLED=0.\n
Without this entry, the next session that touches the Dockerfile will hit the same wall. With it, the AI knows before it starts.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#constitutionmd-draw-hard-lines","level":3,"title":"CONSTITUTION.md: Draw Hard Lines","text":"
Some mistakes are not about forgetting; they are about boundaries the AI should never cross. CONSTITUTION.md sets inviolable rules.
* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] Never disable security linters without a documented exception\n* [ ] All database migrations must be reversible\n
The AI reads these as absolute constraints. It does not weigh them against convenience. It refuses tasks that would violate them.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#the-accumulation-effect","level":2,"title":"The Accumulation Effect","text":"
Each of these files grows over time. Session one captures two decisions. Session five adds a tricky learning about timezone handling. Session twelve records a convention about error message formatting.
By session twenty, your AI has a knowledge base that no single person carries in their head. New team members, human or AI, inherit it instantly.
The key insight: you are not just coding. You are building a knowledge layer that makes every future session faster.
ctx files version with your code in git. They survive branch switches, team changes, and model upgrades. The context outlives any single session.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#getting-started","level":2,"title":"Getting Started","text":"
Capture your first decision or learning right now:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a relational database for the project\" \\\n --rationale \"Team expertise, JSONB support, mature ecosystem\"\n\nctx add learning \"Vitest mock hoisting\" \\\n --context \"Tests failing intermittently\" \\\n --lesson \"vi.mock() must be at file top level\" \\\n --application \"Use vi.doMock() for dynamic mocks\"\n
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#further-reading","level":2,"title":"Further Reading","text":"
Knowledge Capture: the full workflow for persisting decisions, learnings, and conventions
Context Files Reference: structure and format for every file in .context/
About ctx: the bigger picture of why persistent context changes how you work with AI
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"operations/","level":1,"title":"Operations","text":"
Guides for installing, upgrading, integrating, and running ctx.
Run an unattended AI agent that works through tasks overnight, with ctx providing persistent memory between iterations.
","path":["Operations"],"tags":[]},{"location":"operations/autonomous-loop/","level":1,"title":"Autonomous Loops","text":"","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#autonomous-ai-development","level":2,"title":"Autonomous AI Development","text":"
Iterate until done.
An autonomous loop is an iterative AI development workflow where an agent works on tasks until completion, without constant human intervention.
Two ingredients make this work:
ctx provides the memory: persistent context that survives across iterations
The loop provides the automation: continuous execution until done
Together, they enable fully autonomous AI development where the agent remembers everything across iterations.
Origin
This pattern is inspired by Geoffrey Huntley's Ralph Wiggum technique.
We use generic terminology here so the concepts remain clear regardless of trends.
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#how-it-works","level":2,"title":"How It Works","text":"
graph TD\n A[Start Loop] --> B[Load .context/loop.md]\n B --> C[AI reads .context/]\n C --> D[AI picks task from TASKS.md]\n D --> E[AI completes task]\n E --> F[AI updates context files]\n F --> G[AI commits changes]\n G --> H{Check signals}\n H -->|SYSTEM_CONVERGED| I[Done - all tasks complete]\n H -->|SYSTEM_BLOCKED| J[Done - needs human input]\n H -->|Continue| B
Loop reads .context/loop.md and invokes AI
AI loads context from .context/
AI picks one task and completes it
AI updates context files (mark task done, add learnings)
AI commits changes
Loop checks for completion signals
Repeat until converged or blocked
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#quick-start-shell-while-loop-recommended","level":2,"title":"Quick Start: Shell While Loop (Recommended)","text":"
The best way to run an autonomous loop is a plain shell script that invokes your AI tool in a fresh process on each iteration. This is \"pure Ralph\":
The only state that carries between iterations is what lives in .context/ and the git history. No context window bleed, no accumulated tokens, no hidden state.
Create a loop.sh:
#!/bin/bash\n# loop.sh: an autonomous iteration loop\n\nPROMPT_FILE=\"${1:-.context/loop.md}\"\nMAX_ITERATIONS=\"${2:-10}\"\nOUTPUT_FILE=\"/tmp/loop_output.txt\"\n\nfor i in $(seq 1 $MAX_ITERATIONS); do\n echo \"=== Iteration $i ===\"\n\n # Invoke AI with prompt\n cat \"$PROMPT_FILE\" | claude --print > \"$OUTPUT_FILE\" 2>&1\n\n # Display output\n cat \"$OUTPUT_FILE\"\n\n # Check for completion signals\n if grep -q \"SYSTEM_CONVERGED\" \"$OUTPUT_FILE\"; then\n echo \"Loop complete: All tasks done\"\n break\n fi\n\n if grep -q \"SYSTEM_BLOCKED\" \"$OUTPUT_FILE\"; then\n echo \"Loop blocked: Needs human input\"\n break\n fi\n\n sleep 2\ndone\n
Make it executable and run:
chmod +x loop.sh\n./loop.sh\n
You can also generate this script with ctx loop (see CLI Reference).
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#why-do-we-use-a-shell-loop","level":3,"title":"Why Do We Use a Shell Loop?","text":"
Each iteration starts a fresh AI process with zero context window history. The agent knows only what it reads from .context/ files: Exactly the information you chose to persist.
This is the core loop principle: memory is explicit, not accidental.
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#alternative-claude-codes-built-in-loop","level":2,"title":"Alternative: Claude Code's Built-in Loop","text":"
Claude Code also ships a built-in /loop command. It is convenient for quick iterations, but be aware of some important caveats:
This Loop Is Not Pure
Claude Code's /loop runs all iterations within the same session. This means:
State leaks between iterations: The context window accumulates output from every previous iteration. The agent \"remembers\" things it saw earlier (even if they were never persisted to .context/).
Token budget degrades: Each iteration adds to the context window, leaving less room for actual work in later iterations.
Not ergonomic for long runs: Users report that the built-in loop is less predictable for 10+ iteration runs compared to a shell loop.
For short explorations (2-5 iterations) or interactive use, /loop works fine. For overnight unattended runs or anything where iteration independence matters, use the shell while loop instead.
The prompt file instructs the AI on how to work autonomously. Here's a template:
# Autonomous Development Prompt\n\nYou are working on this project autonomously. Follow these steps:\n\n## 1. Load Context\n\nRead these files in order:\n\n1. `.context/CONSTITUTION.md`: NEVER violate these rules\n2. `.context/TASKS.md`: Find work to do\n3. `.context/CONVENTIONS.md`: Follow these patterns\n4. `.context/DECISIONS.md`: Understand past choices\n\n## 2. Pick One Task\n\nFrom `.context/TASKS.md`, select ONE task that is:\n\n- Not blocked\n- Highest priority available\n- Within your capabilities\n\n## 3. Complete the Task\n\n- Write code following conventions\n- Run tests if applicable\n- Keep changes focused and minimal\n\n## 4. Update Context\n\nAfter completing work:\n\n- Mark task complete in `TASKS.md`\n- Add any learnings to `LEARNINGS.md`\n- Add any decisions to `DECISIONS.md`\n\n## 5. Commit Changes\n\nCreate a focused commit with clear message.\n\n## 6. Signal Status\n\nEnd your response with exactly ONE of:\n\n- `SYSTEM_CONVERGED`: All tasks in TASKS.md are complete\n- `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n- (no signal): More work remains, continue to next iteration\n\n## Rules\n\n- ONE task per iteration\n- NEVER skip tests\n- NEVER violate CONSTITUTION.md\n- Commit after each task\n
Signal Meaning When to Use SYSTEM_CONVERGED All tasks complete No pending tasks in TASKS.md SYSTEM_BLOCKED Cannot proceed Needs clarification, access, or decision BOOTSTRAP_COMPLETE Initial setup done Project scaffolding finished","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#example-usage","level":3,"title":"Example Usage","text":"
converged state
I've completed all tasks in TASKS.md:\n- [x] Set up project structure\n- [x] Implement core API\n- [x] Add authentication\n- [x] Write tests\n\nNo pending tasks remain.\n\nSYSTEM_CONVERGED\n
blocked state
I cannot proceed with the \"Deploy to production\" task because:\n- Missing AWS credentials\n- Need confirmation on region selection\n\nPlease provide credentials and confirm deployment region.\n\nSYSTEM_BLOCKED\n
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#why-ctx-and-loops-work-well-together","level":2,"title":"Why ctx and Loops Work Well Together","text":"Without ctx With ctx Each iteration starts fresh Each iteration has full history Decisions get re-made Decisions persist in DECISIONS.md Learnings are lost Learnings accumulate in LEARNINGS.md Tasks can be forgotten Tasks tracked in TASKS.md","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#automatic-context-updates","level":3,"title":"Automatic Context Updates","text":"
During the loop, the AI should update context files and end every response with an explicit status signal:
End EVERY response with one of:\n- SYSTEM_CONVERGED (if all tasks done)\n- SYSTEM_BLOCKED (if stuck)\n
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#context-not-persisting","level":3,"title":"Context Not Persisting","text":"
Cause: AI not updating context files
Fix: Add explicit instructions to .context/loop.md:
After completing a task, you MUST:\n1. Run: ctx task complete \"<task>\"\n2. Add learnings: ctx add learning \"...\"\n
Cause: Task not marked complete before next iteration
Fix: Ensure commit happens after context update:
Order of operations:\n1. Complete coding work\n2. Update context files (`ctx task complete`, `ctx add`)\n3. Commit **ALL** changes including `.context/`\n4. Then signal status\n
# From the ctx repository\nclaude /plugin install ./internal/assets/claude\n\n# Or from the marketplace\nclaude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
Ensure the Plugin Is Enabled
Installing a plugin registers it, but local installs may not auto-enable it globally. Verify ~/.claude/settings.json contains:
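A minimal sketch of what that entry typically looks like. Treat the key names as an assumption to verify against your Claude Code version; the plugin identifier matches the marketplace install shown above:

```json
{
  "enabledPlugins": {
    "ctx@activememory-ctx": true
  }
}
```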
Without this, the plugin's hooks and skills won't appear in other projects. Running ctx init auto-enables the plugin; use --no-plugin-enable to skip this step.
This gives you:
Component Purpose .context/ All context files CLAUDE.md Bootstrap instructions Plugin hooks Lifecycle automation Plugin skills Agent Skills","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#how-it-works","level":3,"title":"How It Works","text":"
graph TD\n A[Session Start] --> B[Claude reads CLAUDE.md]\n B --> C[PreToolUse hook runs]\n C --> D[ctx agent loads context]\n D --> E[Work happens]\n E --> F[Session End]
Session start: Claude reads CLAUDE.md, which tells it to check .context/
First tool use: PreToolUse hook runs ctx agent and emits the context packet (subsequent invocations within the cooldown window are silent)
Next session: Claude reads context files and continues with context
The ctx plugin provides lifecycle hooks implemented as Go subcommands (ctx system *):
Hook Event Purpose ctx system context-load-gate PreToolUse (.*) Auto-inject context on first tool use ctx system block-non-path-ctx PreToolUse (Bash) Block ./ctx or go run: force $PATH install ctx system qa-reminder PreToolUse (Bash) Remind agent to lint/test before committing ctx system specs-nudge PreToolUse (EnterPlanMode) Nudge agent to use project specs when planning ctx system check-context-size UserPromptSubmit Nudge context assessment as sessions grow ctx system check-ceremonies UserPromptSubmit Nudge /ctx-remember and /ctx-wrap-up adoption ctx system check-persistence UserPromptSubmit Remind to persist learnings/decisions ctx system check-journal UserPromptSubmit Remind to export/enrich journal entries ctx system check-reminders UserPromptSubmit Relay pending reminders at session start ctx system check-version UserPromptSubmit Warn when binary/plugin versions diverge ctx system check-resources UserPromptSubmit Warn when memory/swap/disk/load hit DANGER level ctx system check-knowledge UserPromptSubmit Nudge when knowledge files grow large ctx system check-map-staleness UserPromptSubmit Nudge when ARCHITECTURE.md is stale ctx system heartbeat UserPromptSubmit Session-alive signal with prompt count metadata ctx system post-commit PostToolUse (Bash) Nudge context capture and QA after git commits
A catch-all PreToolUse hook also runs ctx agent on every tool use (with cooldown) to autoload context.
The --session $PPID flag isolates the cooldown per session: $PPID resolves to the Claude Code process PID, so concurrent sessions don't interfere. The default cooldown is 10 minutes; use --cooldown 0 to disable it.
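Putting the two flags together, the hook's invocation looks roughly like the following sketch (flag values are illustrative; only --session and --cooldown are documented here):

```shell
# Load the context packet, scoping the cooldown to this session's
# parent process (the Claude Code PID). --cooldown 0 disables the
# silent window; the default is 10 minutes.
ctx agent --session $PPID --cooldown 0
```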
When developing ctx locally (adding skills, hooks, or changing plugin behavior), Claude Code caches the plugin by version. You must bump the version in both plugin.json and marketplace.json and update the marketplace for changes to take effect:
Start a new Claude Code session: skill changes aren't reflected in existing sessions.
Both Version Files Must Match
If you only bump plugin.json but not marketplace.json (or vice versa), Claude Code may not detect the update. Always bump both together.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#troubleshooting","level":3,"title":"Troubleshooting","text":"Issue Solution Context not loading Check ctx is in PATH: which ctx Hook errors Verify plugin is installed: claude /plugin list New skill not visible Bump version in both plugin.json files, update marketplace","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#manual-context-load","level":3,"title":"Manual Context Load","text":"
If hooks aren't working, manually load context:
# Get context packet\nctx agent --budget 4000\n\n# Or paste into conversation\ncat .context/TASKS.md\n
The ctx plugin ships Agent Skills following the agentskills.io specification.
These are invoked in Claude Code with /skill-name.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#session-lifecycle-skills","level":4,"title":"Session Lifecycle Skills","text":"Skill Description /ctx-remember Recall project context at session start (ceremony) /ctx-wrap-up End-of-session context persistence (ceremony) /ctx-status Show context summary (tasks, decisions, learnings) /ctx-agent Get AI-optimized context packet /ctx-next Suggest 1-3 concrete next actions from context /ctx-commit Commit with integrated context capture /ctx-reflect Review session and suggest what to persist /ctx-remind Manage session-scoped reminders /ctx-pause Pause context hooks for this session /ctx-resume Resume context hooks after a pause","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#context-persistence-skills","level":4,"title":"Context Persistence Skills","text":"Skill Description /ctx-add-task Add a task to TASKS.md /ctx-add-learning Add a learning to LEARNINGS.md /ctx-add-decision Add a decision with context/rationale/consequence /ctx-add-convention Add a coding convention to CONVENTIONS.md /ctx-archive Archive completed tasks","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#scratchpad-skills","level":4,"title":"Scratchpad Skills","text":"Skill Description /ctx-pad Manage encrypted scratchpad entries","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#session-history-skills","level":4,"title":"Session History Skills","text":"Skill Description /ctx-history Browse AI session history /ctx-journal-enrich Enrich a journal entry with frontmatter/tags /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#blogging-skills","level":4,"title":"Blogging Skills","text":"
Blogging is a Better Way of Creating Release Notes
The blogging workflow can also double as generating release notes:
AI reads your git commit history and turns it into a \"narrative\", which is essentially what a release note is.
Skill Description /ctx-blog Generate blog post from recent activity /ctx-blog-changelog Generate blog post from commit range with theme","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#auditing-health-skills","level":4,"title":"Auditing & Health Skills","text":"Skill Description /ctx-doctor Troubleshoot ctx behavior with structural health checks /ctx-drift Detect and fix context drift (structural + semantic) /ctx-consolidate Merge redundant learnings or decisions into denser entries /ctx-alignment-audit Audit doc claims against playbook instructions /ctx-prompt-audit Analyze session logs for vague prompts /check-links Audit docs for dead internal and external links","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#planning-execution-skills","level":4,"title":"Planning & Execution Skills","text":"Skill Description /ctx-loop Generate a Ralph Loop iteration script /ctx-implement Execute a plan step-by-step with checks /ctx-import-plans Import Claude Code plan files into project specs /ctx-worktree Manage git worktrees for parallel agents /ctx-architecture Build and maintain architecture maps","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#usage-examples","level":4,"title":"Usage Examples","text":"
// split to multiple lines for readability\n{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and \n .context/CONVENTIONS.md before responding. \n Follow rules in .context/CONSTITUTION.md.\",\n}\n
The --write flag creates .github/copilot-instructions.md, which Copilot reads automatically at the start of every session. This file contains your project's constitution rules, current tasks, conventions, and architecture: giving Copilot persistent context without manual copy-paste.
Re-run ctx setup copilot --write after updating your .context/ files to regenerate the instructions.
The ctx VS Code extension adds a @ctx chat participant to GitHub Copilot Chat, giving you direct access to all context commands from within the editor.
Typing @ctx without a command shows help with all available commands. The extension also supports natural language: asking @ctx about \"status\" or \"drift\" routes to the correct command automatically.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#configuration_2","level":4,"title":"Configuration","text":"Setting Default Description ctx.executablePathctx Path to the ctx binary. Set this if ctx is not in your PATH.","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#follow-up-suggestions","level":4,"title":"Follow-Up Suggestions","text":"
After each command, the extension suggests relevant next steps. For example, after /init it suggests /status and /hook; after /drift it suggests /sync.
ctx init creates a .context/sessions/ directory for storing session data from non-Claude tools. The Markdown session parser scans this directory during ctx journal, enabling session history for Copilot and other tools.
These patterns work without the extension, using Copilot's built-in file awareness:
Pattern 1: Keep context files open
Open .context/CONVENTIONS.md in a split pane. Copilot will reference it.
Pattern 2: Reference in comments
// See .context/CONVENTIONS.md for naming patterns\n// Following decision in .context/DECISIONS.md: Use PostgreSQL\n\nfunction getUserById(id: string) {\n // Copilot now has context\n}\n
Pattern 3: Paste context into Copilot Chat
ctx agent --budget 2000\n
Paste output into Copilot Chat for context-aware responses.
// Split to multiple lines for readability\n{\n \"ai.customInstructions\": \"Always read .context/CONSTITUTION.md first. \n Check .context/TASKS.md for current work. \n Follow patterns in .context/CONVENTIONS.md.\"\n}\n
You are working on a project with persistent context in .context/\n\nBefore responding:\n1. Read .context/CONSTITUTION.md - NEVER violate these rules\n2. Check .context/TASKS.md for current work\n3. Follow .context/CONVENTIONS.md patterns\n4. Reference .context/DECISIONS.md for architectural choices\n\nWhen you learn something new, note it for .context/LEARNINGS.md\nWhen you make a decision, document it for .context/DECISIONS.md\n
<context-update type=\"task\">Implement rate limiting</context-update>\n<context-update type=\"convention\">Use kebab-case for files</context-update>\n<context-update type=\"complete\">rate limiting</context-update>\n
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#structured-format-learnings-decisions","level":3,"title":"Structured Format (learnings, decisions)","text":"
Learnings and decisions support structured attributes for better documentation:
Learning with full structure:
<context-update type=\"learning\"\n context=\"Debugging Claude Code hooks\"\n lesson=\"Hooks receive JSON via stdin, not environment variables\"\n application=\"Parse JSON stdin with the host language (Go, Python, etc.): no jq needed\"\n>Hook Input Format</context-update>\n
Decision with full structure:
<context-update type=\"decision\"\n context=\"Need a caching layer for API responses\"\n rationale=\"Redis is fast, well-supported, and team has experience\"\n consequence=\"Must provision Redis infrastructure; team training on Redis patterns\"\n>Use Redis for caching</context-update>\n
Learnings require: context, lesson, application attributes. Decisions require: context, rationale, consequence attributes. Updates missing required attributes are rejected with an error.
Skills That Fight the Platform: Common pitfalls in skill design that work against the host tool
The Anatomy of a Skill That Works: What makes a skill reliable: the E/A/R framework and quality gates
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/migration/","level":1,"title":"Integration","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#adopting-ctx-in-existing-projects","level":2,"title":"Adopting ctx in Existing Projects","text":"
Claude Code User?
You probably want the plugin instead of this page.
Install ctx from the marketplace (/plugin → search \"ctx\" → Install) and you're done: hooks, skills, and updates are handled for you.
See Getting Started for the full walkthrough.
This guide covers adopting ctx in existing projects regardless of which tools your team uses.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#quick-paths","level":2,"title":"Quick Paths","text":"You have... Command What happens Nothing (greenfield) ctx init Creates .context/, CLAUDE.md, permissions Existing CLAUDE.mdctx init --merge Backs up your file, inserts ctx block after the H1 Existing CLAUDE.md + ctx markers ctx init --force Replaces the ctx block, leaves your content intact .cursorrules / .aider.conf.ymlctx initctx ignores those files: they coexist cleanly Team repo, first adopter ctx init --merge && git add .context/ CLAUDE.md Initialize and commit for the team","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#existing-claudemd","level":2,"title":"Existing CLAUDE.md","text":"
This is the most common scenario:
You have a CLAUDE.md with project-specific instructions and don't want to lose them.
You Own CLAUDE.md
After initialization, CLAUDE.md is yours: edit it freely.
Add project instructions, remove sections you don't need, reorganize as you see fit.
The only part ctx manages is the block between the <!-- ctx:context --> and <!-- ctx:end --> markers; everything outside those markers is yours to change at any time.
If you remove the markers, nothing breaks: ctx simply treats the file as having no ctx content and will offer to merge again on the next ctx init.
When ctx init detects an existing CLAUDE.md, it checks for ctx markers (<!-- ctx:context --> ... <!-- ctx:end -->):
State Default behavior With --merge With --force No CLAUDE.md Creates from template Creates from template Creates from template Exists, no ctx markers Prompts to merge Auto-merges (no prompt) Auto-merges (no prompt) Exists, has ctx markers Skips (already set up) Skips Replaces the ctx block only","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#the-merge-flag","level":3,"title":"The --merge Flag","text":"
--merge auto-merges without prompting. The merge process:
Backs up your existing CLAUDE.md to CLAUDE.md.<timestamp>.bak;
Finds the H1 heading (e.g., # My Project) in your file;
Inserts the ctx block immediately after it;
Preserves everything else untouched.
Your content before and after the ctx block remains exactly as it was.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#before-after-example","level":3,"title":"Before / After Example","text":"
Before: your existing CLAUDE.md:
# My Project\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
After ctx init --merge:
# My Project\n\n<!-- ctx:context -->\n<!-- DO NOT REMOVE: This marker indicates ctx-managed content -->\n\n## IMPORTANT: You Have Persistent Memory\n\nThis project uses Context (`ctx`) for context persistence across sessions.\n...\n\n<!-- ctx:end -->\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
Your build commands and code style sections are untouched. The ctx block sits between markers and can be updated independently.
If your CLAUDE.md already has ctx markers (from a previous ctx init), the default behavior is to skip it. Use --force to replace the ctx block with the latest template: This is useful after upgrading ctx:
ctx init --force\n
This only replaces content between <!-- ctx:context --> and <!-- ctx:end -->. Your own content outside the markers is preserved. A timestamped backup is created before any changes.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#undoing-a-merge","level":3,"title":"Undoing a Merge","text":"
ctx doesn't touch tool-specific config files. It creates its own files (.context/, CLAUDE.md) and coexists with whatever you already have.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-does-ctx-create","level":3,"title":"What Does ctx Create?","text":"ctx creates ctx does NOT touch .context/ directory .cursorrulesCLAUDE.md (or merges into) .aider.conf.yml.claude/settings.local.json (seeded by ctx init; the plugin manages hooks and skills) .github/copilot-instructions.md.windsurfrules Any other tool-specific config
Claude Code hooks and skills are provided by the ctx plugin, installed from the Claude Code marketplace (/plugin → search \"ctx\" → Install).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#running-ctx-alongside-other-tools","level":3,"title":"Running ctx Alongside Other Tools","text":"
The .context/ directory is the source of truth. Tool-specific configs point to it:
Cursor: Reference .context/ files in your system prompt (see Cursor setup)
Aider: Add .context/ files to the read: list in .aider.conf.yml (see Aider setup)
Copilot: Keep .context/ files open or reference them in comments (see Copilot setup)
You can generate a tool-specific configuration with:
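Based on the ctx setup <tool> command referenced elsewhere in these docs, the invocations look like the following; the Copilot variant is documented above, the others are assumed from the tool names and may differ, so check each tool's setup page:

```shell
ctx setup cursor
ctx setup aider
ctx setup copilot --write   # writes .github/copilot-instructions.md
```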
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#migrating-content-into-context","level":3,"title":"Migrating Content Into .context/","text":"
If you have project knowledge scattered across .cursorrules or custom prompt files, consider migrating it:
Rules / invariants → .context/CONSTITUTION.md
Code patterns → .context/CONVENTIONS.md
Architecture notes → .context/ARCHITECTURE.md
Known issues / tips → .context/LEARNINGS.md
You don't need to delete the originals: ctx and tool-specific files can coexist. But centralizing in .context/ means every tool gets the same context.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#team-adoption","level":2,"title":"Team Adoption","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#context-is-designed-to-be-committed","level":3,"title":".context/ Is Designed to Be Committed","text":"
The context files (tasks, decisions, learnings, conventions, architecture) are meant to live in version control. However, some subdirectories are personal or sensitive and should not be committed.
ctx init automatically adds these .gitignore entries:
# Journals contain full session transcripts: personal, potentially large\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Runtime state and logs (ephemeral, machine-specific):\n.context/state/\n.context/logs/\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
With those in place, committing is straightforward:
# One person initializes\nctx init --merge\n\n# Commit context files (journals and keys are already gitignored)\ngit add .context/ CLAUDE.md\ngit commit -m \"Add ctx context management\"\ngit push\n
Teammates pull and immediately have context. No per-developer setup needed.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-about-claude","level":3,"title":"What About .claude/?","text":"
The .claude/ directory contains permissions that ctx init seeds. Hooks and skills are provided by the ctx plugin (not per-project files).
Context files are plain Markdown. Resolve conflicts the same way you would for any other documentation file:
# After a conflicting pull\ngit diff .context/TASKS.md # See both sides\n# Edit to keep both sets of tasks, then:\ngit add .context/TASKS.md\ngit commit\n
Common conflict scenarios:
TASKS.md: Two people added tasks: Keep both.
DECISIONS.md: Same decision recorded differently: Unify the entry.
CLAUDE.md instructions work immediately for Claude Code users;
Other tool users can adopt at their own pace using ctx setup <tool>;
Context files benefit everyone who reads them, even without tool integration.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verifying-it-worked","level":2,"title":"Verifying It Worked","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#check-status","level":3,"title":"Check Status","text":"
ctx status\n
You should see your context files listed with token counts and no warnings.
Start a new AI session and ask: \"Do you remember?\"
The AI should cite specific context:
Current tasks from .context/TASKS.md;
Recent decisions or learnings;
Session history (if you've had prior sessions);
If it responds with a generic "I don't have memory", check that ctx is in your PATH (which ctx) and that hooks are configured (see Troubleshooting).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verify-the-merge","level":3,"title":"Verify the Merge","text":"
If you used --merge, check that your original content is intact:
# Your original content should still be there\ncat CLAUDE.md\n\n# The ctx block should be between markers\ngrep -c \"ctx:context\" CLAUDE.md # Should print 1\ngrep -c \"ctx:end\" CLAUDE.md # Should print 1\n
","path":["Operations","Integration"],"tags":[]},{"location":"operations/release/","level":1,"title":"Cutting a Release","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#prerequisites","level":2,"title":"Prerequisites","text":"
Before you can cut a release you need:
Push access to origin (GitHub)
GPG signing configured (make gpg-test)
Go installed (version in go.mod)
Zensical installed (make site-setup)
A clean working tree (git status shows nothing to commit)
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#step-by-step","level":2,"title":"Step-by-Step","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#1-update-the-version-file","level":3,"title":"1. Update the VERSION File","text":"
echo \"0.9.0\" > VERSION\ngit add VERSION\ngit commit -m \"chore: bump version to 0.9.0\"\n
The VERSION file uses bare semver (0.9.0), no v prefix. The release script adds the v prefix for git tags.
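In script terms, the convention reduces to the following (a sketch only; the real logic lives inside the release script):

```shell
# Sketch of the VERSION -> tag naming convention. The release script
# does this internally; shown here only to illustrate the v prefix.
printf '0.9.0' > VERSION.example
TAG="v$(cat VERSION.example)"
echo "$TAG"              # → v0.9.0
rm -f VERSION.example
```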
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#2-generate-release-notes","level":3,"title":"2. Generate Release Notes","text":"
In Claude Code:
/_ctx-release-notes\n
This analyzes commits since the last tag and writes dist/RELEASE_NOTES.md. The release script refuses to proceed without this file.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#3-verify-docs-and-commit-any-remaining-changes","level":3,"title":"3. Verify Docs and Commit Any Remaining Changes","text":"
/ctx-check-links # audit docs for dead links\nmake audit # full check: fmt, vet, lint, style, test\ngit status # must be clean\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#4-run-the-release","level":3,"title":"4. Run the Release","text":"
make release\n
Or, if you are in a Claude Code session:
/_ctx-release\n
The release script does everything in order:
Step What happens 1 Reads VERSION, verifies release notes exist 2 Verifies working tree is clean 3 Updates version in 4 config files (plugin.json, marketplace.json, VS Code package.json + lock) 4 Updates download URLs in 3 doc files (index.md, getting-started.md, integrations.md) 5 Adds new row to versions.md 6 Rebuilds the documentation site (make site) 7 Commits all version and docs updates 8 Runs make test and make smoke 9 Builds binaries for all 6 platforms via hack/build-all.sh 10 Creates a signed git tag (v0.9.0) 11 Pushes the tag to origin 12 Updates and pushes the latest tag","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#5-github-ci-takes-over","level":3,"title":"5. GitHub CI Takes Over","text":"
Pushing a v* tag triggers .github/workflows/release.yml:
Checks out the tagged commit
Runs the full test suite
Builds binaries for all platforms
Creates a GitHub Release with auto-generated notes
Uploads binaries and SHA256 checksums
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#6-verify","level":3,"title":"6. Verify","text":"
GitHub Releases shows the new version
All 6 binaries are attached (linux/darwin x amd64/arm64, windows x amd64)
SHA256 files are attached
Release notes look correct
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#what-gets-updated-automatically","level":2,"title":"What Gets Updated Automatically","text":"
The release script updates 8 files so you do not have to:
File What changes internal/assets/claude/.claude-plugin/plugin.json Plugin version .claude-plugin/marketplace.json Marketplace version (2 fields) editors/vscode/package.json VS Code extension version editors/vscode/package-lock.json VS Code lock version (2 fields) docs/index.md Download URLs docs/home/getting-started.md Download URLs docs/operations/integrations.md VSIX filename version docs/reference/versions.md New version row + latest pointer
The Go binary version is injected at build time via -ldflags from the VERSION file. No source file needs editing.
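The mechanism looks roughly like this (the variable path main.version is an assumption; check the Makefile or hack/build-all.sh for the exact flag the build actually uses):

```shell
# Build-time version injection, sketched. The linker's -X flag sets a
# package-level string variable at link time; no source edit required.
VERSION="0.9.0"                        # in the real build: $(cat VERSION)
LDFLAGS="-X main.version=${VERSION}"
echo "$LDFLAGS"
# go build -ldflags "$LDFLAGS" -o ctx .
```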
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#build-targets-reference","level":2,"title":"Build Targets Reference","text":"Target What it does make release Full release (script + tag + push) make build Build binary for current platform make build-all Build all 6 platform binaries make test Unit tests make smoke Integration smoke tests make audit Full check (fmt + vet + lint + drift + docs + test) make site Rebuild documentation site","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#troubleshooting","level":2,"title":"Troubleshooting","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#release-notes-not-found","level":3,"title":"\"Release notes not found\"","text":"
ERROR: dist/RELEASE_NOTES.md not found.\n
Run /_ctx-release-notes in Claude Code first, or write dist/RELEASE_NOTES.md manually.
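A minimal manual version looks like this (headings and content are up to you; the documented check is only that the file exists):

```shell
# Create release notes by hand when the skill is not available.
mkdir -p dist
cat > dist/RELEASE_NOTES.md <<'EOF'
## Highlights

- Summarize the changes since the last tag here.
EOF
```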
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#working-tree-is-not-clean","level":3,"title":"\"Working tree is not clean\"","text":"
ERROR: Working tree is not clean.\n
Commit or stash all changes before running make release.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#tag-already-exists","level":3,"title":"\"Tag already exists\"","text":"
ERROR: Tag v0.9.0 already exists.\n
You cannot release the same version twice. Either bump VERSION to a new version, or delete the old tag if the release was incomplete:
git tag -d v0.9.0\ngit push origin :refs/tags/v0.9.0\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#ci-build-fails-after-tag-push","level":3,"title":"CI build fails after tag push","text":"
The tag is already published. Fix the issue, bump to a patch version (e.g. 0.9.1), and release again. Do not force-push tags that others may have already fetched.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/upgrading/","level":1,"title":"Upgrade","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade","level":2,"title":"Upgrade","text":"
New versions of ctx may ship updated permissions, CLAUDE.md directives, or plugin hooks and skills.
Claude Code User?
The marketplace can update skills, hooks, and prompts independently: /plugin → select ctx → Update now (or enable auto-update).
The ctx binary is separate: rebuild from source or download a new release when one is available, then run ctx init --force --merge. Knowledge files are preserved automatically.
# Plugin users (Claude Code)\n# /plugin → select ctx → Update now\n# Then update the binary and reinitialize:\nctx init --force --merge\n\n# From-source / manual users\n# install new ctx binary, then:\nctx init --force --merge\n# /plugin → select ctx → Update now (if using Claude Code)\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-changes-between-versions","level":2,"title":"What Changes Between Versions","text":"
ctx init generates two categories of files:
Category Examples Changes between versions? Infrastructure .claude/settings.local.json (permissions), ctx-managed sections in CLAUDE.md, ctx plugin (hooks + skills) Yes Knowledge .context/TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md, CONSTITUTION.md, AGENT_PLAYBOOK.md No: this is your data
Infrastructure is regenerated by ctx init and plugin updates. Knowledge files are yours and should never be overwritten.
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade-steps","level":2,"title":"Upgrade Steps","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#1-install-the-new-version","level":3,"title":"1. Install the New Version","text":"
Build from source or download the binary:
cd /path/to/ctx-source\ngit pull\nmake build\nsudo make install\nctx --version # verify\n
Then reinitialize with ctx init --force --merge. --force regenerates infrastructure files (permissions, ctx-managed sections in CLAUDE.md).
--merge preserves your content outside ctx markers.
Knowledge files (.context/TASKS.md, DECISIONS.md, etc.) are preserved automatically: ctx init only overwrites infrastructure, never your data.
Encryption key: The encryption key lives at ~/.ctx/.ctx.key (outside the project). Reinit does not affect it. If you have a legacy key at .context/.ctx.key or ~/.local/ctx/keys/, copy it manually (see Syncing Scratchpad Notes).
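A guarded one-time migration might look like this (paths are the ones named above; the chmod is standard key hygiene, not a ctx requirement):

```shell
# Copy a legacy project-local key to the new per-user location,
# only if one exists.
mkdir -p ~/.ctx
if [ -f .context/.ctx.key ]; then
  cp .context/.ctx.key ~/.ctx/.ctx.key
  chmod 600 ~/.ctx/.ctx.key
  echo "legacy key migrated"
else
  echo "no legacy key found"
fi
```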
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#3-update-the-ctx-plugin","level":3,"title":"3. Update the ctx Plugin","text":"
If you use Claude Code, update the plugin to get new hooks and skills:
Open /plugin in Claude Code.
Select ctx.
Click Update now.
Or enable auto-update so the plugin stays current without manual steps.
If you made manual backups, remove them once satisfied:
rm -rf .context.bak .claude.bak CLAUDE.md.bak\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-if-i-skip-the-upgrade","level":2,"title":"What If I Skip the Upgrade?","text":"
The old binary still works with your existing .context/ files. But you may miss:
New plugin hooks that enforce better practices or catch mistakes;
Updated skill prompts that produce better results;
New .gitignore entries for directories added in newer versions;
Bug fixes in the CLI itself.
The plugin and the binary can be updated independently. You can update the plugin (for new hooks/skills) even if you stay on an older binary, and vice versa.
Context files are plain Markdown: They never break between versions.
Workflow recipes combining ctx commands and skills to solve specific problems.
","path":["Recipes"],"tags":[]},{"location":"recipes/#getting-started","level":2,"title":"Getting Started","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#guide-your-agent","level":3,"title":"Guide Your Agent","text":"
How commands, skills, and conversational patterns work together. Train your agent to be proactive through ask, guide, reinforce.
","path":["Recipes"],"tags":[]},{"location":"recipes/#setup-across-ai-tools","level":3,"title":"Setup Across AI Tools","text":"
Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf. Includes shell completion, watch mode for non-native tools, and verification.
","path":["Recipes"],"tags":[]},{"location":"recipes/#keeping-context-in-a-separate-repo","level":3,"title":"Keeping Context in a Separate Repo","text":"
Store context files outside the project tree: in a private repo, shared directory, or anywhere else. Useful for open source projects with private context or multi-repo setups.
The two bookend rituals for every session: /ctx-remember at the start to load and confirm context, /ctx-wrap-up at the end to review the session and persist learnings, decisions, and tasks.
","path":["Recipes"],"tags":[]},{"location":"recipes/#browsing-and-enriching-past-sessions","level":3,"title":"Browsing and Enriching Past Sessions","text":"
Export your AI session history to a browsable journal site. Enrich entries with metadata and search across months of work.
Leave a message for your next session. Reminders surface automatically at session start and repeat until dismissed. Date-gate reminders to surface only after a specific date.
Silence all nudge hooks for a quick task that doesn't need ceremony overhead. Session-scoped: Other sessions are unaffected. Security hooks still fire.
","path":["Recipes"],"tags":[]},{"location":"recipes/#knowledge-tasks","level":2,"title":"Knowledge & Tasks","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#persisting-decisions-learnings-and-conventions","level":3,"title":"Persisting Decisions, Learnings, and Conventions","text":"
Record architectural decisions with rationale, capture gotchas and lessons learned, and codify conventions so they survive across sessions and team members.
","path":["Recipes"],"tags":[]},{"location":"recipes/#using-the-scratchpad","level":3,"title":"Using the Scratchpad","text":"
Use the encrypted scratchpad for quick notes, working memory, and sensitive values during AI sessions. Natural language in, encrypted storage out.
Uses: ctx pad, /ctx-pad, ctx pad show, ctx pad edit
","path":["Recipes"],"tags":[]},{"location":"recipes/#syncing-scratchpad-notes-across-machines","level":3,"title":"Syncing Scratchpad Notes Across Machines","text":"
Distribute your scratchpad encryption key, push and pull encrypted notes via git, and resolve merge conflicts when two machines edit simultaneously.
Uses: ctx init, ctx pad, ctx pad resolve, scp
","path":["Recipes"],"tags":[]},{"location":"recipes/#bridging-claude-code-auto-memory","level":3,"title":"Bridging Claude Code Auto Memory","text":"
Mirror Claude Code's auto memory (MEMORY.md) into .context/ for version control, portability, and drift detection. Import entries into structured context files with heuristic classification.
Choose the right output pattern for your Claude Code hooks: VERBATIM relay for user-facing reminders, hard gates for invariants, agent directives for nudges, and five more patterns across the spectrum.
Customize what hooks say without changing what they do. Override the QA gate for Python (pytest instead of make lint), silence noisy ceremony nudges, or tailor post-commit instructions for your stack.
Uses: ctx system message list, ctx system message show, ctx system message edit, ctx system message reset
Mermaid sequence diagrams for every system hook: entry conditions, state reads, output, throttling, and exit points. Includes throttling summary table and state file reference.
Uses: All ctx system hooks
","path":["Recipes"],"tags":[]},{"location":"recipes/#auditing-system-hooks","level":3,"title":"Auditing System Hooks","text":"
The 12 system hooks that run invisibly during every session: what each one does, why it exists, and how to verify they're actually firing. Covers webhook-based audit trails, log inspection, and detecting silent hook failures.
Get push notifications when loops complete, hooks fire, or agents hit milestones. Webhook URL is encrypted: never stored in plaintext. Works with IFTTT, Slack, Discord, ntfy.sh, or any HTTP endpoint.
","path":["Recipes"],"tags":[]},{"location":"recipes/#maintenance","level":2,"title":"Maintenance","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#detecting-and-fixing-drift","level":3,"title":"Detecting and Fixing Drift","text":"
Keep context files accurate by detecting structural drift (stale paths, missing files, stale file ages) and task staleness.
Diagnose hook failures, noisy nudges, stale context, and configuration issues. Start with ctx doctor for a structural health check, then use /ctx-doctor for agent-driven analysis of event patterns.
Keep .claude/settings.local.json clean: recommended safe defaults, what to never pre-approve, and a maintenance workflow for cleaning up session debris.
","path":["Recipes"],"tags":[]},{"location":"recipes/#importing-claude-code-plans","level":3,"title":"Importing Claude Code Plans","text":"
Import Claude Code plan files (~/.claude/plans/*.md) into specs/ as permanent project specs. Filter by date, select interactively, and optionally create tasks referencing each imported spec.
Uses: /ctx-import-plans, /ctx-add-task
","path":["Recipes"],"tags":[]},{"location":"recipes/#design-before-coding","level":3,"title":"Design Before Coding","text":"
Front-load design with a four-skill chain: brainstorm the approach, spec the design, task the work, implement step-by-step. Each step produces an artifact that feeds the next.
Encode repeating workflows into reusable skills the agent loads automatically. Covers the full cycle: identify a pattern, create the skill, test with realistic prompts, and iterate until it triggers correctly.
Uses: /ctx-skill-creator, ctx init
","path":["Recipes"],"tags":[]},{"location":"recipes/#running-an-unattended-ai-agent","level":3,"title":"Running an Unattended AI Agent","text":"
Set up a loop where an AI agent works through tasks overnight without you at the keyboard, using ctx for persistent memory between iterations.
This recipe shows how ctx supports long-running agent loops without losing context or intent.
","path":["Recipes"],"tags":[]},{"location":"recipes/#when-to-use-a-team-of-agents","level":3,"title":"When to Use a Team of Agents","text":"
Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
This recipe covers the file overlap test, when teams make things worse, and what ctx provides at each level.
Uses: /ctx-worktree, /ctx-next, ctx status
","path":["Recipes"],"tags":[]},{"location":"recipes/#parallel-agent-development-with-git-worktrees","level":3,"title":"Parallel Agent Development with Git Worktrees","text":"
Split a large backlog across 3-4 agents using git worktrees, each on its own branch and working directory. Group tasks by file overlap, work in parallel, merge back.
Map your project's internal and external dependency structure. Auto-detects Go, Node.js, Python, and Rust. Output as Mermaid, table, or JSON.
Uses: ctx dep, ctx drift
","path":["Recipes"],"tags":[]},{"location":"recipes/autonomous-loops/","level":1,"title":"Running an Unattended AI Agent","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-problem","level":2,"title":"The Problem","text":"
You have a project with a clear list of tasks, and you want an AI agent to work through them autonomously: overnight, unattended, without you sitting at the keyboard.
Each iteration needs to remember what the previous one did, mark tasks as completed, and know when to stop.
Without persistent memory, every iteration starts fresh and the loop collapses. With ctx, each iteration can pick up where the last one left off, but only if the agent persists its context as part of the work.
Unattended operation works because the agent treats context persistence as a first-class deliverable, not an afterthought.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. init context\n# Edit TASKS.md with phased work items\nctx loop --tool claude --max-iterations 10 # 2. generate loop.sh\n./loop.sh 2>&1 | tee /tmp/loop.log & # 3. run the loop\nctx watch --log /tmp/loop.log # 4. process context updates\n# Next morning:\nctx status && ctx load # 5. review the results\n
Read on for permissions, isolation, and completion signals.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init Command Initialize project context and prompt templates ctx loop Command Generate the loop shell script ctx watch Command Monitor AI output and persist context updates ctx load Command Display assembled context (for debugging) /ctx-loop Skill Generate loop script from inside Claude Code /ctx-implement Skill Execute a plan step-by-step with verification","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-1-initialize-for-unattended-operation","level":3,"title":"Step 1: Initialize for Unattended Operation","text":"
Start by creating a .context/ directory configured so the agent can work without human input.
ctx init\n
This creates .context/ with the template files (including a loop prompt at .context/loop.md), and seeds Claude Code permissions in .claude/settings.local.json. Install the ctx plugin for hooks and skills.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-2-populate-tasksmd-with-phased-work","level":3,"title":"Step 2: Populate TASKS.md with Phased Work","text":"
Open .context/TASKS.md and organize your work into phases. The agent works through these systematically, top to bottom, using priority tags to break ties.
# Tasks\n\n## Phase 1: Foundation\n\n- [ ] Set up project structure and build system `#priority:high`\n- [ ] Configure testing framework `#priority:high`\n- [ ] Create CI pipeline `#priority:medium`\n\n## Phase 2: Core Features\n\n- [ ] Implement user registration `#priority:high`\n- [ ] Add email verification `#priority:high`\n- [ ] Create password reset flow `#priority:medium`\n\n## Phase 3: Hardening\n\n- [ ] Add rate limiting to API endpoints `#priority:medium`\n- [ ] Improve error messages `#priority:low`\n- [ ] Write integration tests `#priority:medium`\n
Phased organization matters because it gives the agent natural boundaries. Phase 1 tasks should be completable without Phase 2 code existing yet.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-3-configure-the-loop-prompt","level":3,"title":"Step 3: Configure the Loop Prompt","text":"
The loop prompt at .context/loop.md instructs the agent to operate autonomously:
Read .context/CONSTITUTION.md first (hard rules, never violated)
Load context from .context/ files
Pick one task per iteration
Complete the task and update context files
Commit changes (including .context/)
Signal status with a completion signal
You can customize .context/loop.md for your project. The critical parts are the one-task-per-iteration discipline, proactive context persistence, and completion signals at the end:
## Signal Status\n\nEnd your response with exactly ONE of:\n\n* `SYSTEM_CONVERGED`: All tasks in `TASKS.md` are complete (*this is the\n signal the loop script detects by default*)\n* `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n* (*no signal*): More work remains, continue to the next iteration\n\nNote: the loop script only checks for `SYSTEM_CONVERGED` by default.\n`SYSTEM_BLOCKED` is a convention for the human reviewing the log.\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-4-configure-permissions","level":3,"title":"Step 4: Configure Permissions","text":"
An unattended agent needs permission to use tools without prompting. By default, Claude Code asks for confirmation on file writes, bash commands, and other operations, which stops the loop and waits for a human who is not there.
There are two approaches.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-a-explicit-allowlist-recommended","level":4,"title":"Option A: Explicit Allowlist (Recommended)","text":"
Grant only the permissions the agent needs. In .claude/settings.local.json:
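A minimal sketch of such an allowlist (verify the exact permission syntax against your Claude Code version; the entries mirror the tools discussed here):

```json
{
  "permissions": {
    "allow": [
      "Edit",
      "Bash(make:*)",
      "Bash(go:*)",
      "Bash(git:*)",
      "Bash(ctx:*)"
    ]
  }
}
```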
Adjust the Bash patterns for your project's toolchain. The agent can run make, go, git, and ctx commands but cannot run arbitrary shell commands.
This is recommended even in sandboxed environments because it limits blast radius.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-b-skip-all-permission-checks","level":4,"title":"Option B: Skip All Permission Checks","text":"
Claude Code supports a --dangerously-skip-permissions flag that disables all permission prompts:
claude --dangerously-skip-permissions -p \"$(cat .context/loop.md)\"\n
This Flag Means What It Says
With --dangerously-skip-permissions, the agent can execute any shell command, write to any file, and make network requests without confirmation.
Only use this on a sandboxed machine: ideally a virtual machine with no access to host credentials, no SSH keys, and no access to production systems.
If you would not give an untrusted intern sudo on this machine, do not use this flag.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#enforce-isolation-at-the-os-level","level":4,"title":"Enforce Isolation at the OS Level","text":"
The only controls an agent cannot override are the ones enforced by the operating system, the container runtime, or the hypervisor.
Do Not Skip This Section
This is not optional hardening:
An unattended agent with unrestricted OS access is an unattended shell with unrestricted OS access.
The allowlist above is a strong first layer, but do not rely on a single runtime boundary.
For unattended runs, enforce isolation at the infrastructure level:
Layer What to enforce User account Run the agent as a dedicated unprivileged user with no sudo access and no membership in privileged groups (docker, wheel, adm). Filesystem Restrict the project directory via POSIX permissions or ACLs. The agent should have no access to other users' files or system directories. Container Run inside a Docker/Podman sandbox. Mount only the project directory. Drop capabilities (--cap-drop=ALL). Disable network if not needed (--network=none). Never mount the Docker socket and do not run privileged containers. Prefer rootless containers. Virtual machine Prefer a dedicated VM with no shared folders, no host passthrough, and no keys to other machines. Network If the agent does not need the internet, disable outbound access entirely. If it does, restrict to specific domains via firewall rules. Resource limits Apply CPU, memory, and disk limits (cgroups/container limits). A runaway loop should not fill disk or consume all RAM. Self-modification Make instruction files read-only. CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md should not be writable by the agent user. If using project-local hooks, protect those too.
Use multiple layers together: OS-level isolation (the boundary the agent cannot cross), a permission allowlist (what Claude Code will do within that boundary), and CONSTITUTION.md (a soft nudge for the common case).
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-5-generate-the-loop-script","level":3,"title":"Step 5: Generate the Loop Script","text":"
Use ctx loop to generate a loop.sh tailored to your AI tool:
# Generate for Claude Code with a 10-iteration cap\nctx loop --tool claude --max-iterations 10\n\n# Generate for Aider\nctx loop --tool aider --max-iterations 10\n\n# Custom prompt file and output filename\nctx loop --tool claude --prompt my-prompt.md --output my-loop.sh\n
The generated script reads .context/loop.md, runs the tool, checks for completion signals, and loops until done or the cap is reached.
You can also use the /ctx-loop skill from inside Claude Code.
A Shell Loop Is the Best Practice
The shell loop approach spawns a fresh AI process each iteration, so the only state that carries between iterations is what lives in .context/ and git.
Claude Code's built-in /loop runs iterations within the same session, which can allow context window state to leak between iterations. This can be convenient for short runs, but it is less reliable for unattended loops.
See Shell Loop vs Built-in Loop for details.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-6-run-with-watch-mode","level":3,"title":"Step 6: Run with Watch Mode","text":"
Open two terminals. In the first, run the loop. In the second, run ctx watch to process context updates from the AI output.
# Terminal 1: Run the loop\n./loop.sh 2>&1 | tee /tmp/loop.log\n\n# Terminal 2: Watch for context updates\nctx watch --log /tmp/loop.log\n
The watch command parses XML context-update commands from the AI output and applies them:
<context-update type=\"complete\">user registration</context-update>\n<context-update type=\"learning\"\n context=\"Setting up user registration\"\n lesson=\"Email verification needs SMTP configured\"\n application=\"Add SMTP setup to deployment checklist\"\n>SMTP Requirement</context-update>\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-7-completion-signals-end-the-loop","level":3,"title":"Step 7: Completion Signals End the Loop","text":"
The generated script checks for one completion signal per run. By default this is SYSTEM_CONVERGED. You can change it with the --completion flag:
ctx loop --tool claude --completion BOOTSTRAP_COMPLETE --max-iterations 5\n
The following signals are conventions used in .context/loop.md:
Signal Convention How the script handles it SYSTEM_CONVERGED All tasks in TASKS.md are done Detected by default (--completion default value) SYSTEM_BLOCKED Agent cannot proceed Only detected if you set --completion to this BOOTSTRAP_COMPLETE Initial scaffolding done Only detected if you set --completion to this
The script uses grep -q on the agent's output, so any string works as a signal. If you need to detect multiple signals in one run, edit the generated loop.sh to add additional grep checks.
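The check reduces to something like this (simplified; variable names are illustrative, and the real script reads the captured tool output):

```shell
# Simulated agent output ending with a completion signal.
OUTPUT="All tasks in TASKS.md are complete.
SYSTEM_CONVERGED"

# The same grep -q test the generated loop.sh performs.
if printf '%s\n' "$OUTPUT" | grep -q "SYSTEM_CONVERGED"; then
  STATUS="converged"      # loop exits
else
  STATUS="continue"       # next iteration
fi
echo "$STATUS"            # → converged
# To also detect SYSTEM_BLOCKED, add a second grep -q check here.
```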
When you return in the morning, check the log and the context files:
tail -100 /tmp/loop.log\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-8-use-ctx-implement-for-plan-execution","level":3,"title":"Step 8: Use /ctx-implement for Plan Execution","text":"
Within each iteration, the agent can use /ctx-implement to execute multi-step plans with verification between steps. This is useful for complex tasks that touch multiple files.
The skill breaks a plan into atomic, verifiable steps:
Step 1/6: Create user model .................. OK\nStep 2/6: Add database migration ............. OK\nStep 3/6: Implement registration handler ..... OK\nStep 4/6: Write unit tests ................... OK\nStep 5/6: Run test suite ..................... FAIL\n -> Fixed: missing test dependency\n -> Re-verify ............................... OK\nStep 6/6: Update TASKS.md .................... OK\n
Each step is verified (build, test, syntax check) before moving to the next.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical overnight run:
ctx init\n# Edit TASKS.md and .context/loop.md\n\nctx loop --tool claude --max-iterations 20\n\n./loop.sh 2>&1 | tee /tmp/loop.log &\nctx watch --log /tmp/loop.log\n\n# Next morning:\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#why-autonomous-loops-work-proactive-context-persistence","level":2,"title":"Why Autonomous Loops Work: Proactive Context Persistence","text":"
The autonomous loop pattern works because the agent persists context as part of the job.
Without proactive persistence, the loop degrades into disconnected iterations that repeat work, contradict decisions, and lose track of progress. The agent cannot rely on a human to prompt it. It must treat context updates as part of every task, not as a separate step.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-agent-playbook-self-check-drives-autonomy","level":3,"title":"The Agent Playbook Self-Check Drives Autonomy","text":"
The Agent Playbook defines a self-check prompt the agent applies continuously:
\"If this session ended right now, would the next session know what happened?\"\n
In interactive sessions, a human might remind the agent to save context. When the agent runs unattended, nobody is there to remind it.
The self-check becomes the trigger: after meaningful work, the agent evaluates whether the context files reflect reality and updates them immediately if they do not.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#what-the-agent-does-proactively-between-iterations","level":3,"title":"What the Agent Does Proactively Between Iterations","text":"
At milestones within an iteration, the agent persists without waiting for instructions:
Milestone What the agent does proactively Completes a task Marks it [x] in TASKS.md with #done timestamp Discovers a gotcha Adds it to LEARNINGS.md Makes a design choice Records it in DECISIONS.md with rationale Identifies follow-up work Creates new tasks in TASKS.md with #added timestamp Hits an unexpected failure Documents the root cause before moving on","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#example-what-proactive-persistence-looks-like","level":3,"title":"Example: What Proactive Persistence Looks Like","text":"
Here is what the agent does during a single iteration beyond writing code:
Iteration 4:\n 1. Read TASKS.md -> pick \"Add email verification\" (#priority:high)\n 2. Add #started:2026-01-25-030012 to the task\n 3. Implement the feature (code, tests, docs if needed)\n 4. Tests pass -> mark task [x], add #done:2026-01-25-031544\n 5. Add learning: \"SMTP config must be set before verification handler registers. Order matters in init().\"\n 6. Add decision: \"Use token-based verification links (not codes) because links work better in automated tests.\"\n 7. Create follow-up task: \"Add rate limiting to verification endpoint\" #added:...\n 8. Commit all changes including `.context/`\n 9. No signal emitted -> loop continues to iteration 5\n
Steps 2, 4, 5, 6, and 7 are proactive context persistence:
The agent was not asked to do any of them.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#context-persistence-at-milestones","level":3,"title":"Context Persistence at Milestones","text":"
For long autonomous runs, the agent persists context at natural boundaries, often at phase transitions or after completing a cluster of related tasks. It updates TASKS.md, DECISIONS.md, and LEARNINGS.md as it goes.
If the loop crashes at 4 AM, the context files tell you exactly where to resume. You can also use ctx journal source to review the session transcripts.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-persistence-contract","level":3,"title":"The Persistence Contract","text":"
The autonomous loop has an implicit contract:
Every iteration reads context: TASKS.md, DECISIONS.md, LEARNINGS.md
Every iteration writes context: task updates, new learnings, decisions
Every commit includes .context/ so the next iteration sees changes
Context stays current: if the loop stopped right now, nothing important is lost
Break any part of this contract and the loop degrades.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tips","level":2,"title":"Tips","text":"
Markdown Is Not Enforcement
Your real guardrails are permissions and isolation, not Markdown. CONSTITUTION.md can nudge the agent, but it is probabilistic.
The permission allowlist and OS isolation are deterministic.
For unattended runs, trust the sandbox and the allowlist, not the prose.
Start with a small iteration cap. Use --max-iterations 5 on your first run.
Keep tasks atomic. Each task should be completable in a single iteration.
Check signal discipline. If the loop runs forever, the agent is not emitting SYSTEM_CONVERGED or SYSTEM_BLOCKED. Make the signal requirement explicit in .context/loop.md.
Commit after context updates. Finish code, update .context/, commit including .context/, then signal.
Set up webhook notifications to get notified when the loop completes, hits max iterations, or when hooks fire nudges. The generated loop script includes ctx notify calls automatically.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#next-up","level":2,"title":"Next Up","text":"
When to Use a Team of Agents →: Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: structuring TASKS.md
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/building-skills/","level":1,"title":"Building Project Skills","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-problem","level":2,"title":"The Problem","text":"
You have workflows your agent needs to repeat across sessions: a deploy checklist, a review protocol, a release process. Each time, you re-explain the steps. The agent gets it mostly right but forgets edge cases you corrected last time.
Skills solve this by encoding domain knowledge into a reusable document the agent loads automatically when triggered. A skill is not code - it is a structured prompt that captures what took you sessions to learn.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tldr","level":2,"title":"TL;DR","text":"
/ctx-skill-creator\n
The skill-creator walks you through: identify a repeating workflow, draft a skill, test with realistic prompts, iterate until it triggers correctly and produces good output.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-skill-creator Skill Interactive skill creation and improvement workflow ctx init Command Deploys template skills to .claude/skills/ on first setup","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-1-identify-a-repeating-pattern","level":3,"title":"Step 1: Identify a Repeating Pattern","text":"
Good skill candidates:
Checklists you repeat: deploy steps, release prep, code review
Decisions the agent gets wrong: if you keep correcting the same behavior, encode the correction
Multi-step workflows: anything with a sequence of commands and conditional branches
Domain knowledge: project-specific terminology, architecture constraints, or conventions the agent cannot infer from code alone
Not good candidates: one-off instructions, things the platform already handles (file editing, git operations), or tasks too narrow to reuse.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-2-create-the-skill","level":3,"title":"Step 2: Create the Skill","text":"
Invoke the skill-creator:
You: \"I want a skill for our deploy process\"\n\nAgent: [Asks about the workflow: what steps, what tools,\n what edge cases, what the output should look like]\n
Or capture a workflow you just did:
You: \"Turn what we just did into a skill\"\n\nAgent: [Extracts the steps from conversation history,\n confirms understanding, drafts the skill]\n
The skill-creator produces a SKILL.md file in .claude/skills/your-skill/.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-3-test-with-realistic-prompts","level":3,"title":"Step 3: Test with Realistic Prompts","text":"
The skill-creator proposes 2-3 test prompts - the kind of thing a real user would say. It runs each one and shows the result alongside a baseline (same prompt without the skill) so you can compare.
Agent: \"Here are test prompts I'd try:\n 1. 'Deploy to staging'\n 2. 'Ship the hotfix'\n 3. 'Run the release checklist'\n Want to adjust these?\"\n
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-4-iterate-on-the-description","level":3,"title":"Step 4: Iterate on the Description","text":"
The description field in frontmatter determines when a skill triggers. Claude tends to undertrigger - descriptions need to be specific and slightly \"pushy\":
# Weak - too vague, will undertrigger\ndescription: \"Use for deployments\"\n\n# Strong - covers situations and synonyms\ndescription: >-\n Use when deploying to staging or production, running the release\n checklist, or when the user says 'ship it', 'deploy this', or\n 'push to prod'. Also use after merging to main when a deploy\n is expected.\n
The skill-creator helps you tune this iteratively.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-5-deploy-as-template-optional","level":3,"title":"Step 5: Deploy as Template (Optional)","text":"
If the skill should be available to all projects (not just this one), place it in internal/assets/claude/skills/ so ctx init deploys it to new projects automatically.
Most project-specific skills stay in .claude/skills/ and travel with the repo.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#skill-anatomy","level":2,"title":"Skill Anatomy","text":"
my-skill/\n SKILL.md # Required: frontmatter + instructions (<500 lines)\n scripts/ # Optional: deterministic code the skill can execute\n references/ # Optional: detail loaded on demand (not always)\n assets/ # Optional: output templates, not loaded into context\n
Key sections in SKILL.md:
Section Purpose Required? Frontmatter Name, description (trigger) Yes When to Use Positive triggers Yes When NOT to Use Prevents false activations Yes Process Steps and commands Yes Examples Good/bad output pairs Recommended Quality Checklist Verify before reporting completion For complex skills","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tips","level":2,"title":"Tips","text":"
Description is everything. A great skill with a vague description never fires. Spend time on trigger coverage - synonyms, concrete situations, edge cases.
Stay under 500 lines. If your skill is growing past this, move detail into references/ files and point to them from SKILL.md.
Do not duplicate the platform. If the agent already knows how to do something (edit files, run git commands), do not restate it. Tag paragraphs as Expert/Activation/Redundant and delete Redundant ones.
Explain why, not just what. \"Sort by date because users want recent results first\" beats \"ALWAYS sort by date.\" The agent generalizes from reasoning better than from rigid rules.
Test negative triggers. Make sure the skill does not fire on unrelated prompts. A skill that activates too broadly becomes noise.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Split work across multiple agents using git worktrees.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#see-also","level":2,"title":"See Also","text":"
Skills Reference: full listing of all bundled and project-local skills
Guide Your Agent: how commands, skills, and conversational patterns work together
Design Before Coding: the four-skill chain for front-loading design work
Claude Code's .claude/settings.local.json controls what the agent can do without asking. Over time, this file accumulates one-off permissions from individual sessions: Exact commands with hardcoded paths, duplicate entries, and stale skill references.
A noisy \"allowlist\" makes it harder to spot dangerous permissions and increases the surface area for unintended behavior.
Since settings.local.json is .gitignored, it drifts independently of your codebase. There is no PR review, no CI check: just whatever you clicked \"Allow\" on.
This recipe shows what a well-maintained permission file looks like and how to keep it clean.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Populates default ctx permissions /ctx-drift Detects missing or stale permission entries /ctx-sanitize-permissions Audits for dangerous patterns (security-focused)","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#recommended-defaults","level":2,"title":"Recommended Defaults","text":"
After running ctx init, your settings.local.json will have the ctx defaults pre-populated. Here is an opinionated safe starting point for a Go project using ctx:
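A sketch of what such a file might look like. The exact entries depend on your project; these patterns are illustrative, assembled from the guidance in this section rather than copied from ctx's shipped defaults:

```json
{
  "permissions": {
    "allow": [
      "Bash(ctx:*)",
      "Bash(make:*)",
      "Bash(go build:*)",
      "Bash(go test:*)",
      "Bash(git status)",
      "Bash(git log:*)",
      "Bash(git diff:*)",
      "Skill(ctx-*)"
    ],
    "deny": [
      "Bash(sudo:*)",
      "Bash(git push:*)",
      "Bash(curl:*)",
      "Bash(wget:*)"
    ]
  }
}
```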
The goal is intentional permissions: Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
Use wildcards for trusted binaries: If you trust the binary (your own project's CLI, make, go), a single wildcard like Bash(ctx:*) beats twenty subcommand entries. It reduces noise and means new subcommands work without re-prompting.
Keep git commands granular: Unlike ctx or make, git has both safe commands (git log, git status) and destructive ones (git reset --hard, git clean -f). Listing safe commands individually prevents accidentally pre-approving dangerous ones.
Pre-approve all ctx- skills: Skills shipped with ctx (Skill(ctx-*)) are safe to pre-approve. They are part of your project and you control their content. This prevents the agent from prompting on every skill invocation.
ctx init automatically populates permissions.deny with rules that block dangerous operations. Deny rules are evaluated before allow rules: A denied pattern is always blocked, even if it also matches an allow entry.
The defaults block:
Pattern Why Bash(sudo *) Cannot enter password; will hang Bash(git push *) Must be explicit user action Bash(rm -rf /*) etc. Recursive delete of system/home directories Bash(curl *) / wget Arbitrary network requests Bash(chmod 777 *) World-writable permissions Read/Edit(**/.env*) Secrets and credentials Read(**/*.pem, *.key) Private keys
Read/Edit Deny Rules
Read() and Edit() deny rules have known upstream enforcement issues (claude-code#6631, #24846).
They are included as defense-in-depth and intent documentation.
Blocked by default deny rules: no action needed, ctx init handles these:
Pattern Risk Bash(git push:*) Must be explicit user action Bash(sudo:*) Privilege escalation Bash(rm -rf:*) Recursive delete with no confirmation Bash(curl:*) / Bash(wget:*) Arbitrary network requests
Requires manual discipline: Never add these to allow:
Pattern Risk Bash(git reset:*) Can discard uncommitted work Bash(git clean:*) Deletes untracked files Skill(ctx-sanitize-permissions) Edits this file: self-modification vector Skill(release) Runs the release pipeline: high impact","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#hooks-regex-safety-net","level":2,"title":"Hooks: Regex Safety Net","text":"
Deny rules handle prefix-based blocking natively. Hooks complement them by catching patterns that require regex matching: Things deny rules can't express.
The ctx plugin ships these blocking hooks:
Hook What it blocks ctx system block-non-path-ctx Running ctx from wrong path
Project-local hooks (not part of the plugin) catch regex edge cases:
Hook What it blocks block-dangerous-commands.sh Mid-command sudo/git push (after &&), copies to bin dirs, absolute-path ctx
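An illustrative sketch of the kind of regex check such a hook performs (this is not the actual block-dangerous-commands.sh): it catches sudo or git push smuggled in after a shell operator, which prefix-based deny rules cannot see.

```shell
# Match sudo or git push at the start of the command or after
# &&, ||, or ; -- positions a prefix-based deny rule misses.
is_dangerous() {
  echo "$1" | grep -Eq '(^|&&|\|\||;)[[:space:]]*(sudo|git[[:space:]]+push)([[:space:]]|$)'
}

# A PreToolUse hook would read the proposed command from its input,
# run a check like this, and return a blocking response on a match.
if is_dangerous "make build && sudo cp ctx /usr/local/bin"; then
  echo "would block"
fi
```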
Pre-Approved + Hook-Blocked = Silent Block
If you pre-approve a command that a hook blocks, the user never sees the confirmation dialog. The agent gets a block response and must handle it, which is confusing.
It's better not to pre-approve commands that hooks are designed to intercept.
If manual cleanup is too tedious, use a golden image to automate it:
Snapshot a curated permission set, then restore at session start to automatically drop session-accumulated permissions. See the Permission Snapshots recipe for the full workflow.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#adapting-for-other-languages","level":2,"title":"Adapting for Other Languages","text":"
The recommended defaults above are Go-specific. For other stacks, swap the build/test tooling:
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/context-health/","level":1,"title":"Detecting and Fixing Drift","text":"","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-problem","level":2,"title":"The Problem","text":"
ctx files drift: you rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist, TASKS.md is 80 percent completed checkboxes, and CONVENTIONS.md describes patterns you stopped using two months ago.
Stale context is worse than no context:
An AI tool that trusts outdated references will hallucinate confidently.
This recipe shows how to detect drift, fix it, and keep your .context/ directory lean and accurate.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tldr","level":2,"title":"TL;DR","text":"
ctx drift # detect problems\nctx drift --fix # auto-fix the easy ones\nctx sync --dry-run && ctx sync # reconcile after refactors\nctx compact --archive # archive old completed tasks\nctx status # verify\n
Or just ask your agent: \"Is our context clean?\"
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx drift Command Detect stale paths, missing files, violations ctx drift --fix Command Auto-fix simple issues ctx sync Command Reconcile context with codebase structure ctx compact Command Archive completed tasks, clean up empty sections ctx status Command Quick health overview /ctx-drift Skill Structural plus semantic drift detection /ctx-architecture Skill Refresh ARCHITECTURE.md from actual codebase /ctx-status Skill In-session context summary /ctx-prompt-audit Skill Audit prompt quality and token efficiency","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-workflow","level":2,"title":"The Workflow","text":"
The best way to maintain context health is conversational: Ask your agent, guide it, and let it detect problems, explain them, and fix them with your approval. CLI commands exist for CI pipelines, scripting, and fine-grained control.
For day-to-day maintenance, talk to your agent.
Your Questions Reinforce the Pattern
Asking \"is our context clean?\" does two things:
It triggers a drift check right now
It reinforces the habit
This is reinforcement, not enforcement.
Do not wait for the agent to be proactive on its own:
Guide your agent, especially in early sessions.
Over time, you will ask less and the agent will start offering more.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-1-ask-your-agent","level":3,"title":"Step 1: Ask Your Agent","text":"
The simplest way to check context health:
Is our context clean?\nAnything stale?\nHow healthy are our context files?\n
Or invoke the skill directly:
/ctx-drift\n
The agent performs two layers of analysis:
Layer 1, structural checks (via ctx drift): Dead paths, missing files, completed task counts, constitution violations. Fast and programmatic.
Layer 2, semantic analysis (agent-driven): Does CONVENTIONS.md describe patterns the code no longer follows? Does DECISIONS.md contain entries whose rationale no longer applies? Are there learnings about bugs that are now fixed? This is where the agent adds value the CLI cannot: It reads both context files and source code and compares them.
The agent reports both layers together, explains each finding in plain language, and offers to fix what it can.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-2-maintenance-at-session-start","level":3,"title":"Step 2: Maintenance at Session Start","text":"
You do not need to ask explicitly.
Using Claude Code
ctx ships with Claude Code hooks that remind the agent at the right time to take initiative.
Checking context health at the session start, offering to persist learnings before you quit, and flagging drift when it matters. The agent stays proactive without you having to prompt it:
Agent: Good morning. I've loaded the context files. A few things\n before we start:\n\n - ARCHITECTURE.md references `pkg/auth/` which is now empty\n - DECISIONS.md hasn't been updated in 40 days\n - There are 18 completed tasks ready for archival\n\n Want me to run a quick maintenance pass, or should we jump\n straight into today's work?\n
☝️ This is what persistent, initiative-driven sessions feel like when context is treated as a system instead of a prompt.
If the agent does not offer this on its own, a gentle nudge is enough:
Anything stale before we start?\nHow's the context looking?\n
This turns maintenance from a scheduled chore into a conversation that happens when it matters.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-3-real-time-detection-during-work","level":3,"title":"Step 3: Real-Time Detection During Work","text":"
Agents can notice drift while working: When a mismatch is directly in the path of their current task. If an agent reads ARCHITECTURE.md to find where to add a handler and internal/handlers/ doesn't exist, it will notice because the stale reference blocks its work:
Agent: ARCHITECTURE.md references `internal/handlers/` but that directory\n doesn't exist. I'll look at the actual source tree to find where\n handlers live now.\n
This happens reliably when the drift intersects the task. What is less reliable is the agent generalizing from one mismatch to "there might be more stale references; let me run drift detection." That leap requires the agent to know /ctx-drift exists and to decide the current task should pause for maintenance.
If you want that behavior, reinforce it:
Good catch. Yes, run /ctx-drift and clean up any other stale references.\n
Over time, agents that have seen this pattern will start offering proactively. But do not expect it from a cold start.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-4-archival-and-cleanup","level":3,"title":"Step 4: Archival and Cleanup","text":"
ctx drift detects when TASKS.md has more than 10 completed items and flags it as a staleness warning. Running ctx drift --fix archives completed tasks automatically.
You can also run /ctx-archive to compact on demand.
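The staleness heuristic amounts to counting completed checkboxes. A sketch of that check (the checkbox pattern assumes top-level `- [x]` task lines; this is an illustration, not ctx's internal implementation):

```shell
# Count completed task checkboxes in a tasks file. More than 10
# triggers the archival recommendation described above.
completed_count() {
  grep -c '^- \[x\]' "$1"
}

# completed_count TASKS.md  -> a number; >10 means "recommend archival"
```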
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#knowledge-health-flow","level":3,"title":"Knowledge Health Flow","text":"
Over time, LEARNINGS.md and DECISIONS.md accumulate entries that overlap or partially repeat each other. The check-persistence hook detects when entry counts exceed a configurable threshold and surfaces a nudge:
\"LEARNINGS.md has 25+ entries. Consider running /ctx-consolidate to merge overlapping items.\"
The consolidation workflow:
Review: /ctx-consolidate groups entries by keyword similarity and presents candidate merges for your approval.
Merge: Approved groups are combined into single entries that preserve the key information from each original.
Archive: Originals move to .context/archive/ rather than being deleted; the full history is preserved in git and the archive directory.
Verify: Run ctx drift after consolidation to confirm no cross-references were broken by the merge.
This replaces ad-hoc cleanup with a repeatable, nudge-driven cycle: detect accumulation, review candidates, merge with approval, archive originals.
See also: Knowledge Capture for the recording workflow that feeds into this maintenance cycle.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-doctor-the-superset-check","level":2,"title":"ctx doctor: The Superset Check","text":"
ctx doctor combines drift detection with hook auditing, configuration checks, event logging status, and token size reporting in a single command. If you want one command that covers structural health, hooks, and state:
ctx doctor # everything in one pass\nctx doctor --json # machine-readable for scripting\n
Use /ctx-doctor Too
For agent-driven diagnosis that adds semantic analysis on top of the structural checks, use /ctx-doctor.
See the Troubleshooting recipe for the full workflow.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#cli-reference","level":2,"title":"CLI Reference","text":"
The conversational approach above uses CLI commands under the hood. When you need direct control, use the commands directly.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift","level":3,"title":"ctx drift","text":"
Scan context files for structural problems:
ctx drift\n
Sample output:
Drift Report\n============\n\nWarnings (3):\n ARCHITECTURE.md:14 path \"internal/api/router.go\" does not exist\n ARCHITECTURE.md:28 path \"pkg/auth/\" directory is empty\n CONVENTIONS.md:9 path \"internal/handlers/\" not found\n\nViolations (1):\n TASKS.md 31 completed tasks (recommend archival)\n\nStaleness:\n DECISIONS.md last modified 45 days ago\n LEARNINGS.md last modified 32 days ago\n\nExit code: 1 (warnings found)\n
Level Meaning Action Warning Stale path references, missing files Fix or remove Violation Constitution rule heuristic failures, heavy clutter Fix soon Staleness Files not updated recently Review content
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift-fix","level":3,"title":"ctx drift --fix","text":"
Auto-fix mechanical issues:
ctx drift --fix\n
This handles removing dead path references, updating unambiguous renames, clearing empty sections. Issues requiring judgment are flagged but left for you.
Run ctx drift again afterward to confirm what remains.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-sync","level":3,"title":"ctx sync","text":"
After a refactor, reconcile context with the actual codebase structure:
ctx sync scans for structural changes, compares with ARCHITECTURE.md, checks for new dependencies worth documenting, and identifies context referring to code that no longer exists.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-compact","level":3,"title":"ctx compact","text":"
Consolidate completed tasks and clean up empty sections:
ctx compact # move completed tasks to Completed section,\n # remove empty sections\nctx compact --archive # also archive old tasks to .context/archive/\n
Tasks: moves completed items (with all subtasks done) into the Completed section of TASKS.md
All files: removes empty sections left behind
With --archive: writes tasks older than 7 days to .context/archive/tasks-YYYY-MM-DD.md
Without --archive, nothing is deleted: Tasks are reorganized in place.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-status","level":3,"title":"ctx status","text":"
Quick health overview:
ctx status --verbose\n
Shows file counts, token estimates, modification times, and drift warnings in a single glance.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-prompt-audit","level":3,"title":"/ctx-prompt-audit","text":"
Checks whether your context files are readable, compact, and token-efficient for the model.
/ctx-prompt-audit\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Conversational approach (recommended):
Is our context clean? -> agent runs structural plus semantic checks\nFix what you can -> agent auto-fixes and proposes edits\nArchive the done tasks -> agent runs ctx compact --archive\nHow's token usage? -> agent checks ctx status\n
CLI approach (for CI, scripts, or direct control):
ctx drift # 1. Detect problems\nctx drift --fix # 2. Auto-fix the easy ones\nctx sync --dry-run && ctx sync # 3. Reconcile after refactors\nctx compact --archive # 4. Archive old completed tasks\nctx status # 5. Verify\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tips","level":2,"title":"Tips","text":"
Agents cross-reference context files with source code during normal work. When drift intersects their current task, they will notice: a renamed package, a deleted directory, a path that doesn't resolve. But they rarely generalize from one mismatch to a full audit on their own. Reinforce the pattern: when an agent mentions a stale reference, ask it to run /ctx-drift. Over time, it starts offering.
When an agent says \"this reference looks stale,\" it is usually right.
Semantic drift is more damaging than structural drift: ctx drift catches dead paths. But CONVENTIONS.md describing a pattern your code stopped following three weeks ago is worse. When you ask \"is our context clean?\", the agent can do both checks.
Use ctx status as a quick check: It shows file counts, token estimates, and drift warnings in a single glance. Good for a fast \"is everything ok?\" before diving into work.
Drift detection in CI: add ctx drift --json to your CI pipeline and fail on exit code 3 (violations). This catches constitution-level problems before they reach upstream.
Do not over-compact: Completed tasks have historical value. The --archive flag preserves them in .context/archive/ so you can search past work without cluttering active context.
Sync is cautious by default: Use --dry-run after large refactors, then apply.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#next-up","level":2,"title":"Next Up","text":"
Claude Code Permission Hygiene →: Recommended permission defaults and maintenance workflow for Claude Code.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Tracking Work Across Sessions: task lifecycle and archival
Persisting Decisions, Learnings, and Conventions: keeping knowledge files current
The Complete Session: where maintenance fits in the daily workflow
CLI Reference: full flag documentation for all commands
Context Files: structure and purpose of each .context/ file
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/customizing-hook-messages/","level":1,"title":"Customizing Hook Messages","text":"","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-problem","level":2,"title":"The Problem","text":"
ctx hooks speak ctx's language, not your project's. The QA gate says \"lint the ENTIRE project\" and \"make build,\" but your Python project uses pytest and ruff. The post-commit nudge suggests running lints, but your project uses npm test. You could remove the hook entirely, but then you lose the logic (counting, state tracking, adaptive frequency) just to change the words.
How do you customize what hooks say without removing what they do?
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tldr","level":2,"title":"TL;DR","text":"
ctx system message list # see all hooks and their messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default to .context/ for editing\nctx system message reset qa-reminder gate # revert to embedded default\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#commands-used","level":2,"title":"Commands Used","text":"Tool Type Purpose ctx system message list CLI command Show all hook messages with category and override status ctx system message show CLI command Print the effective message template ctx system message edit CLI command Copy embedded default to .context/ for editing ctx system message reset CLI command Delete user override, revert to default","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#how-it-works","level":2,"title":"How It Works","text":"
Hook messages use a 3-tier fallback:
User override: .context/hooks/messages/{hook}/{variant}.txt
Embedded default: compiled into the ctx binary
Hardcoded fallback: belt-and-suspenders safety net
The hook logic (when to fire, counting, state tracking, cooldowns) is unchanged. Only the content (what text gets emitted) comes from the template. You customize what the hook says without touching how it decides to speak.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#finding-the-original-templates","level":3,"title":"Finding the Original Templates","text":"
The default templates live in the ctx source tree at internal/assets/hooks/messages/.
You can also browse them on GitHub: internal/assets/hooks/messages/
Or use ctx system message show to print any template without digging through source code:
ctx system message show qa-reminder gate # QA gate instructions\nctx system message show check-persistence nudge # persistence nudge\nctx system message show post-commit nudge # post-commit reminder\n
The show output includes the template source and available variables -- everything you need to write a replacement.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#template-variables","level":3,"title":"Template Variables","text":"
Some messages use Go text/template variables for dynamic content:
No context files updated in {{.PromptsSinceNudge}}+ prompts.\nHave you discovered learnings, made decisions,\nestablished conventions, or completed tasks\nworth persisting?\n
The show and edit commands list available variables for each message. When writing a replacement, keep the same {{.VariableName}} placeholders to preserve dynamic content. Placeholders for variables the hook doesn't provide render as <no value>: no error, but the output may look odd.
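The graceful-degradation behavior above comes from Go's text/template package itself. A minimal sketch (the render helper below is illustrative, not part of ctx): with map-backed data, a placeholder whose key is missing renders as <no value> instead of raising an error.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render fills a hook-message-style template. ctx hook messages use
// Go text/template syntax; with map data, an unknown key renders as
// "<no value>" rather than failing execution.
func render(tmpl string, data map[string]any) string {
	var buf bytes.Buffer
	template.Must(template.New("msg").Parse(tmpl)).Execute(&buf, data)
	return buf.String()
}

func main() {
	msg := "No context files updated in {{.PromptsSinceNudge}}+ prompts."
	fmt.Println(render(msg, map[string]any{"PromptsSinceNudge": 5}))
	// A placeholder the hook never provides degrades gracefully:
	fmt.Println(render("{{.NotProvided}}", map[string]any{}))
}
```

Running this prints the filled message first, then `<no value>` for the unprovided placeholder, which is why a stale variable name in an override produces odd output rather than a hard failure.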
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#intentional-silence","level":3,"title":"Intentional Silence","text":"
An empty template file (0 bytes or whitespace-only) means \"don't emit a message\". The hook still runs its logic but produces no output. This lets you silence specific messages without removing the hook from hooks.json.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-python-project-qa-gate","level":2,"title":"Example: Python Project QA Gate","text":"
The default QA gate says \"lint the ENTIRE project\" and references make lint. For a Python project, you want pytest and ruff:
# See the current default\nctx system message show qa-reminder gate\n\n# Copy it to .context/ for editing\nctx system message edit qa-reminder gate\n\n# Edit the override\n
Replace the content in .context/hooks/messages/qa-reminder/gate.txt:
HARD GATE! DO NOT COMMIT without completing ALL of these steps first:\n(1) Run the full test suite: pytest -x\n(2) Run the linter: ruff check .\n(3) Verify a clean working tree\nRun tests and linter BEFORE every git commit, no exceptions.\n
The hook still fires on every Edit call. The logic is identical. Only the instructions changed.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-silencing-ceremony-nudges","level":2,"title":"Example: Silencing Ceremony Nudges","text":"
The ceremony check nudges you to use /ctx-remember and /ctx-wrap-up. If your team has a different workflow and finds these noisy, copy each template and empty it:
ctx system message edit check-ceremonies both\nctx system message edit check-ceremonies remember\nctx system message edit check-ceremonies wrapup\n\n# Then empty each copied file under .context/hooks/messages/check-ceremonies/\n
The hooks still track ceremony usage internally, but they no longer emit any visible output.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-javascript-project-post-commit","level":2,"title":"Example: JavaScript Project Post-Commit","text":"
The default post-commit nudge mentions generic \"lints and tests.\" For a JavaScript project:
ctx system message edit post-commit nudge\n
Replace with:
Commit succeeded. 1. Offer context capture to the user: Decision (design\nchoice?), Learning (gotcha?), or Neither. 2. Ask the user: \"Want me to\nrun npm test and eslint before you push?\" Do NOT push. The user pushes\nmanually.\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-two-categories","level":2,"title":"The Two Categories","text":"
Not all messages are equal. The list command shows each message's category:
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#customizable-17-messages","level":3,"title":"Customizable (17 messages)","text":"
Messages that are opinions: project-specific wording that benefits from customization. These are the primary targets for override.
Templates that reference undefined variables render <no value>: no error, graceful degradation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tips","level":2,"title":"Tips","text":"
Override files are version-controlled: they live in .context/ alongside your other context files. Team members get the same customized messages.
Start with show: always check the current default before editing. The embedded template is the baseline your override replaces.
Use reset to undo: if a customization causes confusion, reset reverts to the embedded default instantly.
Empty file = silence: you don't need to delete the hook. An empty override file silences the message while preserving the hook's logic.
JSON output for scripting: ctx system message list --json returns structured data for automation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#see-also","level":2,"title":"See Also","text":"
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: verifying hooks are running and auditing their output
Understanding how packages relate to each other is the first step in onboarding, refactoring, and architecture review. ctx dep generates dependency graphs from source code so you can see the structure at a glance instead of tracing imports by hand.
# Auto-detect ecosystem and output Mermaid (default)\nctx dep\n\n# Table format for a quick terminal overview\nctx dep --format table\n\n# JSON for programmatic consumption\nctx dep --format json\n
By default, only internal (first-party) dependencies are shown. Add --external to include third-party packages:
ctx dep --external\nctx dep --external --format table\n
This is useful when auditing transitive dependencies or checking which packages pull in heavy external libraries.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#when-to-use-it","level":2,"title":"When to Use It","text":"
Onboarding. Generate a Mermaid graph and drop it into the project wiki. New contributors see the architecture before reading code.
Refactoring. Before moving packages, check what depends on them. Combine with ctx drift to find stale references after the move.
Architecture review. Table format gives a quick overview; Mermaid format goes into design docs and PRs.
Pre-merge checks. Run ctx dep in CI to detect unexpected new dependencies between packages.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#combining-with-other-commands","level":2,"title":"Combining with Other Commands","text":"","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#refactoring-with-ctx-drift","level":3,"title":"Refactoring with ctx drift","text":"
# See the dependency structure before refactoring\nctx dep --format table\n\n# After moving packages, check for broken references\nctx drift\n
Use JSON output as input for context files or architecture documentation:
# Generate a dependency snapshot for the context directory\nctx dep --format json > .context/deps.json\n\n# Or pipe into other tools\nctx dep --format mermaid >> docs/architecture.md\n
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#monorepos-and-multi-ecosystem-projects","level":2,"title":"Monorepos and Multi-Ecosystem Projects","text":"
In a monorepo with multiple ecosystems, ctx dep picks the first manifest it finds (Go beats Node.js beats Python beats Rust). Use --type to target a specific ecosystem:
# In a repo with both go.mod and package.json\nctx dep --type node\nctx dep --type go\n
For separate subdirectories, run from each root:
cd services/api && ctx dep --format table\ncd frontend && ctx dep --type node --format mermaid\n
Start with table format. It is the fastest way to get a mental model of the dependency structure. Switch to Mermaid when you need a visual for documentation or a PR.
Pipe JSON to jq. Filter for specific packages, count edges, or extract subgraphs programmatically.
Skip --external unless you need it. Internal-only graphs are cleaner and load faster. Add external deps when you are specifically auditing third-party usage.
Force --type in CI. Auto-detection is convenient locally, but explicit types prevent surprises when the repo structure changes.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/design-before-coding/","level":1,"title":"Design Before Coding","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-problem","level":2,"title":"The Problem","text":"
You start coding a feature. Halfway through, you realize the approach doesn't handle a key edge case. You refactor. Then you discover the CLI interface doesn't fit the existing patterns. More refactoring.
The design work happened during implementation, mixed in with debugging and trial-and-error. The result works, but the spec was never written down, the trade-offs were never recorded, and the next session has no idea why things are shaped this way.
How do you front-load design so the implementation is straightforward?
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tldr","level":2,"title":"TL;DR","text":"
/ctx-brainstorm # explore the design space\n/ctx-spec # write the spec document\n/ctx-add-task # break it into tasks\n/ctx-implement # execute step-by-step\n
Four skills, used in sequence. Each produces an artifact that feeds the next.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-brainstorm Skill Structured design dialogue: explore approaches /ctx-spec Skill Scaffold and fill out a spec from the template /ctx-add-task Skill Add implementation tasks to TASKS.md /ctx-implement Skill Execute a plan step-by-step with verification /ctx-add-decision Skill Record design choices made during brainstorming","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-1-brainstorm-the-design","level":3,"title":"Step 1: Brainstorm the Design","text":"
Start with /ctx-brainstorm when you have a rough idea but haven't committed to an approach. The skill follows a gated process:
Understand context: what exists today, what's the gap
Clarify the idea: one question at a time, not a wall of questions
Lock understanding: confirm before exploring solutions
Explore 2-3 approaches: with trade-offs for each
Stress-test: poke holes in the chosen approach
Present the design: detailed enough to implement from
You: \"I want to add RSS feed generation to the blog.\"\n\nAgent: \"Before we design, let me understand: is this for the journal\n site, the docs site, or both?\"\n\nYou: \"Journal site only.\"\n\nAgent: [asks about update frequency, content scope, existing infra...]\n [presents 3 approaches with trade-offs]\n\nYou: \"Option 2 - generate at build time.\"\n\nAgent: \"Let me stress-test that. What happens when...\"\n [probes edge cases]\n \"Design looks solid. Want me to record the approach as a\n decision and write a spec?\"\n
The brainstorm produces a validated design in conversation. It doesn't produce a file - that's what the spec is for.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-2-write-the-spec","level":3,"title":"Step 2: Write the Spec","text":"
Once the design is clear, use /ctx-spec to produce the written artifact. The skill reads the project's spec template (specs/tpl/spec-template.md) and walks through each section:
You: /ctx-spec (rss feed generation)\n\nAgent: \"Creating specs/rss-feed.md from template.\"\n\n \"Problem: What user-visible problem does this solve? Why now?\"\n\nYou: \"Users can't subscribe to blog updates. They have to check\n the site manually.\"\n\nAgent: [writes the Problem section, moves to Approach]\n\n \"Approach: High-level, how does this work?\"\n\nYou: \"Generate feed.xml during ctx journal site. Atom format.\"\n\nAgent: [writes Approach, moves to Happy Path, then Edge Cases...]\n
The skill spends extra time on Edge Cases and Error Handling: These are where specs earn their value. The skill pushes for at least three edge cases and won't accept \"none\" without a challenge.
Sections that don't apply can be skipped. The result is a complete spec at specs/{feature-name}.md.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-3-break-into-tasks","level":3,"title":"Step 3: Break Into Tasks","text":"
After the spec is written, the skill offers to create tasks:
Agent: \"Spec written to specs/rss-feed.md. Want me to break this\n into tasks in TASKS.md?\"\n\nYou: \"Yes.\"\n\nAgent: [creates a Phase section referencing the spec, adds tasks]\n \"Added Phase 0.8: RSS/Atom Feed Generation with 4 tasks:\n - [ ] Implement feed.xml generation in ctx journal site\n - [ ] Add --feed flag and .ctxrc feed config\n - [ ] Write tests for feed generation\n - [ ] Add recipe and update docs\"\n
Each task is independently completable and references the spec via Spec: specs/rss-feed.md in the Phase header.
You can also invoke /ctx-add-task directly at any point to add individual tasks.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-4-implement-step-by-step","level":3,"title":"Step 4: Implement Step-by-Step","text":"
Pick up tasks with /ctx-implement. The skill loads the spec, breaks work into atomic steps, and checkpoints after every 3-5 steps with a build and test verification:
You: /ctx-implement (specs/rss-feed.md)\n\nAgent: \"Loaded spec. Breaking into steps:\n 1. Add FeedEntry type to journal package\n 2. Implement feed.xml template\n 3. Wire into site generation pipeline\n [builds and tests after step 3]\n 4. Add --feed flag to ctx journal site\n 5. Add .ctxrc feed configuration\n [builds and tests after step 5]\n ...\"\n
If a build or test fails, the agent stops, diagnoses, and fixes before continuing.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#when-to-skip-steps","level":2,"title":"When to Skip Steps","text":"
Not every feature needs all four steps. Use your judgment:
Situation Start at Vague idea, multiple valid approaches Step 1: Brainstorm Clear approach, need to document it Step 2: Spec Spec already exists, need to plan work Step 3: Tasks Tasks exist, ready to code Step 4: Implement
A brainstorm without a spec is fine for small decisions. A spec without a brainstorm is fine when the design is obvious. The full chain is for features complex enough to warrant front-loaded design.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need skill names. Natural language works:
You say What happens \"Let's think through this feature\" /ctx-brainstorm \"Spec this out\" /ctx-spec \"Write a design doc for...\" /ctx-spec \"Break this into tasks\" /ctx-add-task \"Implement the spec\" /ctx-implement \"Let's design before we build\" Starts at brainstorm","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tips","level":2,"title":"Tips","text":"
Brainstorm first when uncertain. If you can articulate the approach in two sentences, skip to spec. If you can't, brainstorm.
Specs prevent scope creep. The Non-Goals section is as important as the approach. Writing down what you won't do keeps implementation focused.
Edge cases are the point. A spec that only describes the happy path isn't a spec - it's a wish. The /ctx-spec skill pushes for at least 3 edge cases because that's where designs break.
Record decisions during brainstorming. When you choose between approaches, the agent offers to persist the trade-off via /ctx-add-decision. Accept - future sessions need to know why, not just what.
Specs are living documents. Update them when implementation reveals new constraints. A spec that diverges from reality is worse than no spec.
The spec template is customizable. Edit specs/tpl/spec-template.md to match your project's needs. The /ctx-spec skill reads whatever template it finds there.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-spec: spec scaffolding from template
Skills Reference: /ctx-implement: step-by-step execution with verification
Tracking Work Across Sessions: task lifecycle and archival
Importing Claude Code Plans: turning ephemeral plans into permanent specs
Persisting Decisions, Learnings, and Conventions: capturing design trade-offs
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/external-context/","level":1,"title":"Keeping Context in a Separate Repo","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-problem","level":2,"title":"The Problem","text":"
ctx files contain project-specific decisions, learnings, conventions, and tasks. By default, they live in .context/ inside the project tree, and that works well when the context can be public.
But sometimes you need the context outside the project:
Open-source projects with private context: Your architectural notes, internal task lists, and scratchpad entries shouldn't ship with the public repo.
Compliance or IP concerns: Context files reference sensitive design rationale that belongs in a separate access-controlled repository.
Personal preference: You want a single context repo that covers multiple projects, or you just prefer keeping notes separate from code.
ctx supports this through three configuration methods. This recipe shows how to set them up and how to tell your AI assistant where to find the context.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tldr","level":2,"title":"TL;DR","text":"
Point ctx at the external directory once (via .ctxrc, the CTX_DIR environment variable, or the --context-dir flag); all ctx commands then use it automatically.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context directory --context-dir Global flag Point ctx at a non-default directory --allow-outside-cwd Global flag Permit context outside the project root .ctxrc Config file Persist the context directory setting CTX_DIR Env variable Override context directory per-session /ctx-status Skill Verify context is loading correctly","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-1-create-the-private-context-repo","level":3,"title":"Step 1: Create the Private Context Repo","text":"
Create a separate repository for your context files. This can live anywhere: a private GitHub repo, a shared drive, a sibling directory:
# Create the context repo\nmkdir ~/repos/myproject-context\ncd ~/repos/myproject-context\ngit init\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-2-initialize-ctx-pointing-at-it","level":3,"title":"Step 2: Initialize ctx Pointing at It","text":"
From your project root, initialize ctx with --context-dir pointing to the external location. Because the directory is outside your project tree, you also need --allow-outside-cwd:
cd ~/repos/myproject\nctx --context-dir ~/repos/myproject-context \\\n --allow-outside-cwd \\\n init\n
This creates the full .context/-style file set inside ~/repos/myproject-context/ instead of ~/repos/myproject/.context/.
Boundary Validation
ctx validates that the .context directory is within the current working directory.
If your external directory is truly outside the project root:
Either every ctx command needs --allow-outside-cwd,
or you can persist the setting in .ctxrc (next step).
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-3-make-it-stick","level":3,"title":"Step 3: Make It Stick","text":"
Typing --context-dir and --allow-outside-cwd on every command is tedious. Pick one of these methods to make the configuration permanent.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-a-ctxrc-recommended","level":4,"title":"Option A: .ctxrc (Recommended)","text":"
Create a .ctxrc file in your project root:
# .ctxrc: committed to the project repo\ncontext_dir: ~/repos/myproject-context\nallow_outside_cwd: true\n
ctx reads .ctxrc automatically. Every command now uses the external directory without extra flags:
ctx status # reads from ~/repos/myproject-context\nctx add learning \"Redis MULTI doesn't roll back on error\"\n
Commit .ctxrc
.ctxrc belongs in the project repo. It contains no secrets: It's just a path and a boundary override.
.ctxrc lets teammates share the same configuration.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-b-ctx_dir-environment-variable","level":4,"title":"Option B: CTX_DIR Environment Variable","text":"
Good for CI pipelines, temporary overrides, or when you don't want to commit a .ctxrc:
# In your shell profile (~/.bashrc, ~/.zshrc)\nexport CTX_DIR=~/repos/myproject-context\n
Or for a single session:
CTX_DIR=~/repos/myproject-context ctx status\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-c-shell-alias","level":4,"title":"Option C: Shell Alias","text":"
If you prefer a shell alias over .ctxrc:
# ~/.bashrc or ~/.zshrc\nalias ctx='ctx --context-dir ~/repos/myproject-context --allow-outside-cwd'\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#priority-order","level":4,"title":"Priority Order","text":"
When multiple methods are set, ctx resolves the context directory in this order (highest priority first):
--context-dir flag
CTX_DIR environment variable
context_dir in .ctxrc
Default: .context/
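The priority order above is a simple first-match-wins chain. A minimal sketch of that documented resolution order (resolveContextDir is a hypothetical helper for illustration, not ctx's actual implementation):

```go
package main

import "fmt"

// resolveContextDir mirrors the documented precedence:
// --context-dir flag > CTX_DIR env var > .ctxrc context_dir > default.
// Empty string means "not set" at that level.
func resolveContextDir(flag, env, rcValue string) string {
	for _, v := range []string{flag, env, rcValue} {
		if v != "" {
			return v
		}
	}
	return ".context/"
}

func main() {
	// Env var set, no flag: the env var wins over .ctxrc.
	fmt.Println(resolveContextDir("", "~/repos/myproject-context", "~/other"))
	// Nothing set anywhere: fall back to the in-tree default.
	fmt.Println(resolveContextDir("", "", ""))
}
```

The practical consequence: a one-off `--context-dir` on the command line always overrides a committed .ctxrc, which is what makes temporary experiments safe.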
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-4-agent-auto-discovery-via-bootstrap","level":3,"title":"Step 4: Agent Auto-Discovery via Bootstrap","text":"
When context lives outside the project tree, your AI assistant needs to know where to find it. The ctx system bootstrap command resolves the configured context directory and communicates it to the agent automatically:
$ ctx system bootstrap\nctx bootstrap\n=============\n\ncontext_dir: /home/user/repos/myproject-context\n\nFiles:\n CONSTITUTION.md, TASKS.md, DECISIONS.md, ...\n
The CLAUDE.md template generated by ctx init already instructs the agent to run ctx system bootstrap at session start. Because .ctxrc is in the project root, your agent inherits the external path automatically through that ctx system bootstrap call.
Here is the relevant section from CLAUDE.md for reference:
<!-- CLAUDE.md -->\n1. **Run `ctx system bootstrap`**: CRITICAL, not optional.\n This tells you where the context directory is. If it fails or returns\n no context_dir, STOP and warn the user.\n
Moreover, every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: /home/user/repos/myproject-context footer, so the agent remains anchored to the correct directory even in long sessions.
If you use CTX_DIR instead of .ctxrc, export it in your shell profile so the hook process inherits it:
export CTX_DIR=~/repos/myproject-context\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-5-share-with-teammates","level":3,"title":"Step 5: Share with Teammates","text":"
Teammates clone both repos and set up .ctxrc:
# Clone the project\ngit clone git@github.com:org/myproject.git\ncd myproject\n\n# Clone the private context repo\ngit clone git@github.com:org/myproject-context.git ~/repos/myproject-context\n
If .ctxrc is already committed to the project, they're done: ctx commands will find the external context automatically.
If teammates use different paths, each developer sets their own CTX_DIR:
export CTX_DIR=~/my-own-path/myproject-context\n
For encryption key distribution across the team, see the Syncing Scratchpad Notes recipe.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-6-day-to-day-sync","level":3,"title":"Step 6: Day-to-Day Sync","text":"
The external context repo has its own git history. Treat it like any other repo: Commit and push after sessions:
cd ~/repos/myproject-context\n\n# After a session\ngit add -A\ngit commit -m \"Session: refactored auth module, added rate-limit learning\"\ngit push\n
Your AI assistant can do this too. When ending a session:
You: \"Save what we learned and push the context repo.\"\n\nAgent: [runs ctx add learning, then commits and pushes the context repo]\n
You can also set up a post-session habit: project code gets committed to the project repo, context gets committed to the context repo.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the flags; simply ask your assistant:
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#set-up-your-system-using-natural-language","level":3,"title":"Set Up Your System Using Natural Language","text":"
You: \"Set up ctx to use ~/repos/myproject-context as the context directory.\"\n\nAgent: \"I'll create a .ctxrc in the project root pointing to that path.\n I'll also update CLAUDE.md so future sessions know where to find\n context. Want me to initialize the context files there too?\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#configure-separate-repo-for-context-folder-using-natural-language","level":3,"title":"Configure Separate Repo for .context Folder Using Natural Language","text":"
You: \"My context is in a separate repo. Can you load it?\"\n\nAgent: [reads .ctxrc, finds the path, loads context from the external dir]\n \"Loaded. You have 3 pending tasks, last session was about the auth\n refactor.\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tips","level":2,"title":"Tips","text":"
Start simple. If you don't need external context yet, don't set it up. The default .context/ in-tree is the easiest path. Move to an external repo when you have a concrete reason.
One context repo per project. Sharing a single context directory across multiple projects creates confusion. Keep the mapping 1:1.
Use .ctxrc over env vars when the path is stable. It's committed, documented, and works for the whole team without per-developer shell setup.
Don't forget the boundary flag. The most common error is Error: context directory is outside the project root. Set allow_outside_cwd: true in .ctxrc or pass --allow-outside-cwd.
Commit both repos at session boundaries. Context without code history (or code without context history) loses half the value.
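Putting the tips together, a `.ctxrc` for the external-repo setup might look like this. The `allow_outside_cwd` key comes from the error message above; the file format and the `context_dir` key name are assumptions for illustration, so check the CLI Reference for the exact schema:

```yaml
# Hypothetical .ctxrc sketch - key names other than allow_outside_cwd
# are illustrative; consult the CLI Reference for the real schema.
context_dir: ~/repos/myproject-context
allow_outside_cwd: true
```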
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#next-up","level":2,"title":"Next Up","text":"
The Complete Session →: Walk through a full ctx session from start to finish.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#see-also","level":2,"title":"See Also","text":"
Setting Up ctx Across AI Tools: initial setup recipe
Syncing Scratchpad Notes Across Machines: distribute encryption keys when context is shared
CLI Reference: all global flags including --context-dir and --allow-outside-cwd
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/guide-your-agent/","level":1,"title":"Guide Your Agent","text":"
Commands vs. Skills
Commands (ctx status, ctx add task) run in your terminal.
Skills (/ctx-reflect, /ctx-next) run inside your AI coding assistant.
Recipes combine both.
Think of commands as structure and skills as behavior.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#proactive-behavior","level":2,"title":"Proactive Behavior","text":"
These recipes show explicit commands and skills, but agents trained on the ctx playbook are proactive: They offer to save learnings after debugging, record decisions after trade-offs, create follow-up tasks after completing work, and suggest what to work on next.
Your questions train the agent. Asking \"what have we learned?\" or \"is our context clean?\" does two things:
It triggers the workflow right now,
and it reinforces the pattern.
The more you guide, the more the agent internalizes the behavior and begins offering it on its own.
Each recipe includes a Conversational Approach section showing these natural-language patterns.
Tip
Don't wait passively for proactive behavior: especially in early sessions.
Ask, guide, reinforce. Over time, you ask less and the agent offers more.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#next-up","level":2,"title":"Next Up","text":"
Setup Across AI Tools →: Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle from start to finish
Prompting Guide: general tips for working effectively with AI coding assistants
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/hook-output-patterns/","level":1,"title":"Hook Output Patterns","text":"","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-problem","level":2,"title":"The Problem","text":"
Claude Code hooks can output text, JSON, or nothing at all. But the format of that output determines who sees it and who acts on it.
Choose the wrong pattern, and your carefully crafted warning gets silently absorbed by the agent, or your agent-directed nudge gets dumped on the user as noise.
This recipe catalogs the known hook output patterns and explains when to use each one.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#tldr","level":2,"title":"TL;DR","text":"
Eight patterns, spanning full control to full invisibility. The four anchor points:
hard gate (exit 2),
VERBATIM relay (agent MUST show),
agent directive (context injection),
and silent side-effect (background work).
Most hooks belong in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-spectrum","level":2,"title":"The Spectrum","text":"
These patterns form a spectrum based on who decides what the user sees:
The spectrum runs from full hook control (hard gate) to full invisibility (silent side effect).
Most hooks belong somewhere in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-1-hard-gate","level":2,"title":"Pattern 1: Hard Gate","text":"
Block the tool call entirely. The agent cannot proceed: it must find another approach or tell the user.
echo '{\"decision\": \"block\", \"reason\": \"Use ctx from PATH, not ./ctx\"}'\n
When to use: Enforcing invariants that must never be violated: Constitution rules, security boundaries, destructive command prevention.
Hook type: PreToolUse only (Claude Code first-class mechanism).
Examples in ctx:
ctx system block-non-path-ctx: Enforces the PATH invocation rule
block-git-push.sh: Requires explicit user approval for pushes (project-local)
block-dangerous-commands.sh: Prevents sudo, copies to ~/.local/bin (project-local)
Trade-off: The agent gets a block response with a reason. Good reasons help the agent recover (\"use X instead\"); bad reasons leave it stuck.
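A complete hard gate is only a few lines. The sketch below assumes `jq` is on PATH; `tool_input.command` is the field Claude Code sends on stdin for Bash tool calls:

```shell
#!/bin/sh
# PreToolUse hard-gate sketch: block ./ctx invocations with a
# reason the agent can act on.

gate() {
  # The hook payload arrives as JSON on stdin.
  cmd=$(jq -r '.tool_input.command // empty')
  case "$cmd" in
    ./ctx|"./ctx "*)
      printf '{"decision": "block", "reason": "Use ctx from PATH, not ./ctx"}\n'
      ;;
  esac
}

# A real hook would end with a bare `gate`; here we feed it a sample payload.
printf '%s' '{"tool_input":{"command":"./ctx status"}}' | gate
```

Non-matching commands produce no output, so the tool call proceeds normally.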
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-2-verbatim-relay","level":2,"title":"Pattern 2: VERBATIM Relay","text":"
Force the agent to show this to the user as-is. The explicit instruction overcomes the agent's tendency to silently absorb context.
echo \"IMPORTANT: Relay this warning to the user VERBATIM before answering their question.\"\necho \"\"\necho \"┌─ Journal Reminder ─────────────────────────────\"\necho \"│ You have 12 sessions not yet exported.\"\necho \"└────────────────────────────────────────────────\"\n
When to use: Actionable reminders the user needs to see regardless of what they asked: Stale backups, unimported sessions, resource warnings.
Hook type: UserPromptSubmit (runs before the agent sees the prompt).
Examples in ctx:
ctx system check-journal: Unexported sessions and unenriched entries
ctx system check-context-size: Context capacity warning
ctx system check-resources: Resource pressure (memory, swap, disk, load): DANGER only
ctx system check-freshness: Technology constant staleness warning
check-backup-age.sh: Stale backup warning (project-local)
Trade-off: Noisy if overused. Every VERBATIM relay adds a preamble before the agent's actual answer. Throttle with once-per-day markers or adaptive frequency.
Key detail: The phrase IMPORTANT: Relay this ... VERBATIM is what makes this work. Without it, agents tend to process the information internally and never surface it. The explicit instruction is the pattern: the box-drawing is just fancy formatting.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-3-agent-directive","level":2,"title":"Pattern 3: Agent Directive","text":"
Tell the agent to do something, not the user. The agent decides whether and how to involve the user.
echo \"┌─ Persistence Checkpoint (prompt #25) ───────────\"\necho \"│ No context files updated in 15+ prompts.\"\necho \"│ Have you discovered learnings, decisions,\"\necho \"│ or completed tasks worth persisting?\"\necho \"└──────────────────────────────────────────────────\"\n
When to use: Behavioral nudges. The hook detects a condition and asks the agent to consider an action. The user may never need to know.
Hook type: UserPromptSubmit.
Examples in ctx:
ctx system check-persistence: Nudges the agent to persist context
Trade-off: No guarantee the agent acts. The nudge is one signal among many in the context window. Strong phrasing helps (\"Have you...?\" is better than \"Consider...\"), but ultimately the agent decides.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-4-silent-context-injection","level":2,"title":"Pattern 4: Silent Context Injection","text":"
Load context with no visible output. The agent gets enriched without either party noticing.
ctx agent --budget 4000 2>/dev/null || true\n
When to use: Background context loading that should be invisible. The agent benefits from the information, but neither the agent nor the user needs to know it happened.
Hook type: PreToolUse with .* matcher (runs on every tool call).
Examples in ctx:
The ctx agent PreToolUse hook: injects project context silently
Trade-off: Adds latency to every tool call. Keep the injected content small and fast to generate.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-5-silent-side-effect","level":2,"title":"Pattern 5: Silent Side-Effect","text":"
Do work, produce no output: Housekeeping that needs no acknowledgment.
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
When to use: Cleanup, log rotation, temp file management. Anything where the action is the point and nobody needs to know it happened.
Hook type: Any hook where output is irrelevant.
Examples in ctx:
Log rotation, marker file cleanup, state directory maintenance
Trade-off: None, if the action is truly invisible. If it can fail in a way that matters, consider logging.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-6-conditional-relay","level":3,"title":"Pattern 6: Conditional Relay","text":"
Tell the agent to relay only if a condition holds in context.
echo \"If the user's question involves modifying .context/ files,\"\necho \"relay this warning VERBATIM:\"\necho \"\"\necho \"┌─ Context Integrity ─────────────────────────────\"\necho \"│ CONSTITUTION.md has not been verified in 7 days.\"\necho \"└────────────────────────────────────────────────\"\necho \"\"\necho \"Otherwise, proceed normally.\"\n
When to use: Warnings that only matter in certain contexts. Avoids noise when the user is doing unrelated work.
Trade-off: Depends on the agent's judgment about when the condition holds. More fragile than VERBATIM relay, but less noisy.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-7-suggested-action","level":3,"title":"Pattern 7: Suggested Action","text":"
Give the agent a specific command to propose to the user.
echo \"┌─ Stale Dependencies ──────────────────────────\"\necho \"│ go.sum is 30+ days newer than go.mod.\"\necho \"│ Suggested: run \\`go mod tidy\\`\"\necho \"│ Ask the user before proceeding.\"\necho \"└───────────────────────────────────────────────\"\n
When to use: The hook detects a fixable condition and knows the fix. Goes beyond a nudge: Gives the agent a concrete next step. The agent still asks for permission but knows exactly what to propose.
Trade-off: The suggestion might be wrong or outdated. The \"ask the user before proceeding\" part is critical.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-8-escalating-severity","level":3,"title":"Pattern 8: Escalating Severity","text":"
Different urgency tiers with different relay expectations.
# INFO: agent processes silently, mentions if relevant\necho \"INFO: Last test run was 3 days ago.\"\n\n# WARN: agent should mention to user at next natural pause\necho \"WARN: 12 uncommitted changes across 3 branches.\"\n\n# CRITICAL: agent must relay immediately, before any other work\necho \"CRITICAL: Relay VERBATIM before answering. Disk usage at 95%.\"\n
When to use: When you have multiple hooks producing output and need to avoid overwhelming the user. INFO gets absorbed, WARN gets mentioned, CRITICAL interrupts.
Examples in ctx:
ctx system check-resources: Uses two tiers (WARNING/DANGER) internally but only fires the VERBATIM relay at DANGER level: WARNING is silent. See ctx system for the user-facing command that shows both tiers.
Trade-off: Requires agent training or convention to recognize the tiers. Without a shared protocol, the prefixes are just text.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#choosing-a-pattern","level":2,"title":"Choosing a Pattern","text":"
Is the agent about to do something forbidden?\n └─ Yes → Hard gate\n\nDoes the user need to see this regardless of what they asked?\n └─ Yes → VERBATIM relay\n └─ Sometimes → Conditional relay\n\nShould the agent consider an action?\n └─ Yes, with a specific fix → Suggested action\n └─ Yes, open-ended → Agent directive\n\nIs this background context the agent should have?\n └─ Yes → Silent injection\n\nIs this housekeeping?\n └─ Yes → Silent side-effect\n
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#design-tips","level":2,"title":"Design Tips","text":"
Throttle aggressively: VERBATIM relays that fire every prompt will be ignored or resented. Use once-per-day markers (touch $REMINDED), adaptive frequency (every Nth prompt), or staleness checks (only fire if condition persists).
Include actionable commands: \"You have 12 unimported sessions\" is less useful than \"You have 12 unimported sessions. Run: ctx journal import --all.\" Give the user (or agent) the exact next step.
Use box-drawing for visual structure: The ┌─ ─┐ │ └─ ─┘ pattern makes hook output visually distinct from agent prose. It also signals \"this is machine-generated, not agent opinion.\"
Test the silence path: Most hook runs should produce no output (the condition isn't met). Make sure the common case is fast and silent.
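The throttling tip above can be sketched with a once-per-day marker; the marker path is illustrative (real hooks keep these under a state directory):

```shell
#!/bin/sh
# Once-per-day throttle: the marker name encodes today's date,
# so a fresh marker appears (and the reminder fires) once per day.
marker="/tmp/ctx-reminder-demo-$(date +%Y-%m-%d)"

if [ ! -e "$marker" ]; then
  touch "$marker"    # first run today: fire the reminder
  echo "IMPORTANT: Relay this warning to the user VERBATIM."
fi
# Every later run today finds the marker and stays silent.
```

The silence path here is also the fast path: one `[ -e ]` check and exit.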
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Lessons from 19 days of hook debugging in ctx. Every one of these was encountered, debugged, and fixed in production.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#silent-misfire-wrong-key-name","level":3,"title":"Silent Misfire: Wrong Key Name","text":"
{ \"PreToolUseHooks\": [ ... ] }\n
The key is PreToolUse, not PreToolUseHooks. Claude Code does not warn about unrecognized keys: a misspelled key means the hook is silently ignored, with no error. Always test with a debug echo first to confirm the hook fires before adding real logic.
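For reference, a settings fragment in the shape that does fire, with a debug echo as the first test (this reflects my understanding of the Claude Code settings schema; verify against the hooks reference):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "echo 'hook fired' >&2" }
        ]
      }
    ]
  }
}
```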
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#json-escaping-breaks-shell-commands","level":3,"title":"JSON Escaping Breaks Shell Commands","text":"
Go's json.Marshal escapes >, <, and & as Unicode sequences (\\u003e) by default. This breaks shell commands in generated config:
\"command\": \"ctx agent 2\\u003e/dev/null\"\n
Fix: use json.Encoder with SetEscapeHTML(false) when generating hook configuration.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#stdin-not-environment-variables","level":3,"title":"stdin, Not Environment Variables","text":"
Hook input arrives as JSON via stdin, not environment variables:
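A minimal sketch of reading the payload; `session_id` and `tool_name` are fields of the hook input, and `jq` is assumed available:

```shell
#!/bin/sh
# Hook input is JSON on stdin - pull fields out with jq,
# not from environment variables.
read_fields() {
  jq -r '[.session_id, .tool_name] | @tsv'
}

# Sample payload; a real hook would just call read_fields at the end.
printf '%s' '{"session_id":"abc123","tool_name":"Bash"}' | read_fields
```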
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#regex-overfitting","level":3,"title":"Regex Overfitting","text":"
A regex meant to catch ctx as a binary will also match ctx as a directory component:
# Too broad: blocks: git -C /home/jose/WORKSPACE/ctx status\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# Narrow to binary only:\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
Test hook regexes against paths that contain the target string as a substring, not just as the final component.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#repetition-fatigue","level":3,"title":"Repetition Fatigue","text":"
Injecting context on every tool call sounds safe. In practice, after seeing the same context injection fifteen times, the agent treats it as background noise: Conventions stated in the injected context get violated because salience has been destroyed by repetition.
Fix: cooldowns. ctx agent --session $PPID --cooldown 10m injects at most once per ten minutes per session using a tombstone file in /tmp/. This is not an optimization; it is a correction for a design flaw. Every injection consumes attention budget: 50 tool calls at 4,000 tokens each means 200,000 tokens of repeated context, most of it wasted.
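The cooldown mechanic can be sketched with a tombstone file and a freshness check. Paths and the 10-minute window are illustrative; the real flag handling lives inside ctx agent:

```shell
#!/bin/sh
# Per-session cooldown sketch: inject at most once per 10 minutes.
session="${PPID:-0}"
tomb="/tmp/ctx-cooldown-demo-$session"

# `find -mmin -10` prints the tombstone only if it was touched within
# the last 10 minutes; empty output means the cooldown has expired.
if [ -z "$(find "$tomb" -mmin -10 2>/dev/null)" ]; then
  touch "$tomb"
  echo "(context injection would run here)"
fi
```

Keying the tombstone on `$PPID` scopes the cooldown to the session rather than the machine.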
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#hardcoded-paths","level":3,"title":"Hardcoded Paths","text":"
A username rename (parallels to jose) broke every hook at once. Use $CLAUDE_PROJECT_DIR instead of absolute paths:
If the platform provides a runtime variable for paths, always use it.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#next-up","level":2,"title":"Next Up","text":"
Webhook Notifications →: Get push notifications when loops complete, hooks fire, or agents hit milestones.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#see-also","level":2,"title":"See Also","text":"
Customizing Hook Messages: override what hooks say without changing what they do
Claude Code Permission Hygiene: how permissions and hooks work together
Defense in Depth: why hooks matter for agent security
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/","level":1,"title":"Hook Sequence Diagrams","text":"","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#hook-lifecycle","level":2,"title":"Hook Lifecycle","text":"
Every ctx hook is a Go binary invoked by Claude Code at one of three lifecycle events: PreToolUse (before a tool runs, can block), PostToolUse (after a tool completes), or UserPromptSubmit (on every user prompt, before any tools run). Hooks receive JSON on stdin and emit JSON or plain text on stdout.
This page documents the execution flow of every hook as a sequence diagram.
Daily check for unimported sessions and unenriched journal entries.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-journal\n participant State as .context/state/\n participant Journal as Journal dir\n participant Claude as Claude projects dir\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Check dir exists\n Hook->>Claude: Check dir exists\n alt either dir missing\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Get newest entry mtime\n Hook->>Claude: Count .jsonl files newer than journal\n Hook->>Journal: Count unenriched entries\n alt unimported == 0 and unenriched == 0\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, variant, {counts})\n Note over Hook: variant: both | unimported | unenriched\n Hook-->>CC: Nudge box (counts)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
Per-session check for MEMORY.md changes since last sync.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-memory-drift\n participant State as .context/state/\n participant Mem as memory.Discover\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check session tombstone\n alt already nudged this session\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: DiscoverMemoryPath(projectRoot)\n alt auto memory not active\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: HasDrift(contextDir, sourcePath)\n alt no drift\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, fallback)\n Hook-->>CC: Nudge box (drift reminder)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch session tombstone
Tracks context file modification and nudges when edits happen without persisting context. Adaptive threshold based on prompt count.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-persistence\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Read persistence state {Count, LastNudge, LastMtime}\n alt first prompt (no state)\n Hook->>State: Initialize state {Count:1, LastNudge:0, LastMtime:now}\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Increment Count\n Hook->>Ctx: Get current context mtime\n alt context modified since LastMtime\n Hook->>State: Reset LastNudge = Count, update LastMtime\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: sinceNudge = Count - LastNudge\n Hook->>Hook: PersistenceNudgeNeeded(Count, sinceNudge)?\n alt threshold not reached\n Hook->>State: Write state\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, vars)\n Hook-->>CC: Nudge box (prompt count, time since last persist)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Update LastNudge = Count, write state
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-reminders\n participant Store as Reminders store\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>Store: ReadReminders()\n alt load error\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Filter by due date (After <= today)\n alt no due reminders\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, reminders, {list})\n Hook-->>CC: Nudge box (reminder list + dismiss hints)\n Hook->>Hook: NudgeAndRelay(message)
Silent per-prompt pulse. Tracks prompt count, context modification, and token usage. The agent never sees this hook's output.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as heartbeat\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Notify as Webhook + EventLog\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Increment heartbeat counter\n Hook->>Ctx: Get latest context file mtime\n Hook->>State: Compare with last recorded mtime\n Hook->>State: Update mtime record\n Hook->>State: Read session token info\n Hook->>Notify: Send heartbeat notification\n Hook->>Notify: Append to event log\n Hook->>State: Write heartbeat log entry\n Note over Hook: No stdout - agent never sees this
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-backup-age\n participant State as .context/state/\n participant FS as Filesystem\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>FS: Check SMB mount (if env var set)\n Hook->>FS: Check backup marker file age\n alt no warnings\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, warning, {Warnings})\n Hook-->>CC: Nudge box (warnings)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#throttling-summary","level":2,"title":"Throttling Summary","text":"Hook | Lifecycle | Throttle Type | Scope\ncontext-load-gate | PreToolUse | One-shot marker | Per session\nblock-non-path-ctx | PreToolUse | None | Every match\nqa-reminder | PreToolUse | None | Every git command\nspecs-nudge | PreToolUse | None | Every prompt\npost-commit | PostToolUse | None | Every git commit\ncheck-task-completion | PostToolUse | Configurable interval | Per session\ncheck-context-size | UserPromptSubmit | Adaptive counter | Per session\ncheck-ceremonies | UserPromptSubmit | Daily marker | Once per day\ncheck-freshness | UserPromptSubmit | Daily marker | Once per day\ncheck-journal | UserPromptSubmit | Daily marker | Once per day\ncheck-knowledge | UserPromptSubmit | Daily marker | Once per day\ncheck-map-staleness | UserPromptSubmit | Daily marker | Once per day\ncheck-memory-drift | UserPromptSubmit | Session tombstone | Once per session\ncheck-persistence | UserPromptSubmit | Adaptive counter | Per session\ncheck-reminders | UserPromptSubmit | None | Every prompt\ncheck-resources | UserPromptSubmit | None | Every prompt\ncheck-version | UserPromptSubmit | Daily marker | Once per day\nheartbeat | UserPromptSubmit | None | Every prompt\nblock-dangerous-commands | PreToolUse * | None | Every match\ncheck-backup-age | UserPromptSubmit * | Daily marker | Once per day
* Project-local hook (settings.local.json), not shipped with ctx.
Claude Code plan files (~/.claude/plans/*.md) are ephemeral: They have structured context, approach, and file lists, but they're orphaned after the session ends. The filenames are UUIDs, so you can't tell what's in them without opening each one.
How do you turn a useful plan into a permanent project spec?
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tldr","level":2,"title":"TL;DR","text":"
You: /ctx-import-plans\nAgent: [lists plans with dates and titles]\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\nYou: \"import 1\"\nAgent: [copies to specs/add-authentication-middleware.md]\n
Plans are copied (not moved) to specs/, slugified by their H1 heading.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool | Type | Purpose\n/ctx-import-plans | Skill | List, filter, and import plan files to specs\n/ctx-add-task | Skill | Optionally add a task referencing the spec","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-1-list-available-plans","level":3,"title":"Step 1: List Available Plans","text":"
Invoke the skill and it lists plans with modification dates and titles:
You: /ctx-import-plans\n\nAgent: Found 3 plan files:\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\n 3. 2026-02-25 Import plans skill\n Which plans would you like to import?\n
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-2-filter-optional","level":3,"title":"Step 2: Filter (Optional)","text":"
You can narrow the list with arguments:
Argument | Effect\n--today | Only plans modified today\n--since YYYY-MM-DD | Only plans modified on or after the date\n--all | Import everything without prompting\n(none) | Interactive selection
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-3-select-and-import","level":3,"title":"Step 3: Select and Import","text":"
Pick one or more plans by number:
You: \"import 1 and 3\"\n\nAgent: Imported 2 plan(s):\n ~/.claude/plans/abc123.md -> specs/add-authentication-middleware.md\n ~/.claude/plans/ghi789.md -> specs/import-plans-skill.md\n Want me to add tasks referencing these specs?\n
The agent reads the H1 heading from each plan and slugifies it for the filename. If a plan has no H1 heading, the original filename (minus extension) is used as the slug.
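Slugification here is the usual lowercase-and-hyphenate transform; a hypothetical sketch:

```shell
#!/bin/sh
# Hypothetical slugify step: H1 heading -> spec filename.
# "Add authentication middleware" -> "add-authentication-middleware"
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g'
}

slugify "Add authentication middleware"; echo
```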
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-4-add-follow-up-tasks-optional","level":3,"title":"Step 4: Add Follow-Up Tasks (Optional)","text":"
If you say yes, the agent creates tasks in TASKS.md that reference the imported specs:
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the exact skill name:
You say | What happens\n\"import my plans\" | /ctx-import-plans (interactive)\n\"save today's plans as specs\" | /ctx-import-plans --today\n\"import all plans from this week\" | /ctx-import-plans --since ...\n\"turn that plan into a spec\" | /ctx-import-plans (filtered)","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tips","level":2,"title":"Tips","text":"
Plans are copied, not moved: The originals stay in ~/.claude/plans/. Claude Code manages that directory; ctx doesn't delete from it.
Conflict handling: If specs/{slug}.md already exists, the agent asks whether to overwrite or pick a different name.
Specs are project memory: Once imported, specs are tracked in git and available to future sessions. Reference them from TASKS.md phase headers with Spec: specs/slug.md.
Pair with /ctx-implement: After importing a plan as a spec, use /ctx-implement to execute it step-by-step with verification.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-import-plans: full skill description
The Complete Session: where plan import fits in the session flow
Tracking Work Across Sessions: managing tasks that reference imported specs
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/knowledge-capture/","level":1,"title":"Persisting Decisions, Learnings, and Conventions","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-problem","level":2,"title":"The Problem","text":"
You debug a subtle issue, discover the root cause, and move on.
Three weeks later, a different session hits the same issue. The knowledge existed briefly in one session's memory but was never written down.
Architectural decisions suffer the same fate: you weigh trade-offs, pick an approach, and six sessions later the AI suggests the alternative you already rejected.
How do you make sure important context survives across sessions?
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tldr","level":2,"title":"TL;DR","text":"
/ctx-reflect # surface items worth persisting\n/ctx-add-decision \"Title\" # record with context/rationale/consequence\n/ctx-add-learning \"Title\" # record with context/lesson/application\n
Or just tell your agent: \"What have we learned this session?\"
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool | Type | Purpose\nctx add decision | Command | Record an architectural decision\nctx add learning | Command | Record a gotcha, tip, or lesson\nctx add convention | Command | Record a coding pattern or standard\nctx reindex | Command | Rebuild both quick-reference indices\nctx decision reindex | Command | Rebuild the DECISIONS.md index\nctx learning reindex | Command | Rebuild the LEARNINGS.md index\n/ctx-add-decision | Skill | AI-guided decision capture with validation\n/ctx-add-learning | Skill | AI-guided learning capture with validation\n/ctx-add-convention | Skill | AI-guided convention recording with placement\n/ctx-reflect | Skill | Surface items worth persisting at breakpoints","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-workflow","level":2,"title":"The Workflow","text":"
Decision, Learning, or Convention?
If you chose between alternatives, it is a decision.
If you discovered something surprising, it is a learning.
If you are codifying a repeated pattern, it is a convention.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-1-understand-what-to-persist","level":3,"title":"Step 1: Understand What to Persist","text":"
Three context files serve different purposes:
Decisions (DECISIONS.md) answer \"why is it this way?\" They record trade-offs between alternatives with structured fields: context, rationale, and consequences. Decisions prevent re-debating settled questions.
Learnings (LEARNINGS.md) answer \"what did we discover the hard way?\" They record gotchas and debugging insights specific to this project with structured fields: context, lesson, and application.
Learnings prevent repeating past mistakes.
Conventions (CONVENTIONS.md) answer \"how do we do things here?\" They record patterns and standards. No structured fields required: just a name, a rule, and an example. Conventions keep code consistent across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-2-record-decisions","level":3,"title":"Step 2: Record Decisions","text":"
Decisions have three structured fields: context, rationale, and consequences.
The easiest way to record one is to tell your agent:
You: \"Save that as a decision: we picked file-based cooldown\n tokens over env vars.\"\n\nAgent: \"Recorded in DECISIONS.md:\n - Decision: Use file-based cooldown tokens instead of env vars\n - Context: Hook subprocesses cannot persist env vars to parent shell\n - Rationale: File tokens survive across processes. Simpler than\n IPC. Cleanup is automatic via TTL.\n - Consequence: Tombstone files accumulate in /tmp. Cannot share\n state across machines.\"\n
The agent was present for the trade-off discussion. It fills in the structured fields from what it observed, which is hard to reconstruct later as a terminal one-liner.
When multiple options were considered, the agent includes rejected alternatives in the rationale automatically.
The /ctx-add-decision skill also guides you through the fields interactively. For quick decisions, it supports a Y-statement:
\"In the context of [situation], facing [constraint], we decided for [choice] and against [alternatives], to achieve [benefit], accepting that [trade-off].\"
CLI Command for Scripting and Automation
When no agent is in the loop (CI pipelines, shell scripts, hooks), use the CLI directly with structured flags:
ctx add decision \"Use file-based cooldown tokens instead of env vars\" \\\n --context \"Hook subprocesses cannot persist env vars to parent shell\" \\\n --rationale \"File tokens survive across processes. Simpler than IPC. Cleanup is automatic via TTL.\" \\\n --consequence \"Tombstone files accumulate in /tmp. Cannot share state across machines.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-3-record-learnings","level":3,"title":"Step 3: Record Learnings","text":"
Learnings have three structured fields: context, lesson, and application.
Again, your agent is the best tool for this. It was there when you discovered the gotcha and can express it more completely than a backslash-escaped terminal command:
You: \"That subprocess thing was painful. Save it as a learning.\"\n\nAgent: \"Added to LEARNINGS.md:\n - Learning: Claude Code hooks run in a subprocess\n - Context: Set env var in PreToolUse hook, but it was not visible\n in the main session\n - Lesson: Hook scripts execute in a child process. Env changes do\n not propagate to parent.\n - Application: Use tombstone files for hook-to-session\n communication. Never rely on hook env vars.\"\n
You said one sentence. The agent produced a structured entry with all three fields filled from the session context.
That is the point: the agent has the full conversation and can be far more expressive than what you would type at a command prompt.
The /ctx-add-learning skill applies three quality filters:
Could someone Google this in 5 minutes?
Is it specific to this codebase?
Did it take real effort to discover?
All three must pass.
Learnings capture principles and heuristics, not code snippets.
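The three quality filters above can be read as a single predicate (a hypothetical sketch of the checklist, not the skill's actual implementation):

```python
def should_persist_learning(googleable: bool, project_specific: bool, hard_won: bool) -> bool:
    """All three quality filters must pass for a learning to be recorded."""
    return (not googleable) and project_specific and hard_won

# A generic fact fails the first filter; a hard-won, project-specific gotcha passes.
print(should_persist_learning(googleable=True, project_specific=True, hard_won=True))   # False
print(should_persist_learning(googleable=False, project_specific=True, hard_won=True))  # True
```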
CLI Command for Scripting and Automation
When no agent is in the loop:
ctx add learning \"Claude Code hooks run in a subprocess\" \\\n --context \"Set env var in PreToolUse hook, but it was not visible in the main session\" \\\n --lesson \"Hook scripts execute in a child process. Env changes do not propagate to parent.\" \\\n --application \"Use tombstone files for hook-to-session communication. Never rely on hook env vars.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-4-record-conventions","level":3,"title":"Step 4: Record Conventions","text":"
Conventions are simpler: just a name, a rule, and optionally a section.
These are short enough that either approach works:
You: \"We've been using kebab-case for every CLI flag. Codify that.\"\n\nAgent: \"Added to CONVENTIONS.md under Naming:\n 'Use kebab-case for all CLI flag names.'\"\n
Or from the terminal:
ctx add convention \"Use kebab-case for all CLI flag names\" --section \"Naming\"\n
Conventions work best for rules that come up repeatedly. Codify a pattern the third time you see it, not the first.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-5-reindex-after-manual-edits","level":3,"title":"Step 5: Reindex After Manual Edits","text":"
DECISIONS.md and LEARNINGS.md maintain a quick-reference index at the top: a compact table of date and title for each entry. The index updates automatically via ctx add, but falls out of sync after hand edits.
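For orientation, a hypothetical sketch of that quick-reference index (dates invented for illustration; titles taken from the examples in this recipe):

```markdown
| Date       | Decision                                           |
|------------|----------------------------------------------------|
| 2025-01-10 | Use file-based cooldown tokens instead of env vars |
| 2025-01-14 | Use PostgreSQL over SQLite                         |
```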
ctx reindex\n
This single command regenerates both indices. You can also reindex individually with ctx decision reindex or ctx learning reindex.
Run reindex after any manual edit. The index lets AI tools scan all entries without reading the full file, which matters when token budgets are tight.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-6-use-ctx-reflect-to-surface-what-to-capture","level":3,"title":"Step 6: Use /ctx-reflect to Surface What to Capture","text":"
Keep It Conversational
/ctx-reflect is not the only way to trigger reflection.
Agents trained on the ctx playbook naturally surface persist-worthy items at breakpoints, even without invoking the skill explicitly.
A conversational prompt like \"anything worth saving?\" or \"let's wrap up\" can trigger the same review.
The skill provides a structured checklist, but the behavior is available through natural conversation.
At natural breakpoints (after completing a feature, fixing a bug, or before ending a session) use /ctx-reflect to identify items worth persisting.
/ctx-reflect\n
The skill walks through learnings, decisions, tasks, and session notes, skipping categories with nothing to report. The output includes specific commands for each suggested persist:
This session implemented file-based cooldown for `ctx agent` and\ndiscovered that hook subprocesses cannot set env vars in the parent.\n\nI'd suggest persisting:\n- **Learning**: Hook subprocesses cannot propagate env vars\n `ctx add learning \"...\" --context \"...\" --lesson \"...\" --application \"...\"`\n- **Decision**: File-based cooldown tokens over env vars\n `ctx add decision \"...\" --context \"...\" --rationale \"...\" --consequence \"...\"`\n\nWant me to persist any of these?\n
The skill always asks before persisting.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-7-the-conversational-approach","level":3,"title":"Step 7: The Conversational Approach","text":"
The commands shown in Steps 2 to 4 are precise, but you rarely need to type them yourself. When working with an agent that has loaded the ctx playbook, natural language triggers the same operations.
You ask, the agent acts:
You say What the agent does \"What have we learned this session?\" Reviews recent work, identifies learnings, offers to persist \"Save that as a decision\" Captures the trade-off you just discussed with structured fields \"Any gotchas worth remembering?\" Scans recent work for non-obvious lessons \"Record that convention\" Adds the pattern you just agreed on to CONVENTIONS.md \"Let's wrap up\" Runs through the reflect checklist conversationally
Example exchange:
You: \"What did we learn from that debugging session?\"\n\nAgent: \"We discovered three things worth persisting:\n 1. Hook subprocesses can't propagate env vars to the parent. We\n spent 20 minutes on this before switching to tombstone files.\n 2. Tombstone files need a TTL or they accumulate in /tmp.\n 3. The cooldown logic must check file mtime, not just existence.\n Want me to save these as learnings?\"\n\nYou: \"Yes, all of them.\"\n\nAgent: \"Done. Added all three to LEARNINGS.md with structured fields.\"\n
The agent can also be proactive without being asked. The playbook encourages agents to persist context at natural milestones.
You may see:
After fixing a tricky bug: \"I noticed this was a non-obvious issue. Want me to save it as a learning?\"
After choosing between approaches: \"We just made a trade-off between file-based tokens and IPC. Should I record this as a decision?\"
At session milestones: the agent checks what is worth persisting and offers a brief summary.
When agents run unattended (no human in the loop), they may persist directly:
\"I've added the subprocess env var gotcha to LEARNINGS.md so we don't hit it again.\"
\"Recorded the decision to use file-based tokens over env vars in DECISIONS.md.\"
\"Marked the cooldown task done and added a follow-up for TTL cleanup.\"
This is behavior by design.
The playbook's self-check prompt, \"If this session ended right now, would the next session know what happened?\" drives agents to persist early and often rather than waiting for explicit instructions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#command-line-approach-scripting-and-automation","level":3,"title":"Command-line Approach (Scripting and Automation)","text":"
# Decision: record the trade-off\nctx add decision \"Use PostgreSQL over SQLite\" \\\n --context \"Need concurrent multi-user access\" \\\n --rationale \"SQLite locks on writes; Postgres handles concurrency\" \\\n --consequence \"Requires a database server; team needs Postgres training\"\n\n# Learning: record the gotcha\nctx add learning \"SQL migrations must be idempotent\" \\\n --context \"Deploy failed when migration ran twice after rollback\" \\\n --lesson \"CREATE TABLE without IF NOT EXISTS fails on retry\" \\\n --application \"Always use IF NOT EXISTS guards in migrations\"\n\n# Convention: record the pattern\nctx add convention \"API handlers return structured errors\" --section \"API\"\n\n# Reindex after manual edits\nctx reindex\n\n# Reflect at breakpoints (from AI assistant)\n# /ctx-reflect\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#conversational-approach-interactive-sessions","level":3,"title":"Conversational Approach (Interactive Sessions)","text":"
You: \"We just picked Postgres over SQLite for the concurrency\n reasons. Save that?\"\n\nAgent: \"Recorded. Added to DECISIONS.md:\n - Decision: Use PostgreSQL over SQLite\n - Context: Need concurrent multi-user access\n - Rationale: SQLite locks on writes; Postgres handles concurrency\n - Consequence: Requires a database server; team needs Postgres\n training\"\n\nYou: \"Also that migration thing. The deploy blew up when the\n migration ran twice.\"\n\nAgent: \"Got it. Added to LEARNINGS.md:\n - Learning: SQL migrations must be idempotent\n - Lesson: CREATE TABLE without IF NOT EXISTS fails on retry\n - Application: Always use IF NOT EXISTS guards in migrations\"\n\nYou: \"Good. Anything else worth saving before we wrap up?\"\n\nAgent: \"We've been using structured error returns in every API\n handler this session. Want me to codify that as a convention?\"\n\nYou: \"Do it.\"\n\nAgent: \"Done. Added 'API handlers return structured errors' to\n CONVENTIONS.md under the API section.\"\n
Both approaches produce the same structured entries in the same context files.
The conversational approach is the natural fit for interactive sessions;
the CLI commands are better suited for scripts, hooks, and automation pipelines.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tips","level":2,"title":"Tips","text":"
Record decisions at the moment of choice. The alternatives you considered and the reasons you rejected them fade quickly. Capture trade-offs while they are fresh.
Learnings should fail the Google test. If someone could find it in a 5-minute Google search, it does not belong in LEARNINGS.md.
Conventions earn their place through repetition. Add a convention the third time you see a pattern, not the first.
Use /ctx-reflect at natural breakpoints. The checklist catches items you might otherwise lose.
Keep the entries self-contained. Each entry should make sense on its own. A future session may load only one due to token budget constraints.
Reindex after every hand edit. It takes less than a second. A stale index causes AI tools to miss entries.
Prefer the structured fields. The verbosity forces clarity. A decision without a rationale is just a fact. A learning without an application is just a story.
Talk to your agent, do not type commands. In interactive sessions, the conversational approach is the recommended way to capture knowledge. Say \"save that as a learning\" or \"any decisions worth recording?\" and let the agent handle the structured fields. Reserve the CLI commands for scripting, automation, and CI/CD pipelines where there is no agent in the loop.
Trust the agent's proactive instincts. Agents trained on the ctx playbook will offer to persist context at milestones. A brief \"want me to save this?\" is cheaper than re-discovering the same lesson three sessions later.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#next-up","level":2,"title":"Next Up","text":"
Tracking Work Across Sessions →: Add, prioritize, complete, and archive tasks across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: managing the tasks that decisions and learnings support
The Complete Session: full session lifecycle including reflection and context persistence
Detecting and Fixing Drift: keeping knowledge files accurate as the codebase evolves
CLI Reference: full documentation for ctx add, ctx decision, ctx learning
Context Files: format and conventions for DECISIONS.md, LEARNINGS.md, and CONVENTIONS.md
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/memory-bridge/","level":1,"title":"Bridging Claude Code Auto Memory","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#the-problem","level":2,"title":"The Problem","text":"
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This file is:
Outside the repo - not version-controlled, not portable
Machine-specific - tied to one ~/.claude/ directory
Invisible to ctx - context loading and hooks don't read it
Meanwhile, ctx maintains structured context files (DECISIONS.md, LEARNINGS.md, CONVENTIONS.md) that are git-tracked, portable, and token-budgeted - but Claude Code doesn't automatically write to them.
The two systems hold complementary knowledge with no bridge between them.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#tldr","level":2,"title":"TL;DR","text":"
ctx memory sync # Mirror MEMORY.md into .context/memory/mirror.md\nctx memory status # Check for drift\nctx memory diff # See what changed since last sync\n
The check-memory-drift hook nudges automatically when MEMORY.md changes - you don't need to remember to sync manually.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx memory sync CLI command Copy MEMORY.md to mirror, archive previous ctx memory status CLI command Show drift, timestamps, line counts ctx memory diff CLI command Show changes since last sync ctx memory import CLI command Classify and promote entries to .context/ files ctx memory publish CLI command Push curated .context/ content to MEMORY.md ctx memory unpublish CLI command Remove published block from MEMORY.md ctx system check-memory-drift Hook Nudge when MEMORY.md has changed (once/session)","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#how-it-works","level":2,"title":"How It Works","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#discovery","level":3,"title":"Discovery","text":"
Claude Code encodes project paths as directory names under ~/.claude/projects/. The encoding replaces each / with - and prefixes the result with -.
ctx memory uses this encoding to locate MEMORY.md automatically from your project root - no configuration needed.
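Based on the description above (every / becomes -, with a leading -), the encoding can be sketched as follows; this is an illustration, not ctx's actual code:

```python
def claude_project_slug(project_path: str) -> str:
    """Encode an absolute project path the way Claude Code names project dirs:
    each '/' becomes '-', which leaves the result prefixed with '-'."""
    return project_path.replace("/", "-")

# '/home/you/myproject' -> '-home-you-myproject'
print(claude_project_slug("/home/you/myproject"))
```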
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#mirroring","level":3,"title":"Mirroring","text":"
When you run ctx memory sync:
The previous mirror is archived to .context/memory/archive/mirror-<timestamp>.md
MEMORY.md is copied to .context/memory/mirror.md
Sync state is updated in .context/state/memory-import.json
The mirror is git-tracked, so it travels with the project. Archives provide a fallback for projects that don't use git.
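Conceptually, the sync steps above amount to the following (a Python sketch with assumed paths; the real command also updates .context/state/memory-import.json):

```python
import os
import shutil
import time

def sync_mirror(memory_md: str, context_dir: str) -> str:
    """Sketch of `ctx memory sync`: archive the old mirror, then copy MEMORY.md."""
    mirror = os.path.join(context_dir, "memory", "mirror.md")
    archive_dir = os.path.join(context_dir, "memory", "archive")
    os.makedirs(archive_dir, exist_ok=True)
    # 1. Archive the previous mirror, if one exists
    if os.path.exists(mirror):
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.move(mirror, os.path.join(archive_dir, f"mirror-{stamp}.md"))
    # 2. Copy MEMORY.md to the mirror (preserving mtime for drift checks)
    shutil.copy2(memory_md, mirror)
    return mirror
```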
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#drift-detection","level":3,"title":"Drift Detection","text":"
The check-memory-drift hook compares MEMORY.md's modification time against the mirror. When drift is detected, the agent sees:
┌─ Memory Drift ────────────────────────────────────────────────\n│ MEMORY.md has changed since last sync.\n│ Run: ctx memory sync\n│ Context: .context\n└────────────────────────────────────────────────────────────────\n
The nudge fires once per session to avoid noise.
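The mtime comparison the hook performs can be sketched like this (illustrative only):

```python
import os

def memory_has_drift(memory_md: str, mirror_md: str) -> bool:
    """Drift: MEMORY.md was modified more recently than the last sync's mirror."""
    if not os.path.exists(mirror_md):
        return True  # never synced yet
    return os.path.getmtime(memory_md) > os.path.getmtime(mirror_md)
```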
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#typical-workflow","level":2,"title":"Typical Workflow","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#at-session-start","level":3,"title":"At Session Start","text":"
If the hook fires a drift nudge, sync before diving into work:
ctx memory diff # Review what changed\nctx memory sync # Mirror the changes\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#periodic-check","level":3,"title":"Periodic Check","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#dry-run","level":3,"title":"Dry Run","text":"
Preview what sync would do without writing:
ctx memory sync --dry-run\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#storage-layout","level":2,"title":"Storage Layout","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#edge-cases","level":2,"title":"Edge Cases","text":"Scenario Behavior Auto memory not active sync exits 1 with message. status reports \"not active\". Hook skips silently. First sync (no mirror) Creates mirror without archiving. MEMORY.md is empty Syncs to empty mirror (valid). Not initialized Init guard rejects (same as all ctx commands).","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#importing-entries","level":2,"title":"Importing Entries","text":"
Once you've synced, you can classify and promote entries into structured .context/ files:
Keywords Target always use, prefer, never use, standard CONVENTIONS.md decided, chose, trade-off, approach DECISIONS.md gotcha, learned, watch out, bug, caveat LEARNINGS.md todo, need to, follow up TASKS.md Everything else Skipped
Entries that don't match any pattern are skipped - they stay in the mirror for manual review. Deduplication (hash-based) prevents re-importing the same entry on subsequent runs.
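The keyword heuristic in the table above can be sketched as a first-match classifier. This is a sketch under stated assumptions (case-insensitive substring matching, SHA-256 for the dedup hash); the real classifier may differ:

```python
import hashlib

RULES = [
    (("always use", "prefer", "never use", "standard"), "CONVENTIONS.md"),
    (("decided", "chose", "trade-off", "approach"), "DECISIONS.md"),
    (("gotcha", "learned", "watch out", "bug", "caveat"), "LEARNINGS.md"),
    (("todo", "need to", "follow up"), "TASKS.md"),
]

def classify(entry: str):
    """Return the target context file for an entry, or None to skip it."""
    text = entry.lower()
    for keywords, target in RULES:
        if any(k in text for k in keywords):
            return target
    return None  # stays in the mirror for manual review

def entry_hash(entry: str) -> str:
    """Hash for deduplication, so re-running import skips known entries (algorithm assumed)."""
    return hashlib.sha256(entry.strip().encode()).hexdigest()
```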
Review Before Importing
Use --dry-run first. The heuristic classifier is deliberately simple - it may misclassify ambiguous entries. Review the plan, then import.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-workflow","level":3,"title":"Full Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Preview what would be imported\nctx memory import # 3. Promote entries to .context/ files\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#publishing-context-to-memorymd","level":2,"title":"Publishing Context to MEMORY.md","text":"
Push curated .context/ content back into MEMORY.md so Claude Code sees structured project context on session start - without needing hooks.
ctx memory publish --dry-run # Preview what would be published\nctx memory publish # Write to MEMORY.md\nctx memory publish --budget 40 # Tighter line budget\n
ctx memory publish replaces content only inside its marker block in MEMORY.md; everything outside the markers is left untouched.
To remove the published block entirely:
ctx memory unpublish\n
Publish at Wrap-Up, Not on Commit
The best time to publish is during session wrap-up, after persisting decisions and learnings. Never auto-publish - give yourself a chance to review what's going into MEMORY.md.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-bidirectional-workflow","level":3,"title":"Full Bidirectional Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Check what Claude wrote\nctx memory import # 3. Promote entries to .context/\nctx memory publish --dry-run # 4. Check what would be published\nctx memory publish # 5. Push context to MEMORY.md\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/multi-tool-setup/","level":1,"title":"Setup Across AI Tools","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-problem","level":2,"title":"The Problem","text":"
You have installed ctx and want to set it up with your AI coding assistant so that context persists across sessions. Different tools have different integration depths. For example:
Claude Code supports native hooks that load and save context automatically.
Cursor injects context via its system prompt.
Aider reads context files through its --read flag.
This recipe walks through the complete setup for each tool, from initialization through verification, so you end up with a working memory layer regardless of which AI tool you use.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tldr","level":2,"title":"TL;DR","text":"
Create a .ctxrc in your project root to configure token budgets, context directory, drift thresholds, and more.
Then start your AI tool and ask: \"Do you remember?\"
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Create .context/ directory, templates, and permissions ctx setup Generate integration configuration for a specific AI tool ctx agent Print a token-budgeted context packet for AI consumption ctx load Output assembled context in read order (for manual pasting) ctx watch Auto-apply context updates from AI output (non-native tools) ctx completion Generate shell autocompletion for bash, zsh, or fish ctx journal import Import sessions to editable journal Markdown","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-1-initialize-ctx","level":3,"title":"Step 1: Initialize ctx","text":"
Run ctx init in your project root. This creates the .context/ directory with all template files and seeds ctx permissions in settings.local.json.
cd your-project\nctx init\n
This produces the following structure:
.context/\n CONSTITUTION.md # Hard rules the AI must never violate\n TASKS.md # Current and planned work\n CONVENTIONS.md # Code patterns and standards\n ARCHITECTURE.md # System overview\n DECISIONS.md # Architectural decisions with rationale\n LEARNINGS.md # Lessons learned, gotchas, tips\n GLOSSARY.md # Domain terms and abbreviations\n AGENT_PLAYBOOK.md # How AI tools should use this system\n
Using a Different .context Directory
The .context/ directory doesn't have to live inside your project. You can point ctx to an external folder via .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
This is useful for monorepos or shared context across repositories.
See Configuration for details and External Context for a full recipe.
For Claude Code, install the ctx plugin to get hooks and skills:
claude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
If you only need the core files (useful for lightweight setups), use the --minimal flag:
ctx init --minimal\n
This creates only TASKS.md, DECISIONS.md, and CONSTITUTION.md.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-2-generate-tool-specific-hooks","level":3,"title":"Step 2: Generate Tool-Specific Hooks","text":"
If you are using a tool other than Claude Code (which is configured automatically by ctx init), generate its integration configuration:
# For Cursor\nctx setup cursor\n\n# For Aider\nctx setup aider\n\n# For GitHub Copilot\nctx setup copilot\n\n# For Windsurf\nctx setup windsurf\n
Each command prints the configuration you need. How you apply it depends on the tool.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#claude-code","level":4,"title":"Claude Code","text":"
No action needed. Just install ctx from the Marketplace as ActiveMemory/ctx.
Claude Code is a First-Class Citizen
With the ctx plugin installed, Claude Code gets hooks and skills automatically. The PreToolUse hook runs ctx agent --budget 4000 on every tool call (with a 10-minute cooldown so it only fires once per window).
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#cursor","level":4,"title":"Cursor","text":"
Add the system prompt snippet to .cursor/settings.json:
{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and .context/CONVENTIONS.md before responding. Follow rules in .context/CONSTITUTION.md.\"\n}\n
Context files appear in Cursor's file tree. You can also paste a context packet directly into chat: ctx agent --budget 4000 | pbcopy (macOS).
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#aider","level":4,"title":"Aider","text":"
Create .aider.conf.yml so context files are loaded on every session:
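A minimal sketch of such a config, assuming Aider's standard read: option and the default .context/ paths (adjust the file list to taste):

```yaml
# .aider.conf.yml (sketch): load ctx context files read-only in every session
read:
  - .context/CONSTITUTION.md
  - .context/TASKS.md
  - .context/CONVENTIONS.md
  - .context/LEARNINGS.md
```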
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-3-set-up-shell-completion","level":3,"title":"Step 3: Set Up Shell Completion","text":"
Shell completion lets you tab-complete ctx subcommands and flags, which is especially useful while learning the CLI.
# Bash (add to ~/.bashrc)\nsource <(ctx completion bash)\n\n# Zsh (add to ~/.zshrc)\nsource <(ctx completion zsh)\n\n# Fish\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
After sourcing, typing ctx a<TAB> completes to ctx agent, and ctx journal <TAB> shows list, show, and export.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-4-verify-the-setup-works","level":3,"title":"Step 4: Verify the Setup Works","text":"
Start a fresh session in your AI tool and ask:
\"Do you remember?\"
A correctly configured tool responds with specific context: current tasks from TASKS.md, recent decisions, and previous session topics. It should not say \"I don't have memory\" or \"Let me search for files.\"
This question checks the passive side of memory. A properly set-up agent is also proactive: it treats context maintenance as part of its job:
After a debugging session, it offers to save a learning.
After a trade-off discussion, it asks whether to record the decision.
After completing a task, it suggests follow-up items.
The \"do you remember?\" check verifies both halves: recall and responsibility.
For example, after resolving a tricky bug, a proactive agent might say:
That Redis timeout issue was subtle. Want me to save this as a *learning*\nso we don't hit it again?\n
If you see behavior like this, the setup is working end to end.
In Claude Code, you can also invoke the /ctx-status skill:
/ctx-status\n
This prints a summary of all context files, token counts, and recent activity, confirming that hooks are loading context.
If context is not loading, check the basics:
Symptom Fix ctx: command not found Ensure ctx is in your PATH: which ctx Hook errors Verify plugin is installed: claude /plugin list Context not refreshing Cooldown may be active; wait 10 minutes or set --cooldown 0","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-5-enable-watch-mode-for-non-native-tools","level":3,"title":"Step 5: Enable Watch Mode for Non-Native Tools","text":"
Tools like Aider, Copilot, and Windsurf do not support native hooks for saving context automatically. For these, run ctx watch alongside your AI tool.
Pipe the AI tool's output through ctx watch:
# Terminal 1: Run Aider with output logged\naider 2>&1 | tee /tmp/aider.log\n\n# Terminal 2: Watch the log for context updates\nctx watch --log /tmp/aider.log\n
Or for any generic tool:
your-ai-tool 2>&1 | tee /tmp/ai.log &\nctx watch --log /tmp/ai.log\n
When the AI emits structured update commands, ctx watch parses and applies them automatically:
<context-update type=\"learning\"\n context=\"Debugging rate limiter\"\n lesson=\"Redis MULTI/EXEC does not roll back on error\"\n application=\"Wrap rate-limit checks in Lua scripts instead\"\n>Redis Transaction Behavior</context-update>\n
To preview changes without modifying files:
ctx watch --dry-run --log /tmp/ai.log\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-6-import-session-transcripts-optional","level":3,"title":"Step 6: Import Session Transcripts (Optional)","text":"
If you want to browse past session transcripts, import them to the journal:
ctx journal import --all\n
This converts raw session data into editable Markdown files in .context/journal/. You can then enrich them with metadata using /ctx-journal-enrich-all inside your AI assistant.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Here is the condensed setup for all three tools:
# ## Common (run once per project) ##\ncd your-project\nctx init\nsource <(ctx completion zsh) # or bash/fish\n\n# ## Claude Code (automatic, just verify) ##\n# Start Claude Code, then ask: \"Do you remember?\"\n\n# ## Cursor ##\nctx setup cursor\n# Add the system prompt to .cursor/settings.json\n# Paste context: ctx agent --budget 4000 | pbcopy\n\n# ## Aider ##\nctx setup aider\n# Create .aider.conf.yml with read: paths\n# Run watch mode alongside: ctx watch --log /tmp/aider.log\n\n# ## Verify any Tool ##\n# Ask your AI: \"Do you remember?\"\n# Expect: specific tasks, decisions, recent context\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tips","level":2,"title":"Tips","text":"
Start with ctx init (not --minimal) for your first project. The full template set gives the agent more to work with, and you can always delete files later.
For Claude Code, the token budget is configured in the plugin's hooks.json. To customize, adjust the --budget flag in the ctx agent hook command.
The --session $PPID flag isolates cooldowns per Claude Code process, so parallel sessions do not suppress each other.
Commit your .context/ directory to version control. Several ctx features (journals, changelogs, blog generation) rely on git history.
For Cursor and Copilot, keep CONVENTIONS.md visible. These tools treat open files as higher-priority context.
Run ctx drift periodically to catch stale references before they confuse the agent.
The agent playbook instructs the agent to persist context at natural milestones (completed tasks, decisions, gotchas). In practice, this works best when you reinforce the habit: a quick \"anything worth saving?\" after a debugging session goes a long way.
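For the token-budget tip above, a sketch of where the --budget flag lives. This assumes the standard Claude Code hooks schema; the event name and nesting are illustrative, not copied from the plugin's actual hooks.json:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "ctx agent --budget 6000 --session $PPID" }
        ]
      }
    ]
  }
}
```

Adjust the number after --budget to control how many tokens of context the hook injects.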
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#companion-tools-highly-recommended","level":2,"title":"Companion Tools (Highly Recommended)","text":"
ctx skills can leverage external MCP servers for web search and code intelligence. ctx works without them, but they significantly improve agent behavior across sessions — the investment is small and the benefits compound. Skills like /ctx-code-review, /ctx-explain, and /ctx-refactor all become noticeably better with these tools connected.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gemini-search","level":3,"title":"Gemini Search","text":"
Provides grounded web search with citations. Used by skills and the agent playbook as the preferred search backend (faster and more accurate than built-in web search).
Setup: Add the Gemini Search MCP server to your Claude Code settings. See the Gemini Search MCP documentation for installation.
Verification:
# The agent checks this automatically during /ctx-remember\n# Manual test: ask the agent to search for something\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gitnexus","level":3,"title":"GitNexus","text":"
Provides a code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Used by skills like /ctx-refactor (impact analysis) and /ctx-code-review (dependency awareness).
Setup: Add the GitNexus MCP server to your Claude Code settings, then index your project:
npx gitnexus analyze\n
Verification:
# The agent checks this automatically during /ctx-remember\n# If the index is stale, it will suggest rehydrating\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#suppressing-the-check","level":3,"title":"Suppressing the Check","text":"
If you don't use companion tools and want to skip the availability check at session start, add to .ctxrc:
companion_check: false\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#future-direction","level":3,"title":"Future Direction","text":"
The companion tool integration is evolving toward a pluggable model: bring your own search engine, bring your own code intelligence. The current integration is MCP-based and limited to Gemini Search and GitNexus. If you use a different search or code intelligence tool, skills will degrade gracefully to built-in capabilities.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#next-up","level":2,"title":"Next Up","text":"
Keeping Context in a Separate Repo →: Store context files outside the project tree for multi-repo or open source setups.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle recipe
Multilingual Session Parsing: configure session header prefixes for other languages
CLI Reference: all commands and flags
Integrations: detailed per-tool integration docs
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multilingual-sessions/","level":1,"title":"Multilingual Session Parsing","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#the-problem","level":2,"title":"The Problem","text":"
Your team works across languages. Session files written by AI tools might use headers like # Oturum: 2026-01-15 - API Düzeltme (Turkish) or # セッション: 2026-01-15 - テスト (Japanese) instead of # Session: 2026-01-15 - Fix API.
By default, ctx only recognizes Session: as a session header prefix. Files with other prefixes are silently skipped during journal import and journal generation: They look like regular Markdown, not sessions. The fix is to declare the prefixes your team uses in .ctxrc:
session_prefixes:\n - \"Session:\" # English (include to keep default)\n - \"Oturum:\" # Turkish\n - \"セッション:\" # Japanese\n
Restart your session. All configured prefixes are now recognized.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#how-it-works","level":2,"title":"How It Works","text":"
The Markdown session parser detects session files by looking for an H1 header that starts with a known prefix followed by a date:
# Session: 2026-01-15 - Fix API Rate Limiting\n# Oturum: 2026-01-15 - API Düzeltme\n# セッション: 2026-01-15 - テスト\n
The list of recognized prefixes comes from session_prefixes in .ctxrc. When the key is absent or empty, ctx falls back to the built-in default: [\"Session:\"].
Date-only headers (# 2026-01-15 - Morning Work) are always recognized regardless of prefix configuration.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#configuration","level":2,"title":"Configuration","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#adding-a-language","level":3,"title":"Adding a language","text":"
Add the prefix with a trailing colon to your .ctxrc:
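For example, to recognize German headers while keeping the English default (the prefix is taken from the common prefixes table below):

```yaml
session_prefixes:
  - "Session:"  # include to keep the English default
  - "Sitzung:"  # German
```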
When you override session_prefixes, the default is replaced, not extended. If you still want English headers recognized, include \"Session:\" in your list.
Commit .ctxrc to the repo so all team members share the same prefix list. This ensures ctx journal import and journal generation pick up sessions from all team members regardless of language.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#common-prefixes","level":3,"title":"Common prefixes","text":"Language Prefix English Session: Turkish Oturum: Spanish Sesión: French Session: German Sitzung: Japanese セッション: Korean 세션: Portuguese Sessão: Chinese 会话:","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#verifying","level":3,"title":"Verifying","text":"
After configuring, test with ctx journal source. Sessions with the new prefixes should appear in the output.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#what-this-does-not-do","level":2,"title":"What This Does NOT Do","text":"
Change the interface language: ctx output is always English. This setting only controls which session files ctx can parse.
Generate headers: ctx never writes session headers. The prefix list is recognition-only (input, not output).
Affect JSONL sessions: Claude Code JSONL transcripts don't use header prefixes. This only applies to Markdown session files in .context/sessions/.
See also: Setup Across AI Tools - complete multi-tool setup including Markdown session configuration.
See also: CLI Reference - full .ctxrc field reference including session_prefixes.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/parallel-worktrees/","level":1,"title":"Parallel Agent Development with Git Worktrees","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-problem","level":2,"title":"The Problem","text":"
You have a large backlog (10, 20, 30 open tasks) and many of them are independent: docs work that doesn't touch Go code, a new package that doesn't overlap with existing ones, test coverage for a stable module.
Running one agent at a time means serial execution. You want 3-4 agents working in parallel, each on its own track, without stepping on each other's files.
Git worktrees solve this.
Each worktree is a separate working directory with its own branch, but they share the same .git object database. Combined with ctx's persistent context, each agent session picks up the full project state and works independently.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tldr","level":2,"title":"TL;DR","text":"
/ctx-worktree # 1. group tasks by file overlap\ngit worktree add ../myproject-docs -b work/docs # 2. create worktrees\ncd ../myproject-docs && claude # 3. launch agents (one per track)\n/ctx-worktree teardown docs # 4. merge back and clean up\n
TASKS.md will conflict on merge: Accept all [x] completions from both sides.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-worktree Skill Create, list, and tear down worktrees /ctx-next Skill Pick tasks from the backlog for each track git worktree Command Underlying git worktree management git merge Command Merge completed tracks back to main","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-1-assess-the-backlog","level":3,"title":"Step 1: Assess the Backlog","text":"
Start in your main checkout. Ask the agent to analyze your tasks and group them by blast radius: which files and directories each task touches.
/ctx-worktree\nLook at TASKS.md and group the pending tasks into 2-3 independent\ntracks based on which files they'd touch. Show me the grouping\nbefore creating anything.\n
The agent reads TASKS.md, estimates file overlap, and proposes groups:
Proposed worktree groups:\n\n work/docs # recipe updates, blog post (touches: docs/)\n work/crypto # scratchpad encryption infra (touches: internal/crypto/)\n work/tests # journal test coverage (touches: internal/cli/journal/)\n
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-2-create-the-worktrees","level":3,"title":"Step 2: Create the Worktrees","text":"
Once you approve the grouping, the agent creates the worktrees as sibling directories of your main checkout, each a full working copy on its own work/* branch.
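Under the hood these are plain git worktree commands. A self-contained sketch using a scratch repository (the project and track names follow the example grouping above and are illustrative):

```shell
set -e
# Scratch repo standing in for your main checkout
repo=$(mktemp -d)/myproject
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per approved track, as sibling directories on work/* branches
git worktree add ../myproject-docs   -b work/docs
git worktree add ../myproject-crypto -b work/crypto
git worktree add ../myproject-tests  -b work/tests

git worktree list   # main tree plus the three track worktrees
```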
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-3-launch-agents","level":3,"title":"Step 3: Launch Agents","text":"
Open a separate terminal (or editor window) for each worktree and start a Claude Code session, e.g. cd ../myproject-docs && claude.
Each agent sees the full project, including .context/, and can work independently.
Do Not Initialize Context in Worktrees
Do not run ctx init in worktrees: The .context directory is already tracked in git.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-4-work","level":3,"title":"Step 4: Work","text":"
Each agent works through its assigned tasks. They can read TASKS.md to know what's assigned to their track, use /ctx-next to pick the next item, and commit normally on their work/* branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-5-merge-back","level":3,"title":"Step 5: Merge Back","text":"
As each track finishes, return to the main checkout and merge:
/ctx-worktree teardown docs\n
The agent checks for uncommitted changes, merges work/docs into your current branch, removes the worktree, and deletes the branch.
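The teardown corresponds to three git operations, shown here in a self-contained scratch repo (names illustrative; the skill additionally checks for uncommitted changes first):

```shell
set -e
# Scratch setup: a repo with one finished track in a worktree
repo=$(mktemp -d)/myproject
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git worktree add ../myproject-docs -b work/docs
git -C ../myproject-docs -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "docs track work"

# The three operations behind the teardown:
git merge -q work/docs                 # bring the track into the current branch
git worktree remove ../myproject-docs  # delete the worktree directory
git branch -d work/docs                # safe delete: refuses if unmerged
```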
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-6-handle-tasksmd-conflicts","level":3,"title":"Step 6: Handle TASKS.md Conflicts","text":"
TASKS.md will almost always conflict when merging: Multiple agents will mark different tasks as [x]. This is expected and easy to resolve:
Accept all completions from both sides. No task should go from [x] back to [ ]. The merge resolution is always additive.
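For example, if the docs and tests tracks each completed their own task (task IDs and names are illustrative), the conflicted region and its additive resolution look like this:

```
<<<<<<< HEAD
- [x] T12: Update recipe docs
- [ ] T15: Add journal test coverage
=======
- [ ] T12: Update recipe docs
- [x] T15: Add journal test coverage
>>>>>>> work/tests

Resolved (keep every completion):
- [x] T12: Update recipe docs
- [x] T15: Add journal test coverage
```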
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-7-cleanup","level":3,"title":"Step 7: Cleanup","text":"
After all tracks are merged, verify everything is clean:
/ctx-worktree list\n
Should show only the main working tree. All work/* branches should be gone.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't have to use the skill directly for every step. These natural prompts work:
\"I have a big backlog. Can we split it across worktrees?\"
\"Which of these tasks can run in parallel without conflicts?\"
\"Merge the docs track back in.\"
\"Clean up all the worktrees, we're done.\"
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#what-works-differently-in-worktrees","level":2,"title":"What Works Differently in Worktrees","text":"
The encryption key lives at ~/.ctx/.ctx.key (user-level, outside the project). Because all worktrees on the same machine share this path, ctx pad and ctx notify work in worktrees automatically - no special setup needed.
One thing to watch:
Journal enrichment: ctx journal import and ctx journal enrich write files relative to the current working directory. Enrichments created in a worktree stay there and are discarded on teardown. Enrich journals on the main branch after merging: the JSONL session logs are always intact, and you don't lose any data.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tips","level":2,"title":"Tips","text":"
3-4 worktrees max. Beyond that, merge complexity outweighs the parallelism benefit. The skill enforces this limit.
Group by package or directory, not by priority. Two high-priority tasks that touch the same files must be in the same track.
TASKS.md will conflict on merge. This is normal. Accept all [x] completions: The resolution is always additive.
Don't run ctx init in worktrees. The .context/ directory is tracked in git. Running init overwrites shared context files.
Name worktrees by concern, not by number. work/docs and work/crypto are more useful than work/track-1 and work/track-2.
Commit frequently in each worktree. Smaller commits make merge conflicts easier to resolve.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#next-up","level":2,"title":"Next Up","text":"
Back to the beginning: Guide Your Agent →
Or explore the full recipe list.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#see-also","level":2,"title":"See Also","text":"
Running an Unattended AI Agent: for serial autonomous loops instead of parallel tracks
Tracking Work Across Sessions: managing the task backlog that feeds into parallelization
The Complete Session: the complete session workflow end-to-end, with examples
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/permission-snapshots/","level":1,"title":"Permission Snapshots","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#the-problem","level":2,"title":"The Problem","text":"
Claude Code's .claude/settings.local.json accumulates one-off permissions every time you click "Allow". After busy sessions the file is full of session-specific entries that expand the agent's permission surface beyond what you intended.
Since settings.local.json is .gitignored, there is no PR review or CI check. The file drifts independently on every machine, and there is no built-in way to reset to a known-good state.
/ctx-sanitize-permissions # audit for dangerous patterns\nctx permission snapshot # save golden image\n# ... sessions accumulate cruft ...\nctx permission restore # reset to golden state\n
Save a curated settings.local.json as a golden image, then restore from it to drop session-accumulated permissions. The golden file (.claude/settings.golden.json) is committed to version control and shared with the team.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx permission snapshot Save settings.local.json as golden image ctx permission restore Reset settings.local.json from golden image /ctx-sanitize-permissions Audit for dangerous patterns before snapshotting","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#step-by-step","level":2,"title":"Step by Step","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#1-curate-your-permissions","level":3,"title":"1. Curate Your Permissions","text":"
Start with a clean settings.local.json. Optionally run /ctx-sanitize-permissions to remove dangerous patterns first.
Review the file manually. Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
See the Permission Hygiene recipe for recommended defaults.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#2-take-a-snapshot","level":3,"title":"2. Take a Snapshot","text":"
ctx permission snapshot\n# Saved golden image: .claude/settings.golden.json\n
This creates a byte-for-byte copy. No re-encoding, no indent changes.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#3-commit-the-golden-file","level":3,"title":"3. Commit the Golden File","text":"
git add .claude/settings.golden.json\ngit commit -m \"Add permission golden image\"\n
The golden file is not gitignored (unlike settings.local.json). This is intentional: it becomes a team-shared baseline.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#4-auto-restore-at-the-session-start","level":3,"title":"4. Auto-Restore at the Session Start","text":"
Add this instruction to your CLAUDE.md:
## On Session Start\n\nRun `ctx permission restore` to reset permissions to the golden image.\n
The agent will restore the golden image at the start of every session, automatically dropping any permissions accumulated during previous sessions.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#5-update-when-intentional-changes-are-made","level":3,"title":"5. Update When Intentional Changes Are Made","text":"
When you add a new permanent permission (not a one-off debugging entry):
# Edit settings.local.json with the new permission\n# Then update the golden image:\nctx permission snapshot\ngit add .claude/settings.golden.json\ngit commit -m \"Update permission golden image: add cargo test\"\n
You don't need to remember exact commands. These natural-language prompts work with agents trained on the ctx playbook:
What you say What happens \"Save my current permissions as baseline\" Agent runs ctx permission snapshot \"Reset permissions to the golden image\" Agent runs ctx permission restore \"Clean up my permissions\" Agent runs /ctx-sanitize-permissions then snapshot \"What permissions did I accumulate?\" Agent diffs local vs golden","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#next-up","level":2,"title":"Next Up","text":"
Turning Activity into Content →: Generate blog posts, changelogs, and journal sites from your project activity.
Permission Hygiene: recommended defaults and maintenance workflow
CLI Reference: ctx permission: full command documentation
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/publishing/","level":1,"title":"Turning Activity into Content","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-problem","level":2,"title":"The Problem","text":"
Your .context/ directory is full of decisions, learnings, and session history.
Your git log tells the story of a project evolving.
But none of this is visible to anyone outside your terminal.
You want to turn this raw activity into:
a browsable journal site,
blog posts,
changelog posts.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tldr","level":2,"title":"TL;DR","text":"
ctx journal import --all # 1. import sessions to markdown\n\n/ctx-journal-enrich-all # 2. add metadata and tags\n\nctx journal site --serve # 3. build and serve the journal\n\n/ctx-blog about the caching layer # 4. draft a blog post\n/ctx-blog-changelog v0.1.0 \"v0.2\" # 5. write a changelog post\n
Read on for details on each stage.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal import Command Import session JSONL to editable markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) ctx site feed Command Generate Atom feed from finalized blog posts make journal Makefile Shortcut for import + site rebuild /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich (recommended) /ctx-journal-enrich Skill Add metadata, summaries, and tags to one entry /ctx-blog Skill Draft a blog post from recent project activity /ctx-blog-changelog Skill Write a themed post from a commit range","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-1-import-sessions-to-markdown","level":3,"title":"Step 1: Import Sessions to Markdown","text":"
Raw session data lives as JSONL files in Claude Code's internal storage. The first step is converting these into readable, editable markdown.
# Import all sessions from the current project\nctx journal import --all\n\n# Import from all projects (if you work across multiple repos)\nctx journal import --all --all-projects\n\n# Import a single session by ID or slug\nctx journal import abc123\nctx journal import gleaming-wobbling-sutherland\n
Imported files land in .context/journal/ as individual Markdown files with session metadata and the full conversation transcript.
--all is safe by default: Only new sessions are imported. Existing files are skipped. Use --regenerate to re-import existing files (YAML frontmatter is preserved). Use --regenerate --keep-frontmatter=false -y to regenerate everything including frontmatter.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-2-enrich-entries-with-metadata","level":3,"title":"Step 2: Enrich Entries with Metadata","text":"
Raw entries have timestamps and conversations but lack the structured metadata that makes a journal searchable. Use /ctx-journal-enrich-all to process your entire backlog at once:
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
For large backlogs (20+ entries), it can spawn subagents to process entries in parallel.
This metadata powers better navigation in the journal site:
titles replace slugs,
summaries appear in the index,
and search covers topics and technologies.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-3-generate-the-journal-site","level":3,"title":"Step 3: Generate the Journal Site","text":"
With entries exported and enriched, generate the static site:
# Generate site files\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally (opens at http://localhost:8000)\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default. It uses zensical for static site generation (pipx install zensical).
Or use the Makefile shortcut that combines export and rebuild:
make journal\n
This runs ctx journal import --all followed by ctx journal site --build, then reminds you to enrich before rebuilding. To serve the built site, use make journal-serve or ctx serve (serve-only, no regeneration).
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#alternative-export-to-obsidian-vault","level":3,"title":"Alternative: Export to Obsidian Vault","text":"
If you use Obsidian for knowledge management, generate a vault instead of (or alongside) the static site by running ctx journal obsidian.
This produces an Obsidian-ready directory with wikilinks, MOC (Map of Content) pages for topics/files/types, and a \"Related Sessions\" footer on each entry for graph connectivity. Open the output directory in Obsidian as a vault.
The vault uses the same enriched source entries as the static site. Both outputs can coexist: The static site goes to .context/journal-site/, the vault to .context/journal-obsidian/.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-4-draft-blog-posts-from-activity","level":3,"title":"Step 4: Draft Blog Posts from Activity","text":"
When your project reaches a milestone worth sharing, use /ctx-blog to draft a post from recent activity. The skill gathers context from multiple sources: git log, DECISIONS.md, LEARNINGS.md, completed tasks, and journal entries.
/ctx-blog about the caching layer we just built\n/ctx-blog last week's refactoring work\n/ctx-blog lessons learned from the migration\n
The skill gathers recent commits, decisions, and learnings; identifies a narrative arc; drafts an outline for approval; writes the full post; and saves it to docs/blog/YYYY-MM-DD-slug.md.
Posts are written in first person with code snippets, commit references, and an honest discussion of what went wrong.
The Output is zensical-Flavored Markdown
The blog skills produce Markdown tuned for a zensical site: topics: frontmatter (zensical's tag field), a docs/blog/ output path, and a banner image reference.
The content is still standard Markdown and can be adapted to other static site generators, but the defaults assume a zensical project structure.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-5-write-changelog-posts-from-commit-ranges","level":3,"title":"Step 5: Write Changelog Posts from Commit Ranges","text":"
For release notes or \"what changed\" posts, /ctx-blog-changelog takes a starting commit and a theme, then analyzes everything that changed:
/ctx-blog-changelog 040ce99 \"building the journal system\"\n/ctx-blog-changelog HEAD~30 \"what's new in v0.2.0\"\n/ctx-blog-changelog v0.1.0 \"the road to v0.2.0\"\n
The skill diffs the commit range, identifies the most-changed files, and constructs a narrative organized by theme rather than chronology, including a key commits table and before/after comparisons.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-6-generate-the-blog-feed","level":3,"title":"Step 6: Generate the Blog Feed","text":"
After publishing blog posts, generate the Atom feed so readers and automation can discover new content:
ctx site feed\n
This scans docs/blog/ for finalized posts (reviewed_and_finalized: true), extracts title, date, author, topics, and summary, and writes a valid Atom 1.0 feed to site/feed.xml. The feed is also generated automatically as part of make site.
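A post is picked up by the feed once its frontmatter carries the finalized flag. A sketch with illustrative values (field names as described above):

```yaml
---
title: "The Road to v0.2.0"
date: 2026-01-15
author: Your Name
topics:
  - release
summary: A themed walkthrough of everything that changed since v0.1.0.
reviewed_and_finalized: true
---
```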
The feed is available at ctx.ist/feed.xml.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-conversational-approach","level":2,"title":"The Conversational Approach","text":"
You can also drive your publishing anytime with natural language:
\"write about what we did this week\"\n\"turn today's session into a blog post\"\n\"make a changelog post covering everything since the last release\"\n\"enrich the last few journal entries\"\n
The agent has full visibility into your .context/ state (tasks completed, decisions recorded, learnings captured), so its suggestions are grounded in what actually happened.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
The full pipeline from raw transcripts to published content:
# 1. Import all sessions\nctx journal import --all\n\n# 2. In Claude Code: enrich all entries with metadata\n/ctx-journal-enrich-all\n\n# 3. Build and serve the journal site\nmake journal\nmake journal-serve\n\n# 3b. Or generate an Obsidian vault\nctx journal obsidian\n\n# 4. In Claude Code: draft a blog post\n/ctx-blog about the features we shipped this week\n\n# 5. In Claude Code: write a changelog post\n/ctx-blog-changelog v0.1.0 \"what's new in v0.2.0\"\n
The journal pipeline is idempotent at every stage. You can rerun ctx journal import --all without losing enrichment. You can rebuild the site as many times as you want.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tips","level":2,"title":"Tips","text":"
Import regularly. Run ctx journal import --all after each session to keep your journal current. Only new sessions are imported: Existing files are skipped by default.
Use batch enrichment. /ctx-journal-enrich-all filters noise (suggestion sessions, trivial sessions, multipart continuations) so you do not have to decide what is worth enriching.
Keep journal files in .gitignore. Session journals can contain sensitive data: file contents, commands, internal discussions, and error messages with stack traces. Add .context/journal/ and .context/journal-site/ to .gitignore.
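A minimal sketch of that .gitignore addition, using plain POSIX tools (the guard lines skip entries that are already present; adjust paths if your layout differs):

```shell
# Ignore session journals and the generated journal site.
# Each line is appended only if it is not already in .gitignore.
grep -qxF '.context/journal/' .gitignore 2>/dev/null || echo '.context/journal/' >> .gitignore
grep -qxF '.context/journal-site/' .gitignore 2>/dev/null || echo '.context/journal-site/' >> .gitignore
```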
Use /ctx-blog for narrative posts and /ctx-blog-changelog for release posts. One finds a story in recent activity, the other explains a commit range by theme.
Edit the drafts. These skills produce drafts, not final posts. Review the narrative, add your perspective, and remove anything that does not serve the reader.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#next-up","level":2,"title":"Next Up","text":"
Running an Unattended AI Agent →: Set up an AI agent that works through tasks overnight without you at the keyboard.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx serve: serve-only (no regeneration)
Browsing and Enriching Past Sessions: journal browsing workflow
The Complete Session: capturing context during a session
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/scratchpad-sync/","level":1,"title":"Syncing Scratchpad Notes Across Machines","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-problem","level":2,"title":"The Problem","text":"
You work from multiple machines: a desktop and a laptop, or a local machine and a remote dev server.
The scratchpad entries are encrypted. The ciphertext (.context/scratchpad.enc) travels with git, but the encryption key lives outside the project at ~/.ctx/.ctx.key and is never committed. Without the key on each machine, you cannot read or write entries.
How do you distribute the key and keep the scratchpad in sync?
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. generates key\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key # 2. copy key\nchmod 600 ~/.ctx/.ctx.key # 3. secure it\n# Normal git push/pull syncs the encrypted scratchpad.enc\n# On conflict: ctx pad resolve → rebuild → git add + commit\n
Finding Your Key File
The key is always at ~/.ctx/.ctx.key: one fixed location, the same on every machine.
Treat the Key Like a Password
The scratchpad key is the only thing protecting your encrypted entries.
Store a backup in a secure enclave such as a password manager, and treat it with the same care you would give passwords, certificates, or API tokens.
Anyone with the key can decrypt every scratchpad entry.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context (generates the key automatically) ctx pad add CLI command Add a scratchpad entry ctx pad rm CLI command Remove a scratchpad entry ctx pad edit CLI command Edit a scratchpad entry ctx pad resolve CLI command Show both sides of a merge conflict ctx pad merge CLI command Merge entries from other scratchpad files ctx pad import CLI command Bulk-import lines from a file ctx pad export CLI command Export blob entries to a directory scp Shell Copy the key file between machines git push / git pull Shell Sync the encrypted file via git/ctx-pad Skill Natural language interface to pad commands","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-1-initialize-on-machine-a","level":3,"title":"Step 1: Initialize on Machine A","text":"
Run ctx init on your first machine. The key is created automatically at ~/.ctx/.ctx.key:
ctx init\n# ...\n# Created ~/.ctx/.ctx.key (0600)\n# Created .context/scratchpad.enc\n
The key lives outside the project directory and is never committed. The .enc file is tracked in git.
Key Folder Change (v0.7.0+)
If you built ctx from source or upgraded past v0.6.0, the key location changed to ~/.ctx/.ctx.key. Check these legacy folders and copy your key manually:
# Old locations (pick whichever exists)\nls ~/.local/ctx/keys/ # pre-v0.7.0 user-level\nls .context/.ctx.key # pre-v0.6.0 project-local\n\n# Copy to the new location\nmkdir -p ~/.ctx && chmod 700 ~/.ctx\ncp <old-key-path> ~/.ctx/.ctx.key\nchmod 600 ~/.ctx/.ctx.key\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-2-copy-the-key-to-machine-b","level":3,"title":"Step 2: Copy the Key to Machine B","text":"
Use any secure transfer method. The key is always at ~/.ctx/.ctx.key:
# scp - create the target directory first\nssh user@machine-b \"mkdir -p ~/.ctx && chmod 700 ~/.ctx\"\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key\n\n# Or use a password manager, USB drive, etc.\n
Set permissions on Machine B:
chmod 600 ~/.ctx/.ctx.key\n
Secure the Transfer
The key is a raw 256-bit AES key. Anyone with the key can decrypt the scratchpad. Use an encrypted channel (SSH, password manager, vault).
Never paste it in plaintext over email or chat.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-3-normal-pushpull-workflow","level":3,"title":"Step 3: Normal Push/Pull Workflow","text":"
The encrypted file is committed, so standard git sync works:
# Machine A: add entries and push\nctx pad add \"staging API key: sk-test-abc123\"\ngit add .context/scratchpad.enc\ngit commit -m \"Update scratchpad\"\ngit push\n\n# Machine B: pull and read\ngit pull\nctx pad\n# 1. staging API key: sk-test-abc123\n
Both machines have the same key, so both can decrypt the same .enc file.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-4-read-and-write-from-either-machine","level":3,"title":"Step 4: Read and Write from Either Machine","text":"
Once the key is distributed, all ctx pad commands work identically on both machines. Entries added on Machine A are visible on Machine B after a git pull, and vice versa.
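The reverse direction mirrors the Step 3 example (the entry text here is illustrative):

```
# Machine B: add an entry and push
ctx pad add "new endpoint: api.example.com/v2"
git add .context/scratchpad.enc
git commit -m "Update scratchpad"
git push

# Machine A: pull and read
git pull
ctx pad
```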
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-5-handle-merge-conflicts","level":3,"title":"Step 5: Handle Merge Conflicts","text":"
If both machines add entries between syncs, pulling will create a merge conflict on .context/scratchpad.enc. Git cannot merge binary (encrypted) content automatically.
The fastest approach is ctx pad merge: It reads both conflict sides, deduplicates, and writes the union:
# Extract theirs to a temp file, then merge it in\ngit show :3:.context/scratchpad.enc > /tmp/theirs.enc\ngit checkout --ours .context/scratchpad.enc\nctx pad merge /tmp/theirs.enc\n\n# Done: Commit the resolved scratchpad:\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
Alternatively, use ctx pad resolve to inspect both sides manually:
ctx pad resolve\n# === Ours (this machine) ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n#\n# === Theirs (incoming) ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n
Then reconstruct the merged scratchpad:
# Start fresh with all entries from both sides\nctx pad add \"staging API key: sk-test-abc123\"\nctx pad add \"check DNS after deploy\"\nctx pad add \"new endpoint: api.example.com/v2\"\n\n# Mark the conflict resolved\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#merge-conflict-walkthrough","level":2,"title":"Merge Conflict Walkthrough","text":"
Here's a full scenario showing how conflicts arise and how to resolve them:
1. Both machines start in sync (1 entry):
Machine A: 1. staging API key: sk-test-abc123\nMachine B: 1. staging API key: sk-test-abc123\n
2. Both add entries independently:
Machine A adds: \"check DNS after deploy\"\nMachine B adds: \"new endpoint: api.example.com/v2\"\n
3. Machine A pushes first. Machine B pulls and gets a conflict:
git pull\n# CONFLICT (content): Merge conflict in .context/scratchpad.enc\n
4. Machine B runs ctx pad resolve:
ctx pad resolve\n# === Ours ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n#\n# === Theirs ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n
5. Rebuild with entries from both sides and commit:
# Clear and rebuild (or use the skill to guide you)\nctx pad add \"staging API key: sk-test-abc123\"\nctx pad add \"check DNS after deploy\"\nctx pad add \"new endpoint: api.example.com/v2\"\n\ngit add .context/scratchpad.enc\ngit commit -m \"Merge scratchpad: keep entries from both machines\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#conversational-approach","level":3,"title":"Conversational Approach","text":"
When working with an AI assistant, you can resolve conflicts naturally:
You: \"I have a scratchpad merge conflict. Can you resolve it?\"\n\nAgent: \"Let me extract theirs and merge it in.\"\n [runs git show :3:.context/scratchpad.enc > /tmp/theirs.enc]\n [runs git checkout --ours .context/scratchpad.enc]\n [runs ctx pad merge /tmp/theirs.enc]\n \"Merged 2 new entries (1 duplicate skipped). Want me to\n commit the resolution?\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tips","level":2,"title":"Tips","text":"
Back up the key: If you lose it, you lose access to all encrypted entries. Store a copy in your password manager.
One key per project: Each ctx init generates a unique key. Don't reuse keys across projects.
Keys work in worktrees: Because the key lives at ~/.ctx/.ctx.key (outside the project), git worktrees on the same machine share the key automatically. No special setup needed.
Plaintext fallback for non-sensitive projects: If encryption adds friction and you have nothing sensitive, set scratchpad_encrypt: false in .ctxrc. Merge conflicts become trivial text merges.
Never commit the key: The key is stored outside the project at ~/.ctx/.ctx.key and should never be copied into the repository.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#next-up","level":2,"title":"Next Up","text":"
Hook Output Patterns →: Choose the right output pattern for your Claude Code hooks.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, when to use scratchpad vs context files
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-with-claude/","level":1,"title":"Using the Scratchpad","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-problem","level":2,"title":"The Problem","text":"
During a session you accumulate quick notes, reminders, intermediate values, and sometimes sensitive tokens. They don't fit TASKS.md (not work items) or DECISIONS.md (not decisions). They don't have the structured fields that LEARNINGS.md requires.
Without somewhere to put them, they get lost between sessions.
How do you capture working memory that persists across sessions without polluting your structured context files?
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tldr","level":2,"title":"TL;DR","text":"
ctx pad add \"check DNS propagation after deploy\"\nctx pad # list entries\nctx pad show 1 # print entry (pipe-friendly)\n
Entries are encrypted at rest and travel with git.
Use the /ctx-pad skill to manage entries from inside your AI session.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx pad CLI command List all scratchpad entries ctx pad show N CLI command Output raw text of entry N (pipe-friendly) ctx pad add CLI command Add a new entry ctx pad edit CLI command Replace, append to, or prepend to an entry ctx pad add --file CLI command Ingest a file as a blob entry ctx pad show N --out CLI command Extract a blob entry to a file ctx pad rm CLI command Remove an entry ctx pad mv CLI command Reorder entries ctx pad import CLI command Bulk-import lines from a file (or stdin) ctx pad export CLI command Export all blob entries to a directory /ctx-pad Skill Natural language interface to all pad commands","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-1-add-a-note-naturally","level":3,"title":"Step 1: Add a Note Naturally","text":"
You don't need to remember any syntax. Just tell your assistant what to jot down:
You: \"jot down: check DNS propagation after deploy\"\n\nAgent: \"Added to scratchpad:\n 1. check DNS propagation after deploy\"\n
Behind the scenes, the agent runs ctx pad add \"check DNS propagation after deploy\".
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-2-view-your-scratchpad","level":3,"title":"Step 2: View Your Scratchpad","text":"
You: \"show my scratchpad\"\n\nAgent: \"Your scratchpad has 3 entries:\n 1. check DNS propagation after deploy\n 2. staging API endpoint: api.staging.example.com/v2\n 3. retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-3-edit-an-entry","level":3,"title":"Step 3: Edit an Entry","text":"
You: \"update entry 2 to include the new endpoint\"\n\nAgent: \"Updated entry 2:\n was: staging API endpoint: api.staging.example.com/v2\n now: staging API endpoint: api.prod.example.com/v2\"\n
For partial edits, the agent can use --append or --prepend:
You: \"append the port number to entry 2\"\n\nAgent: \"Updated entry 2:\n staging API endpoint: api.prod.example.com/v2 :8443\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-4-compose-entries","level":3,"title":"Step 4: Compose Entries","text":"
Use ctx pad show N to pipe entry content into other operations:
You: \"append the content of entry 3 to entry 1\"\n\nAgent: [runs ctx pad edit 1 --append \"$(ctx pad show 3)\"]\n \"Done. Entry 1 is now:\n check DNS propagation after deploy retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-5-reorder-by-priority","level":3,"title":"Step 5: Reorder by Priority","text":"
You: \"move entry 4 to the top\"\n\nAgent: [runs ctx pad mv 4 1]\n \"Moved entry 4 to position 1. Scratchpad reordered.\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-6-clean-up","level":3,"title":"Step 6: Clean Up","text":"
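Delete entries you no longer need by number. A sketch using ctx pad rm from the commands table above (the agent's confirmation wording is illustrative):

```
You: "delete entry 3, the retry limit is fixed now"

Agent: [runs ctx pad rm 3]
       "Removed entry 3."
```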
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-7-store-a-file-as-a-blob","level":3,"title":"Step 7: Store a File as a Blob","text":"
The scratchpad can hold small files (up to 64 KB) as encrypted blob entries. The file is base64-encoded and stored alongside a label you provide:
# Ingest a file: the first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# List shows the label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-8-extract-a-blob","level":3,"title":"Step 8: Extract a Blob","text":"
Use show --out to write the decoded file back to disk:
# Write blob entry to a file\nctx pad show 2 --out ./recovered-deploy.yaml\n\n# Or print to stdout (for piping)\nctx pad show 2 | head -5\n
Blob entries are encrypted identically to text entries: They're just base64-encoded before encryption. The --out flag decodes and writes the raw bytes.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-9-bulk-import-notes","level":3,"title":"Step 9: Bulk Import Notes","text":"
When you have a file with many notes (one per line), import them in bulk instead of adding one at a time:
# Import from a file: Each non-empty line becomes an entry\nctx pad import notes.txt\n\n# Or pipe from stdin\ngrep TODO *.go | ctx pad import -\n
All entries are written in a single encrypt/write cycle, regardless of how many lines the file contains.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-10-export-blobs-to-disk","level":3,"title":"Step 10: Export Blobs to Disk","text":"
Export all blob entries to a directory as individual files. Each blob's label becomes the filename:
# Export to a directory (created if needed)\nctx pad export ./ideas\n\n# Preview what would be exported\nctx pad export --dry-run ./ideas\n\n# Force overwrite existing files\nctx pad export --force ./backup\n
When a file already exists, a Unix timestamp is prepended to the filename to avoid collisions. Use --force to overwrite instead.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#using-ctx-pad-in-a-session","level":2,"title":"Using /ctx-pad in a Session","text":"
Invoke the /ctx-pad skill first, then describe what you want in natural language. Without the skill prefix, the agent may route your request to TASKS.md or another context file instead of the scratchpad.
You: /ctx-pad jot down: check DNS after deploy\nYou: /ctx-pad show my scratchpad\nYou: /ctx-pad delete entry 3\n
Once the skill is active, it translates intent into commands:
You say (after /ctx-pad) What the agent does \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"remember this: retry limit is 5\" ctx pad add \"retry limit is 5\" \"show my scratchpad\" / \"what's on my pad\" ctx pad \"show me entry 3\" ctx pad show 3 \"delete the third one\" / \"remove entry 3\" ctx pad rm 3 \"change entry 2 to ...\" ctx pad edit 2 \"new text\" \"append ' +important' to entry 3\" ctx pad edit 3 --append \" +important\" \"prepend 'URGENT:' to entry 1\" ctx pad edit 1 --prepend \"URGENT: \" \"prioritize entry 4\" / \"move to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./ideas\" ctx pad export ./ideas
When in Doubt, Use the CLI Directly
The ctx pad commands work the same whether you run them yourself or let the skill invoke them.
If the agent misroutes a request, fall back to ctx pad add \"...\" in your terminal.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#when-to-use-scratchpad-vs-context-files","level":2,"title":"When to Use Scratchpad vs Context Files","text":"Situation Use Temporary reminders (\"check X after deploy\") Scratchpad Session-start reminders (\"remind me next session\") ctx remind Working values during debugging (ports, endpoints, counts) Scratchpad Sensitive tokens or API keys (short-term storage) Scratchpad Quick notes that don't fit anywhere else Scratchpad Work items with completion tracking TASKS.md Trade-offs between alternatives with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Decision Guide
If it has structured fields (context, rationale, lesson, application), it belongs in a context file like DECISIONS.md or LEARNINGS.md.
If it's a work item you'll mark done, it belongs in TASKS.md.
If you want a message relayed VERBATIM at the next session start, it belongs in ctx remind.
If it's a quick note, reminder, or working value (especially if it's sensitive or ephemeral) it belongs on the scratchpad.
Scratchpad Is Not a Junk Drawer
The scratchpad is for working memory, not long-term storage.
If a note is still relevant after several sessions, promote it:
A persistent reminder becomes a task, a recurring value becomes a convention, a hard-won insight becomes a learning.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tips","level":2,"title":"Tips","text":"
Entries persist across sessions: The scratchpad is committed (encrypted) to git, so entries survive session boundaries. Pick up where you left off.
Entries are numbered and reorderable: Use ctx pad mv to put high-priority items at the top.
ctx pad show N enables unix piping: Output raw entry text with no numbering prefix. Compose with --append, --prepend, or other shell tools.
Never mention the key file contents to the AI: The agent knows how to use ctx pad commands but should never read or print the encryption key (~/.ctx/.ctx.key) directly.
Encryption is transparent: You interact with plaintext; the encryption/decryption happens automatically on every read/write.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#next-up","level":2,"title":"Next Up","text":"
Syncing Scratchpad Notes Across Machines →: Distribute encryption keys and scratchpad data across environments.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, encryption details, plaintext override
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
The Complete Session: full session lifecycle showing how the scratchpad fits into the broader workflow
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/session-archaeology/","level":1,"title":"Browsing and Enriching Past Sessions","text":"","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-problem","level":2,"title":"The Problem","text":"
After weeks of AI-assisted development you have dozens of sessions scattered across JSONL files in ~/.claude/projects/. Finding the session where you debugged the Redis connection pool, or remembering what you decided about the caching strategy three Tuesdays ago, often means grepping raw JSON.
There is no table of contents, no search, and no summaries.
This recipe shows how to turn that raw session history into a browsable, searchable, and enriched journal site you can navigate in your browser.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tldr","level":2,"title":"TL;DR","text":"
Export and Generate
ctx journal import --all\nctx journal site --serve\n
Enrich
/ctx-journal-enrich-all\n
Rebuild
ctx journal site --serve\n
Read on for what each stage does and why.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal source Command List parsed sessions with metadata ctx journal source --show Command Inspect a specific session in detail ctx journal import Command Import sessions to editable journal Markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) /ctx-history Skill Browse sessions inside your AI assistant /ctx-journal-enrich Skill Add frontmatter metadata to a single entry /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-workflow","level":2,"title":"The Workflow","text":"
The session journal follows a four-stage pipeline.
Each stage is idempotent and safe to re-run:
By default, each stage skips entries that have already been processed.
import -> enrich -> rebuild\n
Stage Tool What it does Skips if Where Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) CLI or agent Enrich /ctx-journal-enrich-all Adds frontmatter, summaries, topic tags Frontmatter already present Agent only Rebuild ctx journal site --build Generates browsable static HTML N/A CLI only Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks N/A CLI only
Where Do You Run Each Stage?
Import (Steps 1 to 3) works equally well from the terminal or inside your AI assistant via /ctx-history. The CLI is fine here: the agent adds no special intelligence, it just runs the same command.
Enrich (Step 4) requires the agent: it reads conversation content and produces structured metadata.
Rebuild and serve (Step 5) is a terminal operation that starts a long-running server.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-1-list-your-sessions","level":3,"title":"Step 1: List Your Sessions","text":"
Start by seeing what sessions exist for the current project:
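This is the ctx journal source command from the table above:

```
# List parsed sessions for the current project
ctx journal source
```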
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-2-inspect-a-specific-session","level":3,"title":"Step 2: Inspect a Specific Session","text":"
Before exporting everything, inspect a single session to see its metadata and conversation summary:
ctx journal source --show --latest\n
Or look up a specific session by its slug, partial ID, or UUID:
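For example (the slug matches the one used in Step 3; the partial ID prefix is hypothetical):

```
# By slug
ctx journal source --show gleaming-wobbling-sutherland

# By partial session ID (hypothetical prefix)
ctx journal source --show 7f3a
```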
Add --full to see the complete message content instead of the summary view:
ctx journal source --show --latest --full\n
This is useful for checking what happened before deciding whether to export and enrich it.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-3-import-sessions-to-the-journal","level":3,"title":"Step 3: Import Sessions to the Journal","text":"
Import converts raw session data into editable Markdown files in .context/journal/:
# Import all sessions from the current project\nctx journal import --all\n\n# Import a single session\nctx journal import gleaming-wobbling-sutherland\n\n# Include sessions from all projects\nctx journal import --all --all-projects\n
--keep-frontmatter=false Discards Enrichments
--keep-frontmatter=false discards enriched YAML frontmatter during regeneration.
Back up your journal before using this flag.
Each imported file contains session metadata (date, time, duration, model, project, git branch), a tool usage summary, and the full conversation transcript.
Re-importing is safe. Running ctx journal import --all only imports new sessions: Existing files are never touched. Use --dry-run to preview what would be imported without writing anything.
To re-import existing files (e.g., after a format improvement), use --regenerate: Conversation content is regenerated while preserving any YAML frontmatter you or the enrichment skill has added. You'll be prompted before any files are overwritten.
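Both flags combine with the same import command shown above:

```
# Preview what would be imported without writing anything
ctx journal import --all --dry-run

# Re-import existing files, preserving YAML frontmatter
ctx journal import --all --regenerate
```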
--regenerate Replaces the Markdown Body
--regenerate preserves YAML frontmatter but replaces the entire Markdown body with freshly generated content from the source JSONL.
If you manually edited the conversation transcript (added notes, redacted sensitive content, restructured sections), those edits will be lost.
BACK UP YOUR JOURNAL FIRST.
To protect entries you've hand-edited, you can explicitly lock them:
ctx journal lock <pattern>\n
Locked entries are always skipped, regardless of flags.
If you prefer to add locked: true directly in frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json:
ctx journal sync\n
See ctx journal lock --help and ctx journal sync --help for details.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-4-enrich-with-metadata","level":3,"title":"Step 4: Enrich with Metadata","text":"
Raw imports have timestamps and transcripts but lack the semantic metadata that makes sessions searchable: topics, technology tags, outcome status, and summaries. The /ctx-journal-enrich* skills add this structured frontmatter.
Locked entries are skipped by enrichment skills, just as they are by import. Lock entries you want to protect before running batch enrichment.
Batch enrichment (recommended):
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
It shows you a grouped summary before applying changes so you can scan quickly rather than reviewing one by one.
For large backlogs (20+ entries), the skill can spawn subagents to process entries in parallel.
The skill also generates a summary and can extract decisions, learnings, and tasks mentioned during the session.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-5-generate-and-serve-the-site","level":3,"title":"Step 5: Generate and Serve the Site","text":"
With imported and enriched journal files, generate the static site:
# Generate site structure only\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate, build, and serve locally\nctx journal site --serve\n
Then open http://localhost:8000 to browse.
The site includes a date-sorted index, individual session pages with full conversations, search (press /), dark mode, and enriched titles in the navigation when frontmatter exists.
You can also serve an already-generated site without regenerating using ctx serve (serve-only, no regeneration).
The site generator requires zensical (pipx install zensical).
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#where-does-agent-add-value","level":2,"title":"Where Does Agent Add Value?","text":"
Export, list, and show are mechanical: The agent runs the same CLI commands you would, so you can stay in your terminal for those.
The agent earns its keep in enrich. It reads conversation content, understands what happened, and produces structured metadata. That is agent work, not CLI work.
You can also ask your agent to browse sessions conversationally instead of remembering flags:
What did we work on last week?\nShow me the session about Redis.\nImport everything to the journal.\n
This is convenient but not required: ctx journal source gives you the same inventory.
Where the agent genuinely helps is chaining the pipeline:
You: What happened last Tuesday?\nAgent: Last Tuesday you worked on two sessions:\n - bright-dancing-hopper (2h 10m): refactored the middleware\n pipeline and added Redis caching\n - quiet-flowing-dijkstra (18m): quick fix for a nil pointer\n in the config loader\n Want me to export and enrich them?\nYou: Yes, do it.\nAgent: Exports both, enriches, then proposes frontmatter.\n
The value is staying in one context while the agent runs import → enrich without you manually switching tools.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical pipeline from raw sessions to a browsable site:
# Terminal: import and generate\nctx journal import --all\nctx journal site --serve\n
# AI assistant: enrich\n/ctx-journal-enrich-all\n
# Terminal: rebuild with enrichments\nctx journal site --serve\n
If your project includes Makefile.ctx (deployed by ctx init), use make journal to combine import and rebuild stages. Then enrich inside Claude Code, then make journal again to pick up enrichments.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#session-retention-and-cleanup","level":2,"title":"Session Retention and Cleanup","text":"
Claude Code does not keep JSONL transcripts forever. Understanding its cleanup behavior helps you avoid losing session history.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#default-behavior","level":3,"title":"Default Behavior","text":"
Claude Code retains session transcripts for approximately 30 days. After that, JSONL files are automatically deleted during cleanup. Once deleted, ctx journal can no longer see those sessions - the data is gone.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-cleanupperioddays-setting","level":3,"title":"The cleanupPeriodDays Setting","text":"
Claude Code exposes a cleanupPeriodDays setting in its configuration (~/.claude/settings.json) that controls retention:
| Value | Behavior |
|---|---|
| `30` (default) | Transcripts older than 30 days are deleted |
| `60`, `90`, etc. | Extends the retention window |
| `0` | Disables writing new transcripts entirely - not "keep forever" |
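For example, extending retention to 90 days in ~/.claude/settings.json (sketch; any other keys you already have in the file stay untouched):

```json
{
  "cleanupPeriodDays": 90
}
```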
Setting cleanupPeriodDays to 0
Setting this to 0 does not mean \"never delete.\" It disables transcript creation altogether. No new JSONL files are written, which means ctx journal sees nothing new. This is rarely what you want.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#why-journal-import-matters","level":3,"title":"Why Journal Import Matters","text":"
The journal import pipeline (Steps 1-4 above) is your archival mechanism. Imported Markdown files in .context/journal/ persist independently of Claude Code's cleanup cycle. Even after the source JSONL files are deleted, your journal entries remain.
Recommendation: import regularly - weekly, or after any session worth revisiting. A quick ctx journal import --all takes seconds and ensures nothing falls through the 30-day window.
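One way to make the weekly import automatic is a cron entry (the schedule and project path are illustrative):

```
# Every Monday at 09:00: archive last week's sessions
0 9 * * 1  cd "$HOME/projects/myapp" && ctx journal import --all
```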
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#quick-archival-checklist","level":3,"title":"Quick Archival Checklist","text":"
Run ctx journal import --all at least weekly
Enrich high-value sessions with /ctx-journal-enrich before the details fade from your own memory
Lock enriched entries (ctx journal lock <pattern>) to protect them from accidental regeneration
Rebuild the journal site periodically to keep it current
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tips","level":2,"title":"Tips","text":"
Start with /ctx-history inside your AI assistant. If you want to quickly check what happened in a recent session without leaving your editor, /ctx-history lets you browse interactively without importing.
Large sessions may be split automatically. Sessions with 200+ messages can be split into multiple parts (session-abc123.md, session-abc123-p2.md, session-abc123-p3.md) with navigation links between them. The site generator can handle this.
Suggestion sessions can be separated. Claude Code can generate short suggestion sessions for autocomplete. These may appear under a separate section in the site index, so they do not clutter your main session list.
Your agent is a good session browser. You do not need to remember slugs, dates, or flags. Ask \"what did we do yesterday?\" or \"find the session about Redis\" and it can map the question to recall commands.
Journal Files Are Sensitive
Journal files MUST be .gitignored.
Session transcripts can contain sensitive data such as file contents, commands, error messages with stack traces, and potentially API keys.
Add .context/journal/, .context/journal-site/, and .context/journal-obsidian/ to your .gitignore.
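The corresponding .gitignore entries:

```
# Session journals can contain file contents, stack traces, and keys
.context/journal/
.context/journal-site/
.context/journal-obsidian/
```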
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#next-up","level":2,"title":"Next Up","text":"
Persisting Decisions, Learnings, and Conventions →: Record decisions, learnings, and conventions so they survive across sessions.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#see-also","level":2,"title":"See Also","text":"
The Complete Session: where session saving fits in the daily workflow
Turning Activity into Content: generating blog posts from session history
Session Journal: full documentation of the journal system
CLI Reference: ctx journal: all journal subcommands and flags
CLI Reference: ctx serve: serve-only (no regeneration)
Context Files: the .context/ directory structure
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-ceremonies/","level":1,"title":"Session Ceremonies","text":"","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#the-problem","level":2,"title":"The Problem","text":"
Sessions have two critical moments: the start and the end.
At the start, you need the agent to load context and confirm it knows what is going on.
At the end, you need to capture whatever the session produced before the conversation disappears.
Most ctx skills work conversationally: \"jot down: check DNS after deploy\" is as good as /ctx-pad add \"check DNS after deploy\". But session boundaries are different. They are well-defined moments with specific requirements, and partial execution is costly.
If the agent only half-loads context at the start, it works from stale assumptions. If it only half-persists at the end, learnings and decisions are lost.
This Is One of the Few Times Being Explicit Matters
Session ceremonies are the two bookend skills that mark these boundaries.
They are the exception to the conversational rule:
Invoke /ctx-remember and /ctx-wrap-up explicitly as slash commands.
Most ctx skills encourage natural language. These two are different:
Well-defined moments: Sessions have clear boundaries. A slash command marks the boundary unambiguously.
Ambiguity risk: \"Do you remember?\" could mean many things. /ctx-remember means exactly one thing: load context and present a structured readback.
Completeness: Conversational triggers risk partial execution. The agent might load some files but skip the session history, or persist one learning but forget to check for uncommitted changes. The slash command runs the full ceremony.
Muscle memory: Typing /ctx-remember at session start and /ctx-wrap-up at session end becomes a habit, like opening and closing braces.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-remember Skill Load context and present structured readback /ctx-wrap-up Skill Gather session signal, propose and persist context /ctx-commit Skill Commit with context capture (offered by wrap-up) ctx agent CLI Load token-budgeted context packet ctx journal source CLI List recent sessions ctx add CLI Persist learnings, decisions, conventions, tasks","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#session-start-ctx-remember","level":2,"title":"Session Start: /ctx-remember","text":"
Invoke at the beginning of every session:
/ctx-remember\n
The skill silently:
Loads the context packet via ctx agent --budget 4000
Reads TASKS.md, DECISIONS.md, LEARNINGS.md
Checks recent sessions via ctx journal source --limit 3
Then presents a structured readback with four sections:
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 relevant decisions or learnings
Next step: suggestion or question about what to focus on
The readback should feel like recall, not a file system tour. If the agent says \"Let me check if there are files...\" instead of a confident summary, the skill is not working correctly.
What About 'do you remember?'
The conversational trigger still works, but /ctx-remember guarantees the full ceremony runs.
At session end, /ctx-wrap-up closes the loop. After persisting, the skill marks the session as wrapped up via ctx system mark-wrapped-up. This suppresses context checkpoint nudges for 2 hours so the wrap-up ceremony itself does not trigger noisy reminders.
If there are uncommitted changes, it offers to run /ctx-commit. It does not auto-commit.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#when-to-skip","level":2,"title":"When to Skip","text":"
Not every session needs ceremonies.
Skip /ctx-remember when:
You are doing a quick one-off lookup (reading a file, checking a value)
Context was already loaded this session via /ctx-agent
You are continuing immediately after a previous session and context is still fresh
Skip /ctx-wrap-up when:
Nothing meaningful happened (only read files, answered a question)
You already persisted everything manually during the session
The session was trivial (typo fix, quick config change)
A good heuristic: if the session produced something a future session should know about, run /ctx-wrap-up. If not, just close.
# Session start\n/ctx-remember\n\n# ... do work ...\n\n# Session end\n/ctx-wrap-up\n
That is the complete ceremony. Two commands, bookending your session.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#relationship-to-other-skills","level":2,"title":"Relationship to Other Skills","text":"Skill When Purpose /ctx-remember Session start Load and confirm context /ctx-reflect Mid-session breakpoints Checkpoint at milestones /ctx-wrap-up Session end Full session review and persist /ctx-commit After completing work Commit with context capture
/ctx-reflect is for mid-session checkpoints. /ctx-wrap-up is for end-of-session: it is more thorough, covers the full session arc, and includes the commit offer. If you already ran /ctx-reflect recently, /ctx-wrap-up avoids proposing the same candidates again.
Make it a habit: The value of ceremonies compounds over sessions. Each /ctx-wrap-up makes the next /ctx-remember richer.
Trust the candidates: The agent scans the full conversation. It often catches learnings you forgot about.
Edit before approving: If a proposed candidate is close but not quite right, tell the agent what to change. Do not settle for a vague learning when a precise one is possible.
Do not force empty ceremonies: If /ctx-wrap-up finds nothing worth persisting, that is fine. A session that only read files and answered questions does not need artificial learnings.
The Complete Session: the full session workflow that ceremonies bookend
Persisting Decisions, Learnings, and Conventions: deep dive on what gets persisted during wrap-up
Detecting and Fixing Drift: keeping context files accurate between ceremonies
Pausing Context Hooks: skip ceremonies entirely for quick tasks that don't need them
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-changes/","level":1,"title":"Reviewing Session Changes","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-changed-while-you-were-away","level":2,"title":"What Changed While You Were Away?","text":"
Between sessions, teammates commit code, context files get updated, and decisions pile up. ctx change gives you a single-command summary of everything that moved since your last session.
# Auto-detects your last session and shows what changed\nctx change\n\n# Check what changed in the last 48 hours\nctx change --since 48h\n\n# Check since a specific date\nctx change --since 2026-03-10\n
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#how-reference-time-works","level":2,"title":"How Reference Time Works","text":"
ctx change needs a reference point to compare against. It tries these sources in order:
--since flag: explicit duration (24h, 72h) or date (2026-03-10, RFC3339 timestamp)
Session markers: ctx-loaded-* files in .context/state/; picks the second-most-recent (your previous session start)
Event log: last context-load-gate event from .context/state/events.jsonl
Fallback: 24 hours ago
The marker-based detection means ctx change usually just works without any flags: it knows when you last loaded context and shows everything after that.
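The fallback chain can be sketched in shell. This is an illustrative sketch under the assumptions stated above (marker filenames, GNU date), not ctx's actual implementation, and it omits the event-log step:

```shell
# Illustrative sketch of how ctx change's reference time could be resolved.
pick_reference_time() {
  since="$1"
  # 1. An explicit --since value wins.
  if [ -n "$since" ]; then
    echo "$since"
    return
  fi
  # 2. Second-most-recent session marker = your previous session start.
  marker=$(ls -t .context/state/ctx-loaded-* 2>/dev/null | sed -n '2p')
  if [ -n "$marker" ]; then
    date -u -r "$marker" +%Y-%m-%dT%H:%M:%SZ   # GNU date: file mtime
    return
  fi
  # 3. (Event-log lookup from events.jsonl would go here.)
  # 4. Fallback: 24 hours ago.
  date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ
}

pick_reference_time "2026-03-10"   # prints 2026-03-10
```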
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-it-reports","level":2,"title":"What It Reports","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#context-file-changes","level":3,"title":"Context file changes","text":"
Any .md file in .context/ modified after the reference time:
No changes? If nothing shows up, the reference time might be wrong. Use --since 48h to widen the window.
Works without git. Context file changes are detected by filesystem mtime, not git. Code changes require git.
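A rough shell equivalent of that mtime-based scan (illustrative; assumes a find with -newermt support, e.g. GNU findutils):

```shell
# List context files modified within the window; no git required.
mkdir -p .context            # demo setup: ensure the directory exists
touch .context/LEARNINGS.md  # demo setup: a freshly-modified context file
find .context -name '*.md' -newermt '48 hours ago'
```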
Hook integration. The context-load-gate hook writes the session marker that ctx change uses for auto-detection. If you're not using the ctx plugin, markers won't exist and it falls back to the event log or 24h window.
\"What does a full ctx session look like from start to finish?\"
You have ctx installed and your .context/ directory initialized, but the individual commands and skills feel disconnected.
How do they fit together into a coherent workflow?
This recipe walks through a complete session, from opening your editor to persisting context before you close it, so you can see how each piece connects.
Load: /ctx-remember: load context, get structured readback.
Orient: /ctx-status: check file health and token usage.
Pick: /ctx-next: choose what to work on.
Work: implement, test, iterate.
Commit: /ctx-commit: commit and capture decisions/learnings.
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony.
Read on for the full walkthrough with examples.
What is a Readback?
A readback is a structured summary where the agent plays back what it knows:
last session,
active tasks,
recent decisions.
This way, you can confirm it loaded the right context.
The term \"readback\" comes from aviation, where pilots repeat instructions back to air traffic control to confirm they heard correctly.
Same idea in ctx: The agent tells you what it \"thinks\" is going on, and you correct anything that's off before the work begins.
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 decisions or learnings that matter now
Next step: suggestion or question about what to focus on
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx status CLI command Quick health check on context files ctx agent CLI command Load token-budgeted context packet ctx journal source CLI command List previous sessions ctx journal source --show CLI command Inspect a specific session in detail /ctx-remember Skill Recall project context with structured readback /ctx-agent Skill Load full context packet inside the assistant /ctx-status Skill Show context summary with commentary /ctx-next Skill Suggest what to work on with rationale /ctx-commit Skill Commit code and prompt for context capture /ctx-reflect Skill Structured reflection checkpoint /ctx-history Skill Browse session history inside your AI assistant","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#the-workflow","level":2,"title":"The Workflow","text":"
The session lifecycle has seven steps. You will not always use every step (for example, a quick bugfix might skip reflection, and a research session might skip committing), but the full arc looks like this:
Load context > Orient > Pick a Task > Work > Commit > Reflect > Wrap up
Start every session by loading what you know. The fastest way is a single prompt:
Do you remember what we were working on?\n
This triggers the /ctx-remember skill. Behind the scenes, the assistant runs ctx agent --budget 4000, reads the files listed in the context packet (TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md), checks ctx journal source --limit 3 for recent sessions, and then presents a structured readback.
The readback should feel like recall, not a file system tour. If you see "Let me check if there are files..." instead of a confident summary, context was not loaded properly.
As an alternative, if you want raw data instead of a readback, run ctx status in your terminal or invoke /ctx-status for a summarized health check showing file counts, token usage, and recent activity.
After loading context, verify you understand the current state.
/ctx-status\n
The status output shows which context files are populated, how many tokens they consume, and which files were recently modified. Look for:
Empty core files: TASKS.md or CONVENTIONS.md with no content means the context is sparse
High token count (over 30k): the context is bloated and might need ctx compact
No recent activity: files may be stale and need updating
If the status looks healthy and the readback from Step 1 gave you enough context, skip ahead.
If something seems off (stale tasks, missing decisions...), spend a minute reading the relevant file before proceeding.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
With context loaded, choose a task. You can pick one yourself, or ask the assistant to recommend:
/ctx-next\n
The skill reads TASKS.md, checks recent sessions to avoid re-suggesting completed work, and presents 1-3 ranked recommendations with rationale.
It prioritizes in-progress tasks over new starts (finishing is better than starting), respects explicit priority tags, and favors momentum: continuing a thread from a recent session is cheaper than context-switching.
If you already know what you want to work on, state it directly:
Let's work on the session enrichment feature.\n
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-4-do-the-work","level":3,"title":"Step 4: Do the Work","text":"
This is the main body of the session: write code, fix bugs, refactor, research: whatever the task requires.
During this phase, a few ctx-specific patterns help:
Check decisions before choosing: when you face a design choice, check if a prior decision covers it.
Is this consistent with our decisions?\n
Constrain scope: keep the assistant focused on the task at hand.
Only change files in internal/cli/session/. Nothing else.\n
Use /ctx-implement for multistep plans: if the task has multiple steps, this skill executes them one at a time with build/test verification between each step.
Context monitoring runs automatically: the check-context-size hook monitors context capacity at adaptive intervals. Early in a session it stays silent. After 16+ prompts it starts monitoring, and past 30 prompts it checks frequently. If context capacity is running high, it will suggest saving unsaved work. No manual invocation is needed.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-5-commit-with-context","level":3,"title":"Step 5: Commit with Context","text":"
When the work is ready, use the context-aware commit instead of raw git commit:
/ctx-commit\n
The Agent May Recommend Committing
You do not always need to invoke /ctx-commit explicitly.
After a commit, the agent may proactively offer to capture context:
\"We just made a trade-off there. Want me to record it as a decision?\"
This is normal: The Agent Playbook encourages persisting at milestones, and a commit is a natural milestone.
As an alternative, you can ask the assistant \"can we commit this?\" and it will pick up the /ctx-commit skill for you.
The skill runs a pre-commit build check (for Go projects, go build), reviews the staged changes, drafts a commit message focused on \"why\" rather than \"what\", and then commits.
After the commit succeeds, it prompts you:
**Any context to capture?**\n\n- **Decision**: Did you make a design choice or trade-off?\n- **Learning**: Did you hit a gotcha or discover something?\n- **Neither**: No context to capture; we are done.\n
If you made a decision, the skill records it with ctx add decision. If you learned something, it records it with ctx add learning including context, lesson, and application fields. This is the bridge between committing code and remembering why the code looks the way it does.
If source code changed in areas that affect documentation, the skill also offers to check for doc drift.
At natural breakpoints (after finishing a feature, resolving a complex bug, or before switching tasks) pause to reflect:
/ctx-reflect\n
Agents Reflect at Milestones
Agents often reflect without explicit invocation.
After completing a significant piece of work, the agent may naturally surface items worth persisting:
\"We discovered that $PPID resolves differently inside hooks. Should I save that as a learning?\"
This is the agent following the Work-Reflect-Persist cycle from the Agent Playbook.
You do not need to say /ctx-reflect for this to happen; the agent treats milestones as reflection triggers on its own.
The skill works through a checklist: learnings discovered, decisions made, tasks completed or created, and whether there are items worth persisting. It then presents a summary with specific items to persist, each with the exact command to run:
I would suggest persisting:\n\n- **Learning**: `$PPID` in PreToolUse hooks resolves to the Claude Code PID\n `ctx add learning --context \"...\" --lesson \"...\" --application \"...\"`\n- **Task**: mark \"Add cooldown to ctx agent\" as done\n- **Decision**: tombstone-based cooldown with 10m default\n `ctx add decision \"...\"`\n\nWant me to persist any of these?\n
The skill asks before persisting anything. You choose what to keep.
Not every commit needs reflection. A typo fix does not. But when you have been debugging for an hour and finally understand the root cause, that is worth a reflection checkpoint.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-7-persist-before-ending","level":3,"title":"Step 7: Persist Before Ending","text":"
Before ending the session, run the wrap-up ceremony to capture outstanding learnings, decisions, conventions, and tasks:
/ctx-wrap-up\n
Ceremony Skills
/ctx-remember at session start and /ctx-wrap-up at session end are ceremony skills: Invoke them as explicit slash commands for completeness, not conversationally.
See Session Ceremonies for details on why these two skills are the exception to the conversational rule.
The skill gathers signal from git diff, recent commits, and the conversation, then proposes structured candidates for your approval. After you select which to keep, it persists them via ctx add and offers /ctx-commit if uncommitted changes remain.
Session transcripts are automatically captured by Claude Code and can be browsed later with ctx journal source and ctx journal source --show.
Last session (Feb 7, caching-layer): we implemented the in-memory cache with\nTTL expiration and wrote tests for the eviction policy.\n\nYou have 3 pending tasks; the biggest one is adding Redis as a backend.\n\nRecent decision:\nwe chose LRU over LFU because the access pattern is mostly recency-based.\n\nWant to continue with the Redis backend, or tackle something else?\n
You: Let's work on the Redis caching backend.
Agent (picks up the task, begins implementation):
Got it. I see the task in TASKS.md: \"Add Redis backend for cache layer.\"\n\nI will start with the connection pool setup. Only touching\n`internal/cache/redis/` for now.\n
(The agent works... writes code, runs tests, iterates.)
Agent (after implementing the connection pool and choosing a retry strategy):
The connection pool is working. I went with exponential backoff with jitter\nfor retries instead of fixed intervals; it handles thundering herd better\nunder reconnection storms. That is a design choice worth recording. \n\nWant me to save it as a decision?\n
You: Yes, go ahead.
Agent:
Done. Added to DECISIONS.md: \"Exponential backoff with jitter for Redis\nreconnection, to avoid thundering herd on pool recovery.\"\n
(More work happens... the agent implements the cache read/write paths.)
You: What else did we learn during this session?
Agent:
A few things worth capturing:\n\n* **Learning**: Redis `SET` with `NX` and `PX` flags handles atomic\n set-if-not-exists with TTL in a single call. No need for a separate\n `EXPIRE` command. We initially did it in two calls and hit a race condition.\n* **Learning**: The `go-redis` library's `PoolSize` default is 10 per CPU.\n For our workload, explicit sizing at 20 was more predictable.\n* **Task update**: The connection pool subtask is done; the read/write path\n is implemented but needs integration tests.\n\nWant me to persist these?\n
You: Yes. And let's wrap up.
Agent (reflects and persists... all in one flow):
Persisted:\n\n* 2 learnings added to LEARNINGS.md\n* Marked \"connection pool setup\" done in TASKS.md\n* Added \"write integration tests for Redis cache\" as a new task\n\nContext is up to date. You are good to close.\n
Notice What Happened
In the above workflow, the user never typed /ctx-reflect or ctx add learning.
The agent moved through Load, Orient, Pick, Work, Commit, and Reflect driven by natural conversation.
\"Let's wrap up\" was enough to trigger the full reflect-and-persist flow.
The agent surfaced persist-worthy items at milestones (after a design choice, after discovering a gotcha) without waiting to be asked.
This is the intended experience.
The commands and skills still exist for when you want precise control, but the agent is a proactive partner in the lifecycle, not a passive executor of slash commands.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Quick-reference checklist for a complete session:
Load: /ctx-remember: load context and confirm readback
Orient: /ctx-status: check file health and token usage
Pick: /ctx-next: choose what to work on
Work: implement, test, iterate (scope with \"only change X\")
Commit: /ctx-commit: commit and capture decisions/learnings
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony
Conversational equivalents: you can drive the same lifecycle with plain language:
| Step | Slash command | Natural language |
|---|---|---|
| Load | /ctx-remember | "Do you remember?" / "What were we working on?" |
| Orient | /ctx-status | "How's our context looking?" |
| Pick | /ctx-next | "What should we work on?" / "Let's do the caching task" |
| Work | -- | "Only change files in internal/cache/" |
| Commit | /ctx-commit | "Commit this" / "Ship it" |
| Reflect | /ctx-reflect | "What did we learn?" / (agent offers at milestones) |
| Wrap up | /ctx-wrap-up | (use the slash command for completeness) |
The agent understands both columns.
In practice, most sessions use a mix:
Explicit Commands when you want precision;
Natural Language when you want flow and agentic autonomy.
The agent will also initiate steps on its own (particularly \"Reflect\") when it recognizes a milestone.
Short sessions (quick bugfix) might only use: Load, Work, Commit.
Long sessions should Reflect after each major milestone and persist learnings and decisions before ending.
Persist early if context is running low. A hook monitors context capacity and notifies you when it gets high, but do not wait for the notification. If you have been working for a while and have unpersisted learnings, persist proactively.
Browse previous sessions by topic. If you need context from a prior session, ctx journal source --show auth will match by keyword. You do not need to remember the exact date or slug.
Reflection is optional but valuable. You can skip /ctx-reflect for small changes, but always persist learnings and decisions before ending a session where you did meaningful work. These are what the next session loads.
Let the hook handle context loading. The PreToolUse hook runs ctx agent automatically with a cooldown, so context loads on first tool use without you asking. The /ctx-remember prompt at session start is for your benefit (to get a readback), not because the assistant needs it.
The agent is a proactive partner, not a passive tool. A ctx-aware agent follows the Agent Playbook: it watches for milestones (completed tasks, design decisions, discovered gotchas) and offers to persist them without being asked. If you finish a tricky debugging session, it may say \"That root cause is worth saving as a learning. Want me to record it?\" before you think to ask. This is by design.
Not every session needs the full ceremony. Quick investigations, one-off questions, small fixes unrelated to active project work: These tasks don't benefit from persistence nudges, ceremony reminders, or knowledge checks. Every hook still fires, consuming tokens and attention on work that won't produce learnings or decisions worth capturing.
","path":["Recipes","Sessions","Pausing Context Hooks"],"tags":[]},{"location":"recipes/session-pause/#tldr","level":2,"title":"TL;DR","text":"Command What it does ctx pause or /ctx-pause Silence all nudge hooks for this session ctx resume or /ctx-resume Restore normal hook behavior
Pause is session-scoped: It only affects the current session. Other sessions (same project, different terminal) are unaffected.
","path":["Recipes","Sessions","Pausing Context Hooks"],"tags":[]},{"location":"recipes/session-pause/#what-still-fires","level":2,"title":"What Still Fires","text":"
Security hooks always run, even when paused:
- block-non-path-ctx: prevents ./ctx invocations
- block-dangerous-commands: blocks sudo, force push, etc.
```shell
# 1. Session starts: Context loads normally.

# 2. You realize this is a quick task
ctx pause

# 3. Work without interruption: hooks are silent

# 4. Session evolves into real work? Resume first
ctx resume

# 5. Now wrap up normally
# /ctx-wrap-up
```
Resume before wrapping up. If your quick task turns into real work, resume hooks before running /ctx-wrap-up. The wrap-up ceremony needs active hooks to capture learnings properly.
Initial context load is unaffected. The ~8k token startup injection (CLAUDE.md, playbook, constitution) happens before any command runs. Pause only affects hooks that fire during the session.
Use for quick investigations. Debugging a stack trace? Checking a git log? Answering a colleague's question? Pause, do the work, close the session. No ceremony needed.
Don't use for real work. If you're implementing features, fixing bugs, or making decisions: keep hooks active. The nudges exist to prevent context loss.
You're deep in a session and realize: "I need to refactor the swagger definitions next time." You could add a task, but this isn't a work item: it's a note to future-you. You could jot it on the scratchpad, but scratchpad entries don't announce themselves.
How do you leave a message that your next session opens with?
Reminders surface automatically at session start: VERBATIM, every session, until you dismiss them.
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx remind CLI command Add a reminder (default action) ctx remind list CLI command Show all pending reminders ctx remind dismiss CLI command Remove a reminder by ID (or --all) /ctx-remind Skill Natural language interface to reminders","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-1-leave-a-reminder","level":3,"title":"Step 1: Leave a Reminder","text":"
Tell your agent what to remember, or run it directly:
You: \"remind me to refactor the swagger definitions\"\n\nAgent: [runs ctx remind \"refactor the swagger definitions\"]\n \"Reminder set:\n + [1] refactor the swagger definitions\"\n
Or from the terminal:
ctx remind \"refactor the swagger definitions\"\n
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-2-set-a-date-gate-optional","level":3,"title":"Step 2: Set a Date Gate (Optional)","text":"
If the reminder shouldn't fire until a specific date:
You: \"remind me to check the deploy logs after Tuesday\"\n\nAgent: [runs ctx remind \"check the deploy logs\" --after 2026-02-25]\n \"Reminder set:\n + [2] check the deploy logs (after 2026-02-25)\"\n
The reminder stays silent until that date, then fires every session.
The agent converts natural language dates ("tomorrow", "next week", "after the release on Friday") to YYYY-MM-DD. If it's ambiguous, it asks.
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-3-start-a-new-session","level":3,"title":"Step 3: Start a New Session","text":"
Next session, the reminder appears automatically before anything else:
```text
[1] refactor the swagger definitions
[3] review auth token expiry logic
[4] check deploy logs (after 2026-02-25, not yet due)
```
Date-gated reminders that haven't reached their date show (not yet due).
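The gate is a plain date comparison against the session's start date; a minimal sketch of the logic (the dictionary fields here are illustrative, not ctx's actual storage schema):

```python
from datetime import date

def due_reminders(reminders, today):
    """Split reminders into due and not-yet-due based on an
    optional 'after' date gate (no gate means always due)."""
    due, pending = [], []
    for r in reminders:
        gate = r.get("after")
        if gate is None or today >= date.fromisoformat(gate):
            due.append(r)
        else:
            pending.append(r)
    return due, pending

reminders = [
    {"id": 1, "text": "refactor the swagger definitions"},
    {"id": 4, "text": "check deploy logs", "after": "2026-02-25"},
]
# On Feb 22, reminder 1 fires and reminder 4 shows "not yet due";
# from Feb 25 onward, both fire every session until dismissed.
due, pending = due_reminders(reminders, date(2026, 2, 22))
```

Note the comparison is `>=`: the reminder becomes due in any session on or after the gate date, matching the session-scoped (not clock-scoped) semantics described below.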
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#using-ctx-remind-in-a-session","level":2,"title":"Using /ctx-remind in a Session","text":"
Invoke the /ctx-remind skill, then describe what you want:
```text
You: /ctx-remind remind me to update the API docs
You: /ctx-remind what reminders do I have?
You: /ctx-remind dismiss reminder 3
```
| You say (after /ctx-remind) | What the agent does |
|---|---|
| "remind me to update the API docs" | ctx remind "update the API docs" |
| "remind me next week to check staging" | ctx remind "check staging" --after 2026-03-02 |
| "what reminders do I have?" | ctx remind list |
| "dismiss reminder 3" | ctx remind dismiss 3 |
| "clear all reminders" | ctx remind dismiss --all |

## Reminders vs Scratchpad vs Tasks

| You want to... | Use |
|---|---|
| Leave a note that announces itself next session | ctx remind |
| Jot down a quick value or sensitive token | ctx pad |
| Track work with status and completion | TASKS.md |
| Record a decision or lesson for all sessions | Context files |
Decision guide:
If it should announce itself at session start → ctx remind
If it's a quiet note you'll check manually → ctx pad
If it's a work item you'll mark done → TASKS.md
Reminders Are Sticky Notes, Not Tasks
A reminder has no status, no priority, no lifecycle. It's a message to \"future you\" that fires until dismissed.
Reminders fire every session: Unlike nudges (which throttle to once per day), reminders repeat until you dismiss them. This is intentional: You asked to be reminded.
Date gating is session-scoped, not clock-scoped: --after 2026-02-25 means "don't show until sessions on or after Feb 25." It does not mean "alarm at midnight on Feb 25."
The agent handles date parsing: Say "next week" or "after Friday": The agent converts it to YYYY-MM-DD. The CLI only accepts the explicit date format.
Reminders are committed to git: They travel with the repo. If you switch machines, your reminders follow.
IDs never reuse: After dismissing reminder 3, the next reminder gets ID 4 (or higher). No confusion from recycled numbers.
Every session creates tombstone files in .context/state/ - small markers that suppress repeat hook nudges ("already checked context size", "already sent persistence reminder"). Over days and weeks, these accumulate into hundreds of files from long-dead sessions.
The files are harmless individually, but the clutter makes it harder to reason about state, and stale global tombstones can suppress nudges across sessions entirely.
```shell
ctx system prune --dry-run # preview what would be removed
ctx system prune # prune files older than 7 days
ctx system prune --days 1 # more aggressive: keep only today
```
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#commands-used","level":2,"title":"Commands Used","text":"Tool Type Purpose ctx system prune Command Remove old per-session state files ctx status Command Quick health overview including state dir","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#understanding-state-files","level":2,"title":"Understanding State Files","text":"
State files fall into two categories:
Session-scoped (contain a UUID in the filename): Created per-session to suppress repeat nudges. Safe to prune once the session ends.
Global (no UUID): Persist across sessions. ctx system prune preserves these automatically. Some are legitimate state (events.jsonl, memory-import.json); others may be stale tombstones that need manual review.
```shell
ctx system prune # older than 7 days
ctx system prune --days 3 # older than 3 days
ctx system prune --days 1 # older than 1 day (aggressive)
```
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#step-3-review-global-files","level":3,"title":"Step 3: Review Global Files","text":"
After pruning, check what prune preserved:
```shell
ls .context/state/ | grep -v '[0-9a-f]\{8\}-[0-9a-f]\{4\}'
```
Legitimate global files (keep):
- events.jsonl - event log
- memory-import.json - import tracking state
Stale global tombstones (safe to delete):
Files like backup-reminded, ceremony-reminded, version-checked with no session UUID are one-shot markers. If they are from a previous session, they are stale and can be removed manually.
Pruning active sessions is safe but noisy: If you prune a file belonging to a still-running session, the corresponding hook will re-fire its nudge on the next prompt. Minor UX annoyance, not data loss.
No context files are stored in state: The state directory contains only tombstones, counters, and diagnostic data. Nothing in .context/state/ affects your decisions, learnings, tasks, or conventions.
Test artifacts sneak in: Files like context-check-statstest or heartbeat-unknown are artifacts from development or testing. They lack UUIDs so prune preserves them. Delete manually.
Detecting and Fixing Drift: broader context maintenance including drift detection and archival
Troubleshooting: diagnostic workflow using ctx doctor and event logs
CLI Reference: system: full flag documentation for ctx system prune and related commands
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/system-hooks-audit/","level":1,"title":"Auditing System Hooks","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-problem","level":2,"title":"The Problem","text":"
ctx runs 14 system hooks behind the scenes: nudging your agent to persist context, warning about resource pressure, gating commits on QA. But these hooks are invisible by design. You never see them fire. You never know if they stopped working.
How do you verify your hooks are actually running, audit what they do, and get alerted when they go silent?
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tldr","level":2,"title":"TL;DR","text":"
```shell
ctx system check-resources # run a hook manually
ls -la .context/logs/ # check hook execution logs
ctx notify setup # get notified when hooks fire
```
Or ask your agent: "Are our hooks running?"
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx system <hook> CLI command Run a system hook manually ctx system resources CLI command Show system resource status ctx system stats CLI command Stream or dump per-session token stats ctx notify setup CLI command Configure webhook for audit trail ctx notify test CLI command Verify webhook delivery .ctxrcnotify.events Configuration Subscribe to relay for full hook audit .context/logs/ Log files Local hook execution ledger","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-are-system-hooks","level":2,"title":"What Are System Hooks?","text":"
System hooks are plumbing commands that ctx registers with your AI tool (Claude Code, Cursor, etc.) via the plugin's hooks.json. They fire automatically at specific events during your AI session:
| Event | When | Hooks |
|---|---|---|
| UserPromptSubmit | Before the agent sees your prompt | 10 check hooks + heartbeat |
| PreToolUse | Before the agent uses a tool | block-non-path-ctx, qa-reminder |
| PostToolUse | After a tool call succeeds | post-commit |
You never run these manually. Your AI tool runs them for you: That's the point.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-complete-hook-catalog","level":2,"title":"The Complete Hook Catalog","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#prompt-time-checks-userpromptsubmit","level":3,"title":"Prompt-Time Checks (UserPromptSubmit)","text":"
These fire before every prompt, but most are throttled to avoid noise.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-context-size-context-capacity-warning","level":4,"title":"check-context-size: Context Capacity Warning","text":"
What: Adaptive prompt counter. Silent for the first 15 prompts, then nudges with increasing frequency (every 5th, then every 3rd).
Why: Long sessions lose coherence. The nudge reminds both you and the agent to persist context before the window fills up.
Output: VERBATIM relay box with prompt count.
```text
┌─ Context Checkpoint (prompt #20) ────────────────
│ This session is getting deep. Consider wrapping up
│ soon. If there are unsaved learnings, decisions, or
│ conventions, now is a good time to persist them.
│ ⏱ Context window: ~45k tokens (~22% of 200k)
└──────────────────────────────────────────────────
```
Stats: Every prompt records token usage to .context/state/stats-{session}.jsonl. Monitor live with ctx system stats --follow or query with ctx system stats --json. Stats are recorded even during wrap-up suppression (event: suppressed).
Billing guard: When billing_token_warn is set in .ctxrc, a one-shot warning fires if session tokens exceed the threshold. This warning is independent of all other triggers - it fires even during wrap-up suppression.
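The adaptive cadence above can be modeled as a piecewise rule; a sketch assuming the switch from every-5th to every-3rd happens at prompt 30 (the exact cutover point is an assumption, not documented here):

```python
def should_nudge(prompt_count, ramp_at=30):
    """Adaptive throttle: silent for the first 15 prompts,
    then every 5th prompt, then (past ramp_at) every 3rd."""
    if prompt_count <= 15:
        return False
    if prompt_count < ramp_at:
        return prompt_count % 5 == 0
    return prompt_count % 3 == 0

# Which prompts in a 35-prompt session would trigger a checkpoint
fired = [n for n in range(1, 36) if should_nudge(n)]
```

The shape is the point: silence early (short sessions never see it), then increasingly frequent reminders as the context window fills.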
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-persistence-context-staleness-nudge","level":4,"title":"check-persistence: Context Staleness Nudge","text":"
What: Tracks when .context/*.md files were last modified. If too many prompts pass without a write, nudges the agent to persist.
Why: Sessions produce insights that evaporate if not recorded. This catches the "we talked about it but never wrote it down" failure mode.
Output: VERBATIM relay after 20+ prompts without a context file change.
```text
┌─ Persistence Checkpoint (prompt #20) ───────────
│ No context files updated in 20+ prompts.
│ Have you discovered learnings, made decisions,
│ established conventions, or completed tasks
│ worth persisting?
│
│ Run /ctx-wrap-up to capture session context.
└──────────────────────────────────────────────────
```
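The trigger is essentially a prompts-since-last-write counter; a minimal sketch (class and method names are illustrative, not ctx internals):

```python
class PersistenceCheck:
    """Nudge after `threshold` prompts pass without a context write."""

    def __init__(self, threshold=20):
        self.threshold = threshold
        self.last_write_prompt = 0

    def on_context_write(self, prompt_no):
        # Called whenever a .context/*.md file is modified.
        self.last_write_prompt = prompt_no

    def on_prompt(self, prompt_no):
        # True when the staleness nudge should fire.
        return (prompt_no - self.last_write_prompt) >= self.threshold

check = PersistenceCheck()
check.on_context_write(3)      # LEARNINGS.md updated at prompt 3
quiet = check.on_prompt(10)    # only 7 prompts since the write: silent
nudge = check.on_prompt(23)    # 20 prompts since the write: fires
```

Any context write resets the clock, so a session that persists as it goes never sees the checkpoint.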
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-ceremonies-session-ritual-adoption","level":4,"title":"check-ceremonies: Session Ritual Adoption","text":"
What: Scans your last 3 journal entries for /ctx-remember and /ctx-wrap-up usage. Nudges once per day if missing.
Why: Session ceremonies are the highest-leverage habit in ctx. This hook bootstraps the habit until it becomes automatic.
Output: Tailored nudge depending on which ceremony is missing.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-journal-unimported-session-reminder","level":4,"title":"check-journal: Unimported Session Reminder","text":"
What: Detects unimported Claude Code sessions and unenriched journal entries. Fires once per day.
Why: Exported sessions become searchable history. Unenriched entries lack metadata for filtering. Both decay in value over time.
Output: VERBATIM relay with counts and exact commands.
```text
┌─ Journal Reminder ─────────────────────────────
│ You have 3 new session(s) not yet exported.
│ 5 existing entries need enrichment.
│
│ Export and enrich:
│ ctx journal import --all
│ /ctx-journal-enrich-all
└────────────────────────────────────────────────
```
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-resources-system-resource-pressure","level":4,"title":"check-resources: System Resource Pressure","text":"
What: Monitors memory, swap, disk, and CPU load. Only fires at DANGER severity (memory >= 90%, swap >= 75%, disk >= 95%, load >= 1.5x CPU count).
Why: Resource exhaustion mid-session can corrupt work. This provides early warning to persist and exit.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-knowledge-knowledge-file-growth","level":4,"title":"check-knowledge: Knowledge File Growth","text":"
What: Counts entries in LEARNINGS.md, DECISIONS.md, and lines in CONVENTIONS.md. Fires once per day when thresholds are exceeded.
Why: Large knowledge files dilute agent context. 35 learnings compete for attention; 15 focused ones get applied. Thresholds are configurable in .ctxrc.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-version-binaryplugin-version-drift","level":4,"title":"check-version: Binary/Plugin Version Drift","text":"
What: Compares the ctx binary version against the plugin version. Fires once per day. Also checks encryption key age for rotation nudge.
Why: Version drift means hooks reference features the binary doesn't have. The key rotation nudge prevents indefinite key reuse.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-reminders-pending-reminder-relay","level":4,"title":"check-reminders: Pending Reminder Relay","text":"
What: Reads .context/reminders.json and surfaces any due reminders via VERBATIM relay. No throttle: fires every session until dismissed.
Why: Reminders are sticky notes to future-you. Unlike nudges (which throttle to once per day), reminders repeat deliberately until the user dismisses them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-freshness-technology-constant-staleness","level":4,"title":"check-freshness: Technology Constant Staleness","text":"
What: Stats the files listed in freshness_files in .ctxrc and warns if any haven't been modified in over 6 months. Daily throttle. Silent when no files are configured (opt-in via .ctxrc).
Why: Model capabilities evolve - token budgets, attention limits, and context window sizes that were accurate 6 months ago may no longer reflect best practices. This hook reminds you to review and touch the file to confirm values are still current.
Config (.ctxrc):
```yaml
freshness_files:
  - path: config/thresholds.yaml
    desc: Model token limits and batch sizes
    review_url: https://docs.example.com/limits # optional
```
Each entry has a path (relative to project root), desc (what constants live there), and optional review_url (where to check current values). When review_url is set, the nudge includes "Review against: {url}". When absent, just "Touch the file to mark it as reviewed."
Output: VERBATIM relay listing stale files, silent otherwise.
```text
┌─ Technology Constants Stale ──────────────────────
│ config/thresholds.yaml (210 days ago)
│ - Model token limits and batch sizes
│ Review against: https://docs.example.com/limits
│ Touch each file to mark it as reviewed.
└───────────────────────────────────────────────────
```
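The staleness test is a simple mtime age check, and "touching" a file resets it; a self-contained sketch of that logic (the 182-day constant approximates "6 months"):

```python
import os
import tempfile
import time

SIX_MONTHS = 182 * 24 * 3600  # staleness threshold, in seconds

def stale_files(paths, now=None, max_age=SIX_MONTHS):
    """Return the tracked files whose mtime is older than max_age.
    Touching a file (updating its mtime) marks it as reviewed."""
    now = now or time.time()
    return [p for p in paths if now - os.path.getmtime(p) > max_age]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "thresholds.yaml")
    open(path, "w").close()
    old = time.time() - 210 * 24 * 3600        # simulate: 210 days old
    os.utime(path, (old, old))
    was_stale = stale_files([path]) == [path]  # flagged as stale
    os.utime(path)                             # "touch": mark reviewed
    now_fresh = stale_files([path]) == []      # no longer flagged
```

This is why a plain `touch config/thresholds.yaml` is enough to silence the nudge once you've confirmed the values are current.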
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-map-staleness-architecture-map-drift","level":4,"title":"check-map-staleness: Architecture Map Drift","text":"
What: Checks whether map-tracking.json is older than 30 days and whether commits have touched internal/ since the last map refresh. A daily throttle prevents repeated nudges.
Why: Architecture documentation drifts silently as code evolves. This hook detects structural changes that the map hasn't caught up with and suggests running /ctx-architecture to refresh.
Output: VERBATIM relay when stale and modules changed, silent otherwise.
```text
┌─ Architecture Map Stale ────────────────────────────
│ ARCHITECTURE.md hasn't been refreshed since 2026-01-15
│ and there are commits touching 12 modules.
│ /ctx-architecture keeps architecture docs drift-free.
│
│ Want me to run /ctx-architecture to refresh?
└─────────────────────────────────────────────────────
```
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#heartbeat-session-heartbeat-webhook","level":4,"title":"heartbeat: Session Heartbeat Webhook","text":"
What: Fires on every prompt. Sends a webhook notification with prompt count, session ID, context modification status, and token usage telemetry. Never produces stdout.
Why: Other hooks only send webhooks when they "speak" (nudge/relay). When silent, you have no visibility into session activity. The heartbeat provides a continuous session-alive signal with token consumption data for observability dashboards or liveness monitoring.
Token fields (tokens, context_window, usage_pct) are included when usage data is available from the session JSONL file.
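A heartbeat payload might look like this (shape inferred from the fields described above; field presence and exact names beyond tokens, context_window, and usage_pct are assumptions, not a verbatim capture):

```json
{
  "event": "heartbeat",
  "session_id": "b854bd9c",
  "prompt_count": 12,
  "context_modified": false,
  "tokens": 45000,
  "context_window": 200000,
  "usage_pct": 22.5
}
```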
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tool-time-hooks-pretooluse-posttooluse","level":3,"title":"Tool-Time Hooks (PreToolUse / PostToolUse)","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#block-non-path-ctx-path-enforcement-hard-gate","level":4,"title":"block-non-path-ctx: PATH Enforcement (Hard Gate)","text":"
What: Blocks any Bash command that invokes ./ctx, ./dist/ctx, go run ./cmd/ctx, or an absolute path to ctx. Only PATH invocations are allowed.
Why: Enforces CONSTITUTION.md's invocation invariant. Running a dev-built binary in production context causes version confusion and silent behavior drift.
Output: Block response (prevents the tool call):
{\"decision\": \"block\", \"reason\": \"Use 'ctx' from PATH, not './ctx'...\"}\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#qa-reminder-pre-commit-qa-gate","level":4,"title":"qa-reminder: Pre-Commit QA Gate","text":"
What: Fires on every Edit tool use. Reminds the agent to lint and test the entire project before committing.
Why: Agents tend to "I'll test later" and then commit untested code. Repetition is intentional: the hook reinforces the habit on every edit, not just before commits.
Output: Agent directive with hard QA gate instructions.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#post-commit-context-capture-after-commit","level":4,"title":"post-commit: Context Capture After Commit","text":"
What: Fires after any git commit (excludes --amend). Prompts the agent to offer context capture (decision? learning?) and suggest running lints/tests before pushing.
Why: Commits are natural reflection points. The nudge converts mechanical git operations into context-capturing opportunities.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-the-local-event-log","level":2,"title":"Auditing Hooks via the Local Event Log","text":"
If you don't need an external audit trail, enable the local event log for a self-contained record of hook activity:
```yaml
# .ctxrc
event_log: true
```
Once enabled, every hook that fires writes an entry to .context/state/events.jsonl. Query it with ctx system events:
```shell
ctx system events # last 50 events
ctx system events --hook qa-reminder # filter by hook
ctx system events --session <id> # filter by session
ctx system events --json | jq '.' # raw JSONL for processing
```
The event log is local, queryable, and doesn't require any external service. For a full diagnostic workflow combining event logs with structural health checks, see Troubleshooting.
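If you want an aggregate the CLI doesn't provide, the JSONL is trivial to post-process; a sketch assuming each entry carries a `hook` field (the field names are an assumption; check a line of your own events.jsonl first):

```python
import json
from collections import Counter

def fires_per_hook(jsonl_lines):
    """Count how often each hook appears in an events.jsonl stream.
    Assumes one JSON object per line with a 'hook' field."""
    counts = Counter()
    for line in jsonl_lines:
        line = line.strip()
        if line:
            counts[json.loads(line)["hook"]] += 1
    return counts

# Synthetic sample standing in for open(".context/state/events.jsonl")
sample = [
    '{"hook": "qa-reminder", "session_id": "b854bd9c"}',
    '{"hook": "qa-reminder", "session_id": "b854bd9c"}',
    '{"hook": "check-context-size", "session_id": "b854bd9c"}',
]
counts = fires_per_hook(sample)
```

The same few lines, pointed at the real file, answer questions like "which hook is noisiest in this project?" without any external tooling.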
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-webhooks","level":2,"title":"Auditing Hooks via Webhooks","text":"
The most powerful audit setup pipes all hook output to a webhook, giving you a real-time external record of what your agent is being told.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-1-set-up-the-webhook","level":3,"title":"Step 1: Set Up the Webhook","text":"
```shell
ctx notify setup
# Enter your webhook URL (Slack, Discord, ntfy.sh, IFTTT, etc.)
```
See Webhook Notifications for service-specific setup.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-2-subscribe-to-relay-events","level":3,"title":"Step 2: Subscribe to relay Events","text":"
```yaml
# .ctxrc
notify:
  events:
    - relay # all hook output: VERBATIM relays, directives, blocks
    - nudge # just the user-facing VERBATIM relays
```
The relay event fires for every hook that produces output. This includes:
| Hook | Event sent |
|---|---|
| check-context-size | relay + nudge |
| check-persistence | relay + nudge |
| check-ceremonies | relay + nudge |
| check-journal | relay + nudge |
| check-resources | relay + nudge |
| check-knowledge | relay + nudge |
| check-version | relay + nudge |
| check-reminders | relay + nudge |
| check-freshness | relay + nudge |
| check-map-staleness | relay + nudge |
| heartbeat | heartbeat only |
| block-non-path-ctx | relay only |
| post-commit | relay only |
| qa-reminder | relay only |

### Step 3: Cross-Reference
With relay enabled, your webhook receives a JSON payload every time a hook fires:
{\n \"event\": \"relay\",\n \"message\": \"check-persistence: No context updated in 20+ prompts\",\n \"session_id\": \"b854bd9c\",\n \"timestamp\": \"2026-02-22T14:30:00Z\",\n \"project\": \"my-project\"\n}\n
This creates an external audit trail independent of the agent. You can now cross-verify: did the agent actually relay the checkpoint the hook told it to relay?
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#verifying-hooks-actually-fire","level":2,"title":"Verifying Hooks Actually Fire","text":"
Hooks are invisible. An invisible thing that breaks is indistinguishable from an invisible thing that never existed. Three verification methods, from simplest to most robust:
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-1-ask-the-agent","level":3,"title":"Method 1: Ask the Agent","text":"
The simplest check. After a few prompts into a session:
\"Did you receive any hook output this session? Print the last\ncontext checkpoint or persistence nudge you saw.\"\n
The agent should be able to recall recent hook output from its context window. If it says "I haven't received any hook output", either:
- The hooks aren't firing (check installation);
- The session is too short (hooks throttle early);
- The hooks fired but the agent absorbed them silently.
Limitation: You are trusting the agent to report accurately. Agents sometimes confabulate or miss context. Use this as a quick smoke test, not definitive proof.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-2-check-the-webhook-trail","level":3,"title":"Method 2: Check the Webhook Trail","text":"
If you have relay events enabled, check your webhook receiver. Every hook that fires sends a timestamped notification. No notification = no fire.
This is the ground truth. The webhook is called directly by the ctx binary, not by the agent. The agent cannot fake, suppress, or modify webhook deliveries.
Compare what the webhook received against what the agent claims to have relayed. Discrepancies mean the agent is absorbing nudges instead of surfacing them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-3-read-the-local-logs","level":3,"title":"Method 3: Read the Local Logs","text":"
Hooks that support logging write to .context/logs/, one log file per hook.
Logs are append-only and written by the ctx binary, not the agent.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#detecting-silent-hook-failures","level":2,"title":"Detecting Silent Hook Failures","text":"
The hardest failure mode: hooks that stop firing without error. The plugin config changes, a binary update drops a hook, or a PATH issue silently breaks execution. Nothing errors: The hook just never runs.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-staleness-signal","level":3,"title":"The Staleness Signal","text":"
If .context/logs/check-context-size.log has no entries newer than 5 days but you've been running sessions daily, something is wrong. The absence of evidence is evidence of absence: but only if you control for inactivity.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#false-positive-protection","level":3,"title":"False Positive Protection","text":"
A naive \"hooks haven't fired in N days\" alert fires incorrectly when you simply haven't used ctx. The correct check needs two inputs:
- Last hook fire time: from .context/logs/ or webhook history
- Last session activity: from journal entries or ctx journal source
If sessions are happening but hooks aren't firing, that's a real problem. If neither sessions nor hooks are happening, that's a vacation.
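The two-input rule can be written down directly; a sketch using timestamps (the 5-day window and the timestamp sources are illustrative):

```python
from datetime import datetime, timedelta

def hook_health(last_hook_fire, last_session, now, window_days=5):
    """Classify hook liveness. Recent sessions without recent hook
    fires means hooks are broken; no recent sessions at all is
    just inactivity, not a failure."""
    window = timedelta(days=window_days)
    sessions_recent = now - last_session <= window
    hooks_recent = now - last_hook_fire <= window
    if sessions_recent and not hooks_recent:
        return "broken"    # real problem: activity but silent hooks
    if not sessions_recent:
        return "inactive"  # vacation, not a failure
    return "ok"

now = datetime(2026, 2, 22)
# Sessions ran yesterday, but no hook has fired in 12 days: broken.
state = hook_health(last_hook_fire=datetime(2026, 2, 10),
                    last_session=datetime(2026, 2, 21), now=now)
```

Feeding it mtimes from .context/logs/ and journal entry dates turns the staleness signal into an alert that stays quiet during vacations.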
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-to-check","level":3,"title":"What to Check","text":"
When you suspect hooks aren't firing:
```shell
# 1. Verify the plugin is installed
ls ~/.claude/plugins/

# 2. Check hook registration
cat ~/.claude/plugins/ctx/hooks.json | head -20

# 3. Run a hook manually to see if it errors
echo '{"session_id":"test"}' | ctx system check-context-size

# 4. Check for PATH issues
which ctx
ctx --version
```
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tips","level":2,"title":"Tips","text":"
Start with nudge, graduate to relay: The nudge event covers user-facing VERBATIM relays. Add relay when you want full visibility into agent directives and hard gates.
Webhooks are your trust anchor: The agent can ignore a nudge, but it can't suppress the webhook. If the webhook fired and the agent didn't relay, you have proof of a compliance gap.
Hooks are throttled by design: Most check hooks fire once per day or use adaptive frequency. Don't expect a notification every prompt: Silence usually means the throttle is working, not that the hook is broken.
Daily markers live in .context/state/: Throttle files are stored in .context/state/ alongside other project-scoped state. If you need to force a hook to re-fire during testing, delete the corresponding marker file.
The QA reminder is intentionally noisy: Unlike other hooks, qa-reminder fires on every Edit call with no throttle. This is deliberate: Commit quality degrades when the reminder fades from salience.
Log files are safe to commit: .context/logs/ contains only timestamps, session IDs, and status keywords. No secrets, no code.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#next-up","level":2,"title":"Next Up","text":"
Detecting and Fixing Drift →: Keep context files accurate as your codebase evolves.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Customizing Hook Messages: override what hooks say without changing what they do
Webhook Notifications: setting up and configuring the webhook system
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Detecting and Fixing Drift: structural checks that complement runtime hook auditing
CLI Reference: full ctx system command reference
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/task-management/","level":1,"title":"Tracking Work Across Sessions","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-problem","level":2,"title":"The Problem","text":"
You have work that spans multiple sessions. Tasks get added during one session, partially finished in another, and completed days later.
Without a system, follow-up items fall through the cracks, priorities drift, and you lose track of what was done versus what still needs doing. TASKS.md grows cluttered with completed checkboxes that obscure the remaining work.
How do you manage work items that span multiple sessions without losing context?
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tldr","level":2,"title":"TL;DR","text":"
ctx add task "Fix race condition in session cooldown" --priority high\nctx task complete "agent cooldown"    # or by number: ctx task complete 3\nctx task snapshot "before-refactor"   # backup before risky changes\nctx task archive                      # move completed tasks to the archive\n
Read on for the full workflow and conversational patterns.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx add task Command Add a new task to TASKS.mdctx task complete Command Mark a task as done by number or text ctx task snapshot Command Create a point-in-time backup of TASKS.mdctx task archive Command Move completed tasks to archive file /ctx-add-task Skill AI-assisted task creation with validation /ctx-archive Skill AI-guided archival with safety checks /ctx-next Skill Pick what to work on based on priorities","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-1-add-tasks-with-priorities","level":3,"title":"Step 1: Add Tasks with Priorities","text":"
Every piece of follow-up work gets a task. Use ctx add task from the terminal or /ctx-add-task from your AI assistant. Tasks should start with a verb and be specific enough that someone unfamiliar with the session could act on them.
# High-priority bug found during code review\nctx add task \"Fix race condition in session cooldown\" --priority high\n\n# Medium-priority feature work\nctx add task \"Add --format json flag to ctx status for CI integration\" --priority medium\n\n# Low-priority cleanup\nctx add task \"Remove deprecated --raw flag from ctx load\" --priority low\n
The /ctx-add-task skill validates your task before recording it. It checks that the description is actionable, not a duplicate, and specific enough for someone else to pick up.
If you say \"fix the bug,\" it will ask you to clarify which bug and where.
Tasks Are Often Created Proactively
In practice, many tasks are created proactively by the agent rather than by explicit CLI commands.
After completing a feature, the agent will often identify follow-up work (tests, docs, edge cases, error handling) and offer to add it as tasks.
You do not need to dictate ctx add task commands; the agent picks up on work context and suggests tasks naturally.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-2-organize-with-phase-sections","level":3,"title":"Step 2: Organize with Phase Sections","text":"
Tasks live in phase sections inside TASKS.md.
Phases provide logical groupings that preserve order and enable replay.
A task does not move between sections. It stays in its phase permanently, and status is tracked via checkboxes and inline tags.
## Phase 1: Core CLI\n\n- [x] Implement ctx add command `#done:2026-02-01-143022`\n- [x] Implement ctx task complete command `#done:2026-02-03-091544`\n- [ ] Add --section flag to ctx add task `#priority:medium`\n\n## Phase 2: AI Integration\n\n- [ ] Implement ctx agent cooldown `#priority:high` `#in-progress`\n- [ ] Add ctx watch XML parsing `#priority:medium`\n - Blocked by: Need to finalize agent output format\n\n## Backlog\n\n- [ ] Performance optimization for large TASKS.md files `#priority:low`\n- [ ] Add metrics dashboard to ctx status `#priority:deferred`\n
Use --section when adding a task to a specific phase:
ctx add task \"Add ctx watch XML parsing\" --priority medium --section \\\n \"Phase 2: AI Integration\"\n
Without --section, the task is inserted before the first unchecked task in TASKS.md.
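Because the checkbox-and-tag format above is plain Markdown, task status is easy to inspect with standard tools. A minimal sketch (the file path and entries are illustrative, not part of a real project):

```shell
# Write a small TASKS.md-style fragment (entries are illustrative)
cat > /tmp/tasks-sample.md <<'EOF'
- [x] Implement ctx add command `#done:2026-02-01-143022`
- [ ] Add --section flag to ctx add task `#priority:medium`
- [ ] Implement ctx agent cooldown `#priority:high` `#in-progress`
EOF

# Count pending vs completed entries by checkbox state
pending=$(grep -c '^- \[ \]' /tmp/tasks-sample.md)
completed=$(grep -c '^- \[x\]' /tmp/tasks-sample.md)
echo "pending=$pending completed=$completed"   # → pending=2 completed=1
```

This is also why the format survives across tools: anything that can read Markdown checkboxes can read task state.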
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
At the start of a session, or after finishing a task, use /ctx-next to get prioritized recommendations.
The skill reads TASKS.md, checks recent sessions, and ranks candidates using explicit priority, blocking status, in-progress state, momentum from recent work, and phase order.
You can also ask naturally: \"what should we work on?\" or \"what's the highest priority right now?\"
/ctx-next\n
The output looks like this:
**1. Implement ctx agent cooldown** `#priority:high`\n\n Still in-progress from yesterday's session. The tombstone file approach is\n half-built. Finishing is cheaper than context-switching.\n\n**2. Add --section flag to ctx add task** `#priority:medium`\n\n Last Phase 1 item. Quick win that unblocks organized task entry.\n\n---\n\n*Based on 8 pending tasks across 3 phases.\n\nLast session: agent-cooldown (2026-02-06).*\n
In-progress tasks almost always come first:
Finishing existing work takes priority over starting new work.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-4-complete-tasks","level":3,"title":"Step 4: Complete Tasks","text":"
When a task is done, mark it complete by number or partial text match:
# By task number (as shown in TASKS.md)\nctx task complete 3\n\n# By partial text match\nctx task complete \"agent cooldown\"\n
The task's checkbox changes from [ ] to [x] and a #done timestamp is added. Tasks are never deleted: they stay in their phase section so history is preserved.
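The transformation is simple enough to sketch with sed. This is illustrative only; the real command also handles task numbering and partial text matching:

```shell
# One pending task line, as it appears in TASKS.md (illustrative)
line='- [ ] Implement ctx agent cooldown `#priority:high`'
stamp='2026-02-08-143000'   # illustrative timestamp

# Flip the checkbox and append a #done tag, conceptually what completion does
completed=$(echo "$line" | sed "s/^- \[ \]/- [x]/; s/\$/ \`#done:${stamp}\`/")
echo "$completed"
```

The line stays in place; only the checkbox and trailing tag change, which is what preserves history.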
Be Conversational
You rarely need to run ctx task complete yourself during an interactive session.
When you say something like \"the rate limiter is done\" or \"we finished that,\" the agent marks the task complete and moves on to suggesting what is next.
The CLI commands are most useful for manual housekeeping, scripted workflows, or when you want precision.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-5-snapshot-before-risky-changes","level":3,"title":"Step 5: Snapshot Before Risky Changes","text":"
Before a major refactor or any change that might break things, snapshot your current task state. This creates a copy of TASKS.md in .context/archive/ without modifying the original.
# Default snapshot\nctx task snapshot\n\n# Named snapshot (recommended before big changes)\nctx task snapshot \"before-refactor\"\n
This creates a file like .context/archive/tasks-before-refactor-2026-02-08-1430.md. If the refactor goes sideways and you need to confirm what the task state looked like before you started, the snapshot is there.
Snapshots are cheap: Take them before any change you might want to undo or review later.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-6-archive-when-tasksmd-gets-cluttered","level":3,"title":"Step 6: Archive When TASKS.md Gets Cluttered","text":"
After several sessions, TASKS.md accumulates completed tasks that make it hard to see what is still pending.
Use ctx task archive to move all [x] items to a timestamped archive file.
Start with a dry run to preview what will be moved:
ctx task archive --dry-run\n
Then archive:
ctx task archive\n
Completed tasks move to .context/archive/tasks-2026-02-08.md. Phase headers are preserved in the archive for traceability. Pending tasks ([ ]) remain in TASKS.md.
The /ctx-archive skill adds two safety checks before archiving: it verifies that completed tasks are genuinely cluttering the view and that nothing was marked [x] prematurely.
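Conceptually, the split the archive performs looks like this sketch on a throwaway file (the real command also preserves phase headers and timestamps the archive file):

```shell
# A cluttered task list (illustrative entries)
cat > /tmp/tasks.md <<'EOF'
- [x] Implement ctx add command
- [x] Implement ctx task complete command
- [ ] Add --section flag to ctx add task
EOF

# Completed entries go to the archive; pending entries stay behind
grep '^- \[x\]' /tmp/tasks.md > /tmp/tasks-archive.md
grep '^- \[ \]' /tmp/tasks.md > /tmp/tasks-remaining.md

archived=$(grep -c . /tmp/tasks-archive.md)
remaining=$(grep -c . /tmp/tasks-remaining.md)
echo "archived=$archived remaining=$remaining"   # → archived=2 remaining=1
```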
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#letting-the-agent-drive","level":2,"title":"Letting the Agent Drive","text":"
The steps above show the CLI commands for task management.
In practice, most task management happens conversationally:
An agent that has loaded the context files does not need you to type ctx add task. It tracks work naturally and offers the right operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#use-natural-language-instead-of-commands","level":3,"title":"Use Natural Language Instead of Commands","text":"
These conversational prompts replace explicit commands during interactive sessions:
Instead of typing... Just say... ctx add task \"Write tests for X\" \"We should add tests for this: track that?\" /ctx-next \"What should we work on?\" ctx task complete \"rate limiting\" \"The rate limiter is done, what's next?\" ctx task archive \"TASKS.md is getting long, can you clean it up?\" ctx add task ... && ctx add task ... \"Add follow-ups for what we just built.\"
The agent translates these into the right ctx operations behind the scenes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-creates-tasks-proactively","level":3,"title":"The Agent Creates Tasks Proactively","text":"
After completing a piece of work, the agent does not wait for you to dictate follow-up items. It identifies them from the work context and offers:
**Agent**: \"I've finished implementing the rate limiter middleware. I noticed\nthree follow-up items:\n\n1. Write integration tests for the rate limiter (the unit tests pass but\n there's no end-to-end coverage)\n2. Add rate limit headers to API responses (X-RateLimit-Remaining, etc.)\n3. Document the rate limit configuration in the ops runbook\n\nWant me to add these as tasks?\"\n
This is how an agent behaves once it has read the Agent Playbook.
The playbook's \"Work, Reflect, Persist\" cycle means the agent asks itself after every milestone: Did I just complete a task? Did new work emerge?
When the answer is yes, it acts.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#asking-the-agent-what-to-capture","level":3,"title":"Asking the Agent What to Capture","text":"
You don't always know what's worth persisting.
Instead of reviewing the session yourself and typing specific commands, ask the agent to do the thinking:
You say What happens \"What tasks came out of this session?\" Agent reviews the work, identifies follow-ups, and offers to add them \"Are there any tasks, learnings, or decisions worth adding?\" Agent scans the full session for all three types and proposes a batch \"What should we remember from today?\" Agent surfaces everything persist-worthy: not just tasks \"Anything we should track before we move on?\" Agent checks for loose ends specific to the work just completed
This is especially powerful after long or exploratory sessions where multiple threads were touched.
The agent has the full conversation in context and can spot follow-ups you might miss:
You: \"Are there any tasks, learnings, or decisions worth adding\n from this session?\"\n\nAgent: \"Yes. Here's what I'd suggest:\n\n Tasks:\n - Add retry logic to the webhook delivery (we discussed it when\n the timeout test failed but never tracked it)\n - Update the API docs to reflect the new rate limit headers\n\n Learnings:\n - net/http's default client has no timeout: we burned 15 minutes\n on that before adding an explicit 30s timeout\n\n Decisions:\n - We chose token-bucket over sliding-window for rate limiting\n because it handles bursts better and the library is simpler\n\n Want me to add all of these?\"\n\nYou: \"All of them.\"\n
The agent fills in structured fields from what it observed during the session. You get the same quality entries as if you typed verbose commands, without needing to remember what happened or how to format it.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-completes-tasks-and-moves-on","level":3,"title":"The Agent Completes Tasks and Moves On","text":"
When you confirm that work is finished, or when the agent can see that it is, it marks the task done and suggests the next step:
You: \n\"The cooldown logic is working now.\"\n\nAgent: \n\"Marked 'Implement ctx agent cooldown' as done. Your highest-\npriority remaining task is 'Add --section flag to ctx add task'.\n\nIt's the last item in Phase 1 and a quick win. Want to start on that?\"\n
The agent chains completion into next-task selection.
One sentence from you triggers two operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#end-of-session-capture","level":3,"title":"End-of-Session Capture","text":"
At the end of a session, the agent reviews what happened and catches loose ends:
You: \n\"I think we're good for today.\"\n\nAgent: \n\"Before we wrap up: during the middleware work you mentioned\nwanting to add metrics for rate limit hits. That never made it into TASKS.md.\n\nWant me to add it as a task so it's there next session?\"\n
This is the proactive persistence model at work.
The agent monitors the conversation for mentioned-but-not-tracked work and surfaces it before context is lost.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#a-conversational-session-example","level":3,"title":"A Conversational Session Example","text":"
Here is what task management looks like in a real session when the agent is driving:
You: What's left to do?\nAgent: You have 5 pending tasks across 2 phases. The highest priority is\n \"Implement ctx agent cooldown\": it's marked in-progress from\n yesterday and half-built. Want to pick that up?\n\nYou: Yeah, let's finish it.\n [... 20 minutes of implementation work ...]\n\nYou: That's working. Tests pass.\nAgent: Nice. I'll mark \"Implement ctx agent cooldown\" as done.\n I noticed two things from the implementation:\n 1. The tombstone file cleanup should happen on a timer, not just\n at startup. Want me to add that as a task?\n 2. We learned that file locks don't work reliably on NFS. I'll\n add that to LEARNINGS.md.\n\nYou: Yes to both. What's next?\nAgent: Added the cleanup task as high priority. Next up is \"Add --section\n flag to ctx add task\": last item in Phase 1. It should be\n straightforward since the flag parsing pattern already exists\n in the codebase. Ready?\n
It's All Conversational
Notice what did not happen: The user never typed a ctx command.
The agent handled task completion, follow-up creation, learning capture, and next-task selection: all from natural conversation.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
# Add a task\nctx add task \"Implement rate limiting for API endpoints\" --priority high\n\n# Add to a specific phase\nctx add task \"Write integration tests for rate limiter\" --section \"Phase 2\"\n\n# See what to work on\n# (from AI assistant) /ctx-next\n\n# Mark done by text\nctx task complete \"rate limiting\"\n\n# Mark done by number\nctx task complete 5\n\n# Snapshot before a risky refactor\nctx task snapshot \"before-middleware-rewrite\"\n\n# Archive completed tasks when the list gets long\nctx task archive --dry-run # preview first\nctx task archive # then archive\n
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tips","level":2,"title":"Tips","text":"
Start tasks with a verb: \"Add,\" \"Fix,\" \"Implement,\" \"Investigate\": not just a topic like \"Authentication.\"
Include the why in the task description. Future sessions lack the context of why you added the task. \"Add rate limiting\" is worse than \"Add rate limiting to prevent abuse on the public API after the load test showed 10x traffic spikes.\"
Use #in-progress sparingly. Only one or two tasks should carry this tag at a time. If everything is in-progress, nothing is.
Snapshot before, not after. The point of a snapshot is to capture the state before a change, not to celebrate what you just finished.
Archive regularly. Once completed tasks outnumber pending ones, it is time to archive. A clean TASKS.md helps both you and your AI assistant focus.
Never delete tasks. Mark them [x] (completed) or [-] (skipped with a reason). Deletion breaks the audit trail.
Trust the agent's task instincts. When the agent suggests follow-up items after completing work, it is drawing on the full context of what just happened.
Conversational prompts beat commands in interactive sessions. Saying \"what should we work on?\" is faster and more natural than running /ctx-next. Save explicit commands for scripts, CI, and unattended runs.
Let the agent chain operations. A single statement like \"that's done, what's next?\" can trigger completion, follow-up identification, and next-task selection in one flow.
Review proactive task suggestions before moving on. The best follow-ups come from items spotted in-context right after the work completes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#next-up","level":2,"title":"Next Up","text":"
Using the Scratchpad →: Store short-lived sensitive notes in an encrypted scratchpad.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle including task management in context
Persisting Decisions, Learnings, and Conventions: capturing the \"why\" behind your work
Detecting and Fixing Drift: keeping TASKS.md accurate over time
CLI Reference: full documentation for ctx add, ctx task complete, ctx task
Context Files: TASKS.md: format and conventions for TASKS.md
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/troubleshooting/","level":1,"title":"Troubleshooting","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-problem","level":2,"title":"The Problem","text":"
Something isn't working: a hook isn't firing, nudges are too noisy, context seems stale, or the agent isn't following instructions. The information to diagnose it exists (across status, drift, event logs, hook config, and session history), but assembling it manually is tedious.
ctx doctor # structural health check\nctx system events --last 20 # recent hook activity\n# or ask: \"something seems off, can you diagnose?\"\n
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx doctor CLI command Structural health report ctx doctor --json CLI command Machine-readable health report ctx system events CLI command Query local event log /ctx-doctor Skill Agent-driven diagnosis with analysis","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#quick-check-ctx-doctor","level":3,"title":"Quick Check: ctx doctor","text":"
Run ctx doctor for an instant structural health report. It checks context initialization, required files, drift, hook configuration, event logging, webhooks, reminders, task completion ratio, and context token size: all in one pass:
ctx doctor\n
ctx doctor\n==========\n\nStructure\n ✓ Context initialized (.context/)\n ✓ Required files present (4/4)\n\nQuality\n ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)\n\nHooks\n ✓ hooks.json valid (14 hooks registered)\n ○ Event logging disabled (enable with event_log: true in .ctxrc)\n\nState\n ✓ No pending reminders\n ⚠ Task completion ratio high (18/22 = 82%): consider archiving\n\nSize\n ✓ Context size: ~4200 tokens (budget: 8000)\n\nSummary: 2 warnings, 0 errors\n
Warnings are non-critical but worth fixing. Errors need attention. Informational notes (○) flag optional features that aren't enabled.
For power users: ctx system events with filters gives direct access to the event log.
# Last 50 events (default)\nctx system events\n\n# Events from a specific session\nctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6\n\n# Only QA reminder events\nctx system events --hook qa-reminder\n\n# Raw JSONL for jq processing\nctx system events --json | jq '.message'\n\n# Include rotated (older) events\nctx system events --all --last 100\n
Filters use AND logic: --hook qa-reminder --session abc123 returns only QA reminder events from that specific session.
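The AND semantics are easy to picture against the raw JSONL. The field names below are illustrative; the real log schema may differ:

```shell
# Three sample events in JSONL form (schema is illustrative)
cat > /tmp/events.jsonl <<'EOF'
{"hook":"qa-reminder","session_id":"abc123","message":"gate shown"}
{"hook":"qa-reminder","session_id":"def456","message":"gate shown"}
{"hook":"check-persistence","session_id":"abc123","message":"nudge"}
EOF

# --hook qa-reminder --session abc123: both filters must match
matches=$(grep '"hook":"qa-reminder"' /tmp/events.jsonl \
  | grep -c '"session_id":"abc123"')
echo "$matches"   # → 1
```

Only the first event satisfies both filters; matching either filter alone is not enough.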
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#common-problems","level":2,"title":"Common Problems","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#ctx-not-initialized","level":3,"title":"\"ctx: not initialized\"","text":"
Symptoms: Any ctx command fails with ctx: not initialized - run \"ctx init\" first.
Cause: You're running ctx in a directory without an initialized .context/ directory. This guard runs on all user-facing commands to prevent confusing downstream errors.
Fix:
ctx init # create .context/ with template files\nctx init --minimal # or just the essentials (CONSTITUTION, TASKS, DECISIONS)\n
Commands that work without initialization: ctx init, ctx setup, ctx doctor, and help-only grouping commands (ctx, ctx system).
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#a-hook-isnt-firing","level":3,"title":"\"A hook isn't firing\"","text":"
Symptoms: No nudges appearing, webhook silent, event log shows no entries for the expected hook.
Diagnosis:
# 1. Check if ctx is installed and on PATH\nwhich ctx && ctx --version\n\n# 2. Check if the hook is registered\ngrep \"check-persistence\" ~/.claude/plugins/ctx/hooks.json\n\n# 3. Run the hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-persistence\n\n# 4. Check event log for the hook (if enabled)\nctx system events --hook check-persistence\n
Common causes:
Plugin is not installed: run ctx init --claude to reinstall
PATH issue: the hook invokes ctx from PATH; ensure it resolves
Throttle active: most hooks fire once per day: check .context/state/ for daily marker files
Hook silenced: a custom message override may be an empty file: check ctx system message list for overrides
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#too-many-nudges","level":3,"title":"\"Too many nudges\"","text":"
Symptoms: The agent is overwhelmed with hook output. Context checkpoints, persistence reminders, and QA gates fire constantly.
Diagnosis:
# Check how often hooks fired recently\nctx system events --last 50\n\n# Count fires per hook\nctx system events --json | jq -r '.detail.hook // \"unknown\"' \\\n | sort | uniq -c | sort -rn\n
Common causes:
QA reminder is noisy by design: it fires on every Edit call with no throttle. This is intentional. If it's too much, silence it with an empty override: ctx system message edit qa-reminder gate, then empty the file
Long session: context checkpoint fires with increasing frequency after prompt 15. This is the system telling you the session is getting long: consider wrapping up
Short throttle window: if you deleted marker files in .context/state/, daily-throttled hooks will re-fire
Outdated Claude Code plugin: Update the plugin using Claude Code → /plugin → \"Marketplace\"
ctx version mismatch: Build (or download) and install the latest ctx version.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#context-seems-stale","level":3,"title":"\"Context seems stale\"","text":"
Symptoms: The agent references outdated information, paths that don't exist, or decisions that were reversed.
Diagnosis:
# Structural drift check\nctx drift\n\n# Full doctor check (includes drift + more)\nctx doctor\n\n# Check when context files were last modified\nctx status --verbose\n
Common causes:
Drift accumulated: stale path references in ARCHITECTURE.md or CONVENTIONS.md. Fix with ctx drift --fix or ask the agent to clean up.
Task backlog: too many completed tasks diluting active context. Archive with ctx task archive or ctx compact --archive.
Large context files: LEARNINGS.md with 40+ entries competes for attention. Consolidate with /ctx-consolidate.
Missing session ceremonies: if /ctx-remember and /ctx-wrap-up aren't being used, context doesn't get refreshed. See Session Ceremonies.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-agent-isnt-following-instructions","level":3,"title":"\"The agent isn't following instructions\"","text":"
Symptoms: The agent ignores conventions, forgets decisions, or acts contrary to CONSTITUTION.md rules.
Diagnosis:
# Check context token size: Is it too large for the model?\nctx doctor --json | jq '.results[] | select(.name == \"context_size\")'\n\n# Check if context is actually being loaded\nctx system events --hook context-load-gate\n
Common causes:
Context too large: if total tokens exceed the model's effective attention, instructions get diluted. Check ctx doctor for the size check. Compact with ctx compact --archive.
Context not loading: if context-load-gate hasn't fired, the agent may not have received context. Verify the hook is registered.
Conflicting instructions: CONVENTIONS.md says one thing, AGENT_PLAYBOOK.md says another. Review both files for consistency.
Agent drift: the agent's behavior diverges from instructions over long sessions. This is normal. Use /ctx-reflect to re-anchor, or start a new session.
For full diagnosis, two prerequisites apply: ctx initialized (ctx init), and event logging enabled (event_log: true in .ctxrc, optional but recommended).
Event logging is not required for ctx doctor or /ctx-doctor to work. Both degrade gracefully: structural checks run regardless, and the skill notes when event data is unavailable.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#tips","level":2,"title":"Tips","text":"
Start with ctx doctor: It's the fastest way to get a comprehensive health picture. Save event log inspection for when you need to understand when and how often something happened.
Enable event logging early: The log is opt-in and low-cost (~250 bytes per event, 1MB rotation cap). Enable it before you need it: Diagnosing a problem without historical data is much harder.
Use the skill for correlation: ctx doctor tells you what is wrong. /ctx-doctor tells you why by correlating structural findings with event patterns. The agent can spot connections that individual commands miss.
Event log is gitignored: It's machine-local diagnostic data, not project context. Different machines produce different event streams.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#see-also","level":2,"title":"See Also","text":"
Auditing System Hooks: the complete hook catalog and webhook-based audit trails
Detecting and Fixing Drift: structural and semantic drift detection and repair
Webhook Notifications: push notifications for hook activity
ctx doctor CLI: full command reference
ctx system events CLI: event log query reference
/ctx-doctor skill: agent-driven diagnosis
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/webhook-notifications/","level":1,"title":"Webhook Notifications","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-problem","level":2,"title":"The Problem","text":"
Your agent runs autonomously (loops, implements, releases) while you are away from the terminal. You have no way to know when it finishes, hits a limit, or when a hook fires a nudge.
How do you get notified about agent activity without watching the terminal?
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tldr","level":2,"title":"TL;DR","text":"
ctx notify setup # configure webhook URL (encrypted)\nctx notify test # verify delivery\n# Hooks auto-notify on: session-end, loop-iteration, resource-danger\n
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx notify setup CLI command Configure and encrypt webhook URL ctx notify test CLI command Send a test notification ctx notify --event <name> \"msg\" CLI command Send a notification from scripts/skills .ctxrcnotify.events Configuration Filter which events reach your webhook","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-1-get-a-webhook-url","level":3,"title":"Step 1: Get a Webhook URL","text":"
Any service that accepts HTTP POST with JSON works. Common options:
Service How to get a URL IFTTT Create an applet with the \"Webhooks\" trigger Slack Create an Incoming Webhook Discord Channel Settings > Integrations > Webhooks ntfy.sh Use https://ntfy.sh/your-topic (no signup) Pushover Use API endpoint with your user key
The URL contains auth tokens. ctx encrypts it; it never appears in plaintext in your repo.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-2-configure-the-webhook","level":3,"title":"Step 2: Configure the Webhook","text":"
ctx notify setup\n
This encrypts the URL with AES-256-GCM using the same key as the scratchpad (~/.ctx/.ctx.key). The encrypted file (.context/.notify.enc) is safe to commit. The key lives outside the project and is never committed.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-3-test-it","level":3,"title":"Step 3: Test It","text":"
ctx notify test\n
If you see No webhook configured, run ctx notify setup first.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-4-configure-events","level":3,"title":"Step 4: Configure Events","text":"
Notifications are opt-in: no events are sent unless you configure an event list in .ctxrc:
# .ctxrc\nnotify:\n events:\n - loop # loop completion or max-iteration hit\n - nudge # VERBATIM relay hooks (context checkpoint, persistence, etc.)\n - relay # all hook output (verbose, for debugging)\n - heartbeat # every-prompt session-alive signal with metadata\n
Only listed events fire. Omitting an event silently drops it.
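The allowlist semantics above can be sketched in a few lines. This is a hypothetical helper for illustration, not ctx's actual code:

```go
package main

import "fmt"

// shouldSend reports whether an event passes the opt-in filter:
// only events explicitly listed in notify.events are delivered,
// and an empty list means nothing fires at all.
func shouldSend(event string, allowed []string) bool {
	for _, name := range allowed {
		if name == event {
			return true
		}
	}
	return false
}

func main() {
	events := []string{"loop", "nudge"}
	fmt.Println(shouldSend("loop", events))  // listed: delivered
	fmt.Println(shouldSend("relay", events)) // omitted: silently dropped
}
```

Note that an unknown event name in the list is not an error; it simply never matches anything.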
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-5-use-in-your-own-skills","level":3,"title":"Step 5: Use in Your Own Skills","text":"
Add ctx notify calls to any skill or script:
# In a release skill\nctx notify --event release \"v1.2.0 released successfully\" 2>/dev/null || true\n\n# In a backup script\nctx notify --event backup \"Nightly backup completed\" 2>/dev/null || true\n
The 2>/dev/null || true suffix ensures the notification never breaks your script: If there's no webhook or the HTTP call fails, it's a silent no-op.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-types","level":2,"title":"Event Types","text":"
ctx fires these events automatically:
Event Source When loop Loop script Loop completes or hits max iterations nudge System hooks VERBATIM relay nudge is emitted (context checkpoint, persistence, ceremonies, journal, resources, knowledge, version) relay System hooks Any hook output (VERBATIM relays, agent directives, block responses) heartbeat System hook Every prompt: session-alive signal with prompt count and context modification status testctx notify test Manual test notification (custom) Your skills You wire ctx notify --event <name> in your own scripts
nudge vs relay: The nudge event fires only for VERBATIM relay hooks (the ones the agent is instructed to show verbatim). The relay event fires for all hook output: VERBATIM relays, agent directives, and hard gates. Subscribe to relay for debugging (\"did the agent get the post-commit nudge?\"), nudge for user-facing assurance (\"was the checkpoint emitted?\").
Webhooks as a Hook Audit Trail
Subscribe to relay events and you get an external record of every hook that fires, independent of the agent.
This lets you verify hooks are running and catch cases where the agent absorbs a nudge instead of surfacing it.
See Auditing System Hooks for the full workflow.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#payload-format","level":2,"title":"Payload Format","text":"
The detail field is a structured template reference containing the hook name, variant, and any template variables. This lets receivers filter by hook or variant without parsing rendered text. The field is omitted when no template reference applies (e.g. custom ctx notify calls).
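A receiver can use the detail field to filter by hook without touching the rendered text. The sketch below assumes JSON field names (event, message, detail, hook, variant, vars) for illustration; check your own payloads for the exact schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// detail mirrors the structured template reference described above.
// Field names are assumptions for illustration.
type detail struct {
	Hook    string            `json:"hook"`
	Variant string            `json:"variant"`
	Vars    map[string]string `json:"vars"`
}

// payload is a receiver-side view of a notification. Detail is a
// pointer so payloads without a template reference (e.g. custom
// ctx notify calls) decode to nil.
type payload struct {
	Event   string  `json:"event"`
	Message string  `json:"message"`
	Detail  *detail `json:"detail"`
}

// hookOf returns the originating hook name, or "" for custom
// notifications that carry no template reference.
func hookOf(raw []byte) string {
	var p payload
	if err := json.Unmarshal(raw, &p); err != nil || p.Detail == nil {
		return ""
	}
	return p.Detail.Hook
}

func main() {
	relay := []byte(`{"event":"relay","message":"...","detail":{"hook":"qa-reminder","variant":"default"}}`)
	custom := []byte(`{"event":"release","message":"v1.2.0 released"}`)
	fmt.Println(hookOf(relay))  // filter by hook, no text parsing
	fmt.Println(hookOf(custom)) // empty: detail omitted
}
```

Using a pointer for the optional object keeps "field omitted" distinguishable from "field present but empty".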
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#heartbeat-payload","level":3,"title":"Heartbeat Payload","text":"
The heartbeat event fires on every prompt with session metadata and token usage telemetry:
The tokens, context_window, and usage_pct fields are included when token data is available from the session JSONL file. They are omitted when no usage data has been recorded yet (e.g. first prompt).
Unlike other events, heartbeat fires every prompt (not throttled). Use it for observability dashboards or liveness monitoring of long-running sessions.
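For liveness monitoring, a receiver mostly cares about usage_pct and whether telemetry is present at all. A minimal sketch, assuming the field names above and using a pointer so the omitted-on-first-prompt case decodes cleanly:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// heartbeat holds only the field this sketch needs. UsagePct is a
// pointer because token telemetry is omitted until usage data exists
// (e.g. the first prompt). The field name is an assumption.
type heartbeat struct {
	UsagePct *float64 `json:"usage_pct"`
}

// nearLimit reports whether a heartbeat shows context usage above
// threshold. Heartbeats without telemetry never trigger an alert.
func nearLimit(raw []byte, threshold float64) bool {
	var hb heartbeat
	if err := json.Unmarshal(raw, &hb); err != nil || hb.UsagePct == nil {
		return false
	}
	return *hb.UsagePct > threshold
}

func main() {
	fmt.Println(nearLimit([]byte(`{"usage_pct":91.5}`), 80)) // alert
	fmt.Println(nearLimit([]byte(`{}`), 80))                 // first prompt: no data, no alert
}
```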
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#security-model","level":2,"title":"Security Model","text":"Component Location Committed? Permissions Encryption key ~/.ctx/.ctx.key No (user-level) 0600 Encrypted URL .context/.notify.enc Yes (safe) 0600 Webhook URL Never on disk in plaintext N/A N/A
The key is shared with the scratchpad. If you rotate the encryption key, re-run ctx notify setup to re-encrypt the webhook URL with the new key.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#key-rotation","level":2,"title":"Key Rotation","text":"
ctx checks the age of the encryption key once per day. If it's older than 90 days (configurable via key_rotation_days), a VERBATIM nudge is emitted suggesting rotation.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#worktrees","level":2,"title":"Worktrees","text":"
The webhook URL is encrypted with the same encryption key (~/.ctx/.ctx.key). Because the key lives at the user level, it is shared across all worktrees on the same machine - notifications work in worktrees automatically.
This means agents running in worktrees can send webhook alerts without extra setup. For autonomous runs where worktree agents are otherwise opaque, webhook events give you a progress signal beyond watching each terminal; still enrich journals and review results on the main branch after merging.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-log-the-local-complement","level":2,"title":"Event Log: The Local Complement","text":"
Don't need a webhook but want diagnostic visibility? Enable event_log: true in .ctxrc. The event log writes the same payload as webhooks to a local JSONL file (.context/state/events.jsonl) that you can query without any external service:
ctx system events --last 20 # recent hook activity\nctx system events --hook qa-reminder # filter by hook\n
Webhooks and event logging are independent: you can use either, both, or neither. Webhooks give you push notifications and an external audit trail. The event log gives you local queryability and ctx doctor integration.
See Troubleshooting for how they work together.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tips","level":2,"title":"Tips","text":"
Fire-and-forget: Notifications never block. HTTP errors are silently ignored. No retry, no response parsing.
No webhook = no cost: When no webhook is configured, ctx notify exits immediately. System hooks that call notify.Send() add zero overhead.
Multiple projects: Each project has its own .notify.enc. You can point different projects at different webhooks.
Event filter is per-project: Configure notify.events in each project's .ctxrc independently.
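The fire-and-forget behavior described in these tips can be sketched as follows. This is a hypothetical helper, not ctx's implementation; it returns a bool only so the behavior is observable, and real callers would ignore it:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// send posts a payload to a webhook with fire-and-forget semantics:
// no webhook means an immediate no-op, HTTP errors are swallowed,
// and the response body is never parsed. No retry.
func send(webhookURL string, body []byte) bool {
	if webhookURL == "" {
		return false // no webhook = no cost
	}
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Post(webhookURL, "application/json",
		bytes.NewReader(body))
	if err != nil {
		return false // silently ignored: never breaks the caller
	}
	_ = resp.Body.Close() // no response parsing
	return true
}

func main() {
	send("", []byte(`{"event":"test"}`)) // returns immediately
	fmt.Println("caller continues regardless of delivery")
}
```

The short timeout matters: without it, a stalled webhook endpoint would hold the caller for the OS default, which defeats the never-block guarantee.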
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#next-up","level":2,"title":"Next Up","text":"
Auditing System Hooks →: Verify your hooks are running, audit what they do, and get alerted when they go silent.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx notify: full command reference
Configuration: .ctxrc settings including notify options
Running an Unattended AI Agent: how loops work and how notifications fit in
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: using webhooks as an external audit trail for hook execution
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/","level":1,"title":"When to Use a Team of Agents","text":"","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-problem","level":2,"title":"The Problem","text":"
You have a task, and you are wondering: \"should I throw more agents at it?\"
More agents can mean faster results, but they also mean coordination overhead, merge conflicts, divergent mental models, and wasted tokens re-reading context.
The wrong setup costs more than it saves.
This recipe is a decision framework: It helps you choose between a single agent, parallel worktrees, and a full agent team, and explains what ctx provides at each level.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tldr","level":2,"title":"TL;DR","text":"
Single agent for most work;
Parallel worktrees when tasks touch disjoint file sets;
Agent teams only when tasks need real-time coordination. When in doubt, start with one agent.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-spectrum","level":2,"title":"The Spectrum","text":"
There are three modes, ordered by complexity:
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#1-single-agent-default","level":3,"title":"1. Single Agent (Default)","text":"
One agent, one session, one branch. This is correct for most work.
Use this when:
The task has linear dependencies (step 2 needs step 1's output);
Changes touch overlapping files;
You need tight feedback loops (review each change before the next);
The task requires deep understanding of a single area;
Total effort is less than a few hours of agent time.
ctx provides: Full .context/: tasks, decisions, learnings, conventions, all in one session.
The agent builds a coherent mental model and persists it as it goes.
Example tasks: Bug fixes, feature implementation, refactoring a module, writing documentation for one area, debugging.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#2-parallel-worktrees-independent-tracks","level":3,"title":"2. Parallel Worktrees (Independent Tracks)","text":"
2-4 agents, each in a separate git worktree on its own branch, working on non-overlapping parts of the codebase.
Use this when:
You have 5+ independent tasks in the backlog;
Tasks group cleanly by directory or package;
File overlap between groups is zero or near-zero;
Each track can be completed and merged independently;
You want parallelism without coordination complexity.
ctx provides: Shared .context/ via git (each worktree sees the same tasks, decisions, conventions). /ctx-worktree skill for setup and teardown. TASKS.md as a lightweight work queue.
Example tasks: Docs + new package + test coverage (three tracks that don't touch the same files). Parallel recipe writing. Independent module development.
See: Parallel Agent Development with Git Worktrees
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#3-agent-team-coordinated-swarm","level":3,"title":"3. Agent Team (Coordinated Swarm)","text":"
Multiple agents communicating via messages, sharing a task list, with a lead agent coordinating. Claude Code's team/swarm feature.
Use this when:
Tasks have dependencies but can still partially overlap;
You need research and implementation happening simultaneously;
The work requires different roles (researcher, implementer, tester);
A lead agent needs to review and integrate others' work;
The task is large enough that coordination cost is justified.
ctx provides: .context/ as shared state that all agents can read. Task tracking for work assignment. Decisions and learnings as team memory that survives individual agent turnover.
Example tasks: Large refactor across modules where a lead reviews merges. Research and implementation where one agent explores options while another builds. Multi-file feature that needs integration testing after parallel implementation.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-decision-framework","level":2,"title":"The Decision Framework","text":"
Ask these questions in order:
Can one agent do this in a reasonable time?\n YES → Single agent. Stop here.\n NO ↓\n\nCan the work be split into non-overlapping file sets?\n YES → Parallel worktrees (2-4 tracks)\n NO ↓\n\nDo the subtasks need to communicate during execution?\n YES → Agent team with lead coordination\n NO → Parallel worktrees with a merge step\n
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-file-overlap-test","level":3,"title":"The File Overlap Test","text":"
This is the critical decision point. Before choosing multi-agent, list the files each subtask would touch. If two subtasks modify the same file, they belong in the same track (or the same single-agent session).
You: \"I want to parallelize these tasks. Which files would each one touch?\"\n\nAgent: [reads `TASKS.md`, analyzes codebase]\n \"Task A touches internal/config/ and internal/cli/initialize/\n Task B touches docs/ and site/\n Task C touches internal/config/ and internal/cli/status/\n\n Tasks A and C overlap on internal/config/, so they should be\n in the same track. Task B is independent.\"\n
When in doubt, keep things in one track. A merge conflict in a critical file costs more time than the parallelism saves.
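The file overlap test above is a plain set intersection. A minimal sketch (the task names and paths are the example's, not real ctx output):

```go
package main

import "fmt"

// overlap returns the files or directories two task file sets share.
// Any non-empty result means the tasks belong in the same track.
func overlap(a, b []string) []string {
	seen := make(map[string]bool, len(a))
	for _, f := range a {
		seen[f] = true
	}
	var shared []string
	for _, f := range b {
		if seen[f] {
			shared = append(shared, f)
		}
	}
	return shared
}

func main() {
	taskA := []string{"internal/config/", "internal/cli/initialize/"}
	taskB := []string{"docs/", "site/"}
	taskC := []string{"internal/config/", "internal/cli/status/"}
	fmt.Println(overlap(taskA, taskC)) // non-empty: same track
	fmt.Println(len(overlap(taskA, taskB)) == 0)
}
```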
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#when-teams-make-things-worse","level":2,"title":"When Teams Make Things Worse","text":"
\"More agents\" is not always better. Watch for these patterns:
Merge hell: If you are spending more time resolving conflicts than the parallel work saved, you split wrong: Re-group by file overlap.
Context divergence: Each agent builds its own mental model. After 30 minutes of independent work, agent A might make assumptions that contradict agent B's approach. Shorter tracks with frequent merges reduce this.
Coordination theater: A lead agent spending most of its time assigning tasks, checking status, and sending messages instead of doing work. If the task list is clear enough, worktrees with no communication are cheaper.
Re-reading overhead: Every agent reads .context/ on startup. A team of 4 agents each reading 4000 tokens of context = 16000 tokens before anyone does any work. For small tasks, that overhead dominates.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#what-ctx-gives-you-at-each-level","level":2,"title":"What ctx Gives You at Each Level","text":"ctx Feature Single Agent Worktrees Team .context/ files Full access Shared via git Shared via filesystem TASKS.md Work queue Split by track Assigned by lead Decisions/Learnings Persisted in session Persisted per branch Persisted by any agent /ctx-next Picks next task Picks within track Lead assigns /ctx-worktree N/A Setup + teardown Optional /ctx-commit Normal commits Per-branch commits Per-agent commits","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#team-composition-recipes","level":2,"title":"Team Composition Recipes","text":"
Four practical team compositions for common workflows.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#feature-development-3-agents","level":3,"title":"Feature Development (3 agents)","text":"Role Responsibility Architect Writes spec in specs/, breaks work into TASKS.md phases Implementer Picks tasks from TASKS.md, writes code, marks [x] done Reviewer Runs tests, ctx drift, lint; files issues as new tasks
Coordination: TASKS.md checkboxes. Architect writes tasks before implementer starts. Reviewer runs after each implementer commit.
Anti-pattern: All three agents editing the same file simultaneously. Sequence the work so only one agent touches a file at a time.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#consolidation-sprint-3-4-agents","level":3,"title":"Consolidation Sprint (3-4 agents)","text":"Role Responsibility Auditor Runs ctx drift, identifies stale paths and broken refs Code Fixer Updates source code to match context (or vice versa) Doc Writer Updates ARCHITECTURE.md, CONVENTIONS.md, and docs/ Test Fixer (Optional) Fixes tests broken by the fixer's changes
Coordination: Auditor's ctx drift output is the shared work queue. Each agent claims a subset of issues by adding #in-progress labels.
Anti-pattern: Fixer and doc writer both editing ARCHITECTURE.md. Assign file ownership explicitly.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#release-prep-2-agents","level":3,"title":"Release Prep (2 agents)","text":"Role Responsibility Release Notes Generates changelog from commits, writes release notes Validation Runs full test suite, lint, build across platforms
Coordination: Both read TASKS.md to identify what shipped. Release notes agent works from git log; validation agent works from make audit.
Anti-pattern: Release notes agent running tests \"to verify.\" Each agent stays in its lane.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#documentation-sprint-3-agents","level":3,"title":"Documentation Sprint (3 agents)","text":"Role Responsibility Content Writes new pages, expands existing docs Cross-linker Adds nav entries, cross-references, \"See Also\" sections Verifier Builds site, checks broken links, validates rendering
Coordination: Content agent writes files first. Cross-linker updates zensical.toml and index pages after content lands. Verifier builds after each batch.
Anti-pattern: Content and cross-linker both editing zensical.toml. Batch nav updates into the cross-linker's pass.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tips","level":2,"title":"Tips","text":"
Start with one agent: Only add parallelism when you have identified the bottleneck. \"This would go faster with more agents\" is usually wrong for tasks under 2 hours.
The 3-4 agent ceiling is real: Coordination overhead grows quadratically. 2 agents = 1 communication pair. 4 agents = 6 pairs. Beyond 4, you are managing agents more than doing work.
Worktrees > teams for most parallelism needs: If agents don't need to talk to each other during execution, worktrees give you parallelism with zero coordination overhead.
Use ctx as the shared brain: Whether it's one agent or four, the .context/ directory is the single source of truth. Decisions go in DECISIONS.md, not in chat messages between agents.
Merge early, merge often: Long-lived parallel branches diverge. Merge a track as soon as it's done rather than waiting for all tracks to finish.
TASKS.md conflicts are normal: Multiple agents completing different tasks will conflict on merge. The resolution is always additive: accept all [x] completions from both sides.
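The quadratic growth behind the 3-4 agent ceiling is just the pair count n(n-1)/2, verified here:

```go
package main

import "fmt"

// pairs returns the number of communication pairs among n agents:
// n*(n-1)/2, which grows quadratically with team size.
func pairs(n int) int {
	return n * (n - 1) / 2
}

func main() {
	for n := 2; n <= 6; n++ {
		fmt.Printf("%d agents = %d pairs\n", n, pairs(n))
	}
	// 2 agents = 1 pair, 4 agents = 6 pairs, 6 agents = 15 pairs
}
```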
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Run multiple agents on independent task tracks using git worktrees.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#go-deeper","level":2,"title":"Go Deeper","text":"
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
Session Journal: browse and search session history
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#see-also","level":2,"title":"See Also","text":"
Parallel Agent Development with Git Worktrees: the mechanical \"how\" for worktree-based parallelism
Running an Unattended AI Agent: serial autonomous loops, a different scaling strategy
Tracking Work Across Sessions: managing the task backlog that feeds into any multi-agent setup
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"reference/","level":1,"title":"Reference","text":"
Technical reference for ctx commands, skills, and internals.
","path":["Reference"],"tags":[]},{"location":"reference/#the-system-explains-itself","level":3,"title":"The System Explains Itself","text":"
The 12 properties that must hold for any valid ctx implementation. Not features: constraints. The system's contract with its users and contributors.
","path":["Reference"],"tags":[]},{"location":"reference/audit-conventions/","level":1,"title":"Audit Conventions: Common Patterns and Fixes","text":"
This guide documents the code conventions enforced by internal/audit/ AST tests. Each section shows the violation pattern, the fix, and the rationale. When a test fails, find the matching section below.
All tests skip _test.go files. The patterns apply only to production code under internal/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#variable-shadowing-bare-err-reuse","level":2,"title":"Variable Shadowing (bare err := reuse)","text":"
Test: TestNoVariableShadowing
When a function has multiple := assignments to err, each shadows the previous one. This makes it impossible to tell which error a later if err != nil is checking.
Rule: Use descriptive error names (readErr, writeErr, parseErr, walkErr, absErr, relErr) so each error site is independently identifiable.
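A minimal illustration of the rule, using a hypothetical function (the code other sections show as Before/After, compressed into one compliant example):

```go
package main

import (
	"fmt"
	"os"
)

// loadAndStat demonstrates descriptive error names: with bare
// `err :=` reuse, a later `if err != nil` can silently check the
// wrong error; readErr and statErr keep each site identifiable.
func loadAndStat(path string) error {
	data, readErr := os.ReadFile(path)
	if readErr != nil {
		return fmt.Errorf("read %s: %w", path, readErr)
	}
	info, statErr := os.Stat(path)
	if statErr != nil {
		return fmt.Errorf("stat %s: %w", path, statErr)
	}
	fmt.Println(len(data), info.Size())
	return nil
}

func main() {
	if err := loadAndStat("no-such-file"); err != nil {
		fmt.Println("each error site is traceable:", err)
	}
}
```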
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#import-name-shadowing","level":2,"title":"Import Name Shadowing","text":"
Test: TestNoImportNameShadowing
When a local variable has the same name as an imported package, the import becomes inaccessible in that scope.
Before:
import \"github.com/ActiveMemory/ctx/internal/session\"\n\nfunc process(session *entity.Session) { // param shadows import\n // session package is now unreachable here\n}\n
After:
import \"github.com/ActiveMemory/ctx/internal/session\"\n\nfunc process(sess *entity.Session) {\n // session package still accessible\n}\n
Rule: Parameters, variables, and return values must not reuse imported package names. Common renames: session -> sess, token -> tok, config -> cfg, entry -> ent.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#magic-strings","level":2,"title":"Magic Strings","text":"
Test: TestNoMagicStrings
String literals in function bodies are invisible to refactoring tools and cause silent breakage when the value changes in one place but not another.
Before (string literals):
func loadContext() {\n data := filepath.Join(dir, \"TASKS.md\")\n if strings.HasSuffix(name, \".yaml\") {\n // ...\n }\n}\n
After:
func loadContext() {\n data := filepath.Join(dir, config.FilenameTask)\n if strings.HasSuffix(name, config.ExtYAML) {\n // ...\n }\n}\n
A second After example, lifting a magic number (the hash prefix length) into config:
func EntryHash(text string) string {\n h := sha256.Sum256([]byte(text))\n return hex.EncodeToString(h[:cfgFmt.HashPrefixLen])\n}\n
Before (URL schemes — also caught):
if strings.HasPrefix(target, \"https://\") ||\n strings.HasPrefix(target, \"http://\") {\n return target\n}\n
After:
if strings.HasPrefix(target, cfgHTTP.PrefixHTTPS) ||\n strings.HasPrefix(target, cfgHTTP.PrefixHTTP) {\n return target\n}\n
Exempt from this check:
Empty string \"\", single space \" \", indentation strings
Regex capture references ($1, ${name})
const and var definition sites (that's where constants live)
Struct tags
Import paths
Packages under internal/config/, internal/assets/tpl/
Rule: If a string is used for comparison, path construction, or appears in 3+ files, it belongs in internal/config/ as a constant. Format strings belong in internal/config/ as named constants (e.g., cfgGit.FlagLastN, cfgTrace.RefFormat). User-facing prose belongs in internal/assets/ YAML files accessed via desc.Text().
Common fix for fmt.Sprintf with format verbs:
Pattern -> Fix: fmt.Sprintf(\"%d\", n) -> strconv.Itoa(n); fmt.Sprintf(\"%d\", int64Val) -> strconv.FormatInt(int64Val, 10); fmt.Sprintf(\"%x\", bytes) -> hex.EncodeToString(bytes); fmt.Sprintf(\"%q\", s) -> strconv.Quote(s); fmt.Sscanf(s, \"%d\", &n) -> strconv.Atoi(s); fmt.Sprintf(\"-%d\", n) -> fmt.Sprintf(cfgGit.FlagLastN, n); \"https://\" -> cfgHTTP.PrefixHTTPS; \"<\" -> config constant in config/html/","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#direct-printf-calls","level":2,"title":"Direct Printf Calls","text":"
Test: TestNoPrintfCalls
cmd.Printf and cmd.PrintErrf bypass the write-package formatting pipeline and scatter user-facing text across the codebase.
Rule: All formatted output goes through internal/write/ which uses cmd.Print/cmd.Println with pre-formatted strings from desc.Text().
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#raw-time-format-strings","level":2,"title":"Raw Time Format Strings","text":"
Test: TestNoRawTimeFormats
Inline time format strings (\"2006-01-02\", \"15:04:05\") drift when one call site is updated but others are missed.
Rule: All time format strings must use constants from internal/config/time/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#direct-flag-registration","level":2,"title":"Direct Flag Registration","text":"
Test: TestNoFlagBindOutsideFlagbind
Direct cobra flag calls (.Flags().StringVar(), etc.) scatter flag wiring across dozens of cmd.go files. Centralizing through internal/flagbind/ gives one place to audit flag names, defaults, and description key lookups.
Before:
func Cmd() *cobra.Command {\n var output string\n c := &cobra.Command{Use: cmd.UseStatus}\n c.Flags().StringVarP(&output, \"output\", \"o\", \"\",\n \"output format\")\n return c\n}\n
After:
func Cmd() *cobra.Command {\n var output string\n c := &cobra.Command{Use: cmd.UseStatus}\n flagbind.StringFlagShort(c, &output, flag.Output,\n flag.OutputShort, cmd.DescKeyOutput)\n return c\n}\n
Rule: All flag registration goes through internal/flagbind/. If the helper you need doesn't exist, add it to flagbind/flag.go before using it.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#todo-comments","level":2,"title":"TODO Comments","text":"
Test: TestNoTODOComments
TODO, FIXME, HACK, and XXX comments in production code are invisible to project tracking. They accumulate silently and never get addressed.
Remove the comment and add a task to .context/TASKS.md:
- [ ] Handle pagination in listEntries (internal/task/task.go)\n
Rule: Deferred work lives in TASKS.md, not in source comments.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#dead-exports","level":2,"title":"Dead Exports","text":"
Test: TestNoDeadExports
Exported symbols with zero references outside their definition file are dead weight. They increase API surface, confuse contributors, and cost maintenance.
Fix: Either delete the export (preferred) or demote it to unexported if it's still used within the file.
If the symbol existed for historical reasons and might be needed again, move it to quarantine/deadcode/ with a .dead extension. This preserves the code in git without polluting the live codebase:
// Dead exports quarantined from internal/config/flag/flag.go\n// Quarantined: 2026-04-02\n// Restore from git history if needed.\n
Rule: If a test-only allowlist entry is needed (the export exists only for test use), add the fully qualified symbol to testOnlyExports in dead_exports_test.go. Keep this list small — prefer eliminating the export.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#core-package-structure","level":2,"title":"Core Package Structure","text":"
Test: TestCoreStructure
core/ directories under internal/cli/ must contain only doc.go and test files at the top level. All domain logic lives in subpackages. This prevents core/ from becoming a god package.
Rule: Extract each logical unit into its own subpackage under core/. Each subpackage gets a doc.go. The subpackage name should match the domain concept (golang, check, fix, store), not a generic label (util, helper).
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#cross-package-types","level":2,"title":"Cross-Package Types","text":"
Test: TestCrossPackageTypes
When a type defined in one package is used from a different module (e.g., cli/doctor importing a type from cli/notify), the type has crossed its module boundary. Cross-cutting types belong in internal/entity/ for discoverability.
Exempt: Types inside entity/, proto/, core/ subpackages, and config/ packages. Same-module usage (e.g., cli/doctor/cmd/ using cli/doctor/core/) is not flagged.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#type-file-convention","level":2,"title":"Type File Convention","text":"
Exported types in core/ subpackages should live in types.go (the convention from CONVENTIONS.md), not scattered across implementation files. This makes type definitions discoverable. TestTypeFileConventionReport generates a diagnostic summary of all type placements for triage.
Exception: entity/ organizes by domain (task.go, session.go), proto/ uses schema.go, and err/ packages colocate error types with their domain context.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#desckey-yaml-linkage","level":2,"title":"DescKey / YAML Linkage","text":"
Test: TestDescKeyYAMLLinkage
Every DescKey constant must have a corresponding key in the YAML asset files, and every YAML key must have a corresponding DescKey constant. Orphans in either direction mean dead text or runtime panics.
Fix for orphan YAML key: Delete the YAML entry, or add the corresponding DescKey constant in config/embed/{text,cmd,flag}/.
Fix for orphan DescKey: Delete the constant, or add the corresponding entry in the YAML file under internal/assets/commands/text/, cmd/, or flag/.
If the orphan YAML entry was once valid but the feature was removed, move the YAML entry to a .dead file in quarantine/deadcode/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#package-doc-quality","level":2,"title":"Package Doc Quality","text":"
Test: TestPackageDocQuality
Every package under internal/ must have a doc.go with a meaningful package doc comment (at least 8 lines of real content). One-liners and file-list patterns (// - foo.go, // Source files:) are flagged because they drift as files change.
Template:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\n// Package mypackage does X.\n//\n// It handles Y by doing Z. The main entry point is [FunctionName]\n// which accepts A and returns B.\n//\n// Configuration is read from [config.SomeConstant]. Output is\n// written through [write.SomeHelper].\n//\n// This package is used by [parentpackage] during the W lifecycle\n// phase.\npackage mypackage\n
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#inline-regex-compilation","level":2,"title":"Inline Regex Compilation","text":"
Test: TestNoInlineRegexpCompile
regexp.MustCompile and regexp.Compile inside function bodies recompile the pattern on every call. Compiled patterns belong at package level.
Before:
func parse(s string) bool {\n re := regexp.MustCompile(`\\d{4}-\\d{2}-\\d{2}`)\n return re.MatchString(s)\n}\n
After:
// In internal/config/regex/regex.go:\n// DatePattern matches ISO date format (YYYY-MM-DD).\nvar DatePattern = regexp.MustCompile(`\\d{4}-\\d{2}-\\d{2}`)\n\n// In calling package:\nfunc parse(s string) bool {\n return regex.DatePattern.MatchString(s)\n}\n
Rule: All compiled regexes live in internal/config/regex/ as package-level var declarations. Two tests enforce this: TestNoInlineRegexpCompile catches function-body compilation, and TestNoRegexpOutsideRegexPkg catches package-level compilation outside config/regex/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#doc-comments","level":2,"title":"Doc Comments","text":"
Test: TestDocComments
All functions (exported and unexported), structs, and package-level variables must have a doc comment. Config packages allow group doc comments for const blocks.
// buildIndex maps entry names to their position in the\n// ordered slice for O(1) lookup during reconciliation.\n//\n// Parameters:\n// - entries: ordered slice of entries to index\n//\n// Returns:\n// - map[string]int: name-to-position mapping\nfunc buildIndex(entries []Entry) map[string]int {\n
Rule: Every function, struct, and package-level var gets a doc comment in godoc format. Functions include Parameters: and Returns: sections. Structs with 2+ fields document every field. See CONVENTIONS.md for the full template.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#line-length","level":2,"title":"Line Length","text":"
Test: TestLineLength
Lines in non-test Go files must not exceed 80 characters. This is a hard check, not a suggestion.
Rule: Break at natural points: function arguments, struct fields, chained calls. Long strings (URLs, struct tags) are the rare acceptable exception.
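As an illustration of breaking at argument boundaries, consider the sketch below. The function and argument names are hypothetical, not taken from the ctx codebase:

```go
package main

import "fmt"

// transform is a hypothetical helper, used only to show the breaking style.
func transform(input, options, timeout, retry, logger string) string {
	return fmt.Sprintln(input, options, timeout, retry, logger)
}

func main() {
	// A call that would overflow 80 characters on one line is broken
	// with one argument per line, which also produces clean diffs
	// when an argument is added or removed.
	result := transform(
		"payload",
		"strict",
		"30s",
		"3x",
		"stderr",
	)
	fmt.Print(result)
}
```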
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#literal-whitespace","level":2,"title":"Literal Whitespace","text":"
Test: TestNoLiteralWhitespace
Bare whitespace string and byte literals (\"\\n\", \"\\r\\n\", \"\\t\") must not appear outside internal/config/token/. All other packages use the token constants.
Before:
output := strings.Join(lines, \"\\n\")\n
After:
output := strings.Join(lines, token.Newline)\n
Rule: Whitespace literals are defined once in internal/config/token/. Use token.Newline, token.Tab, token.CRLF, etc.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#magic-numeric-values","level":2,"title":"Magic Numeric Values","text":"
Test: TestNoMagicValues
Numeric literals in function bodies need constants, with narrow exceptions.
Before:
if len(entries) > 100 {\n entries = entries[:100]\n}\n
After:
if len(entries) > config.MaxEntries {\n entries = entries[:config.MaxEntries]\n}\n
Exempt: 0, 1, -1, 2–10, strconv radix/bitsize args (10, 32, 64 in strconv.Parse*/Format*), octal permissions (caught separately by TestNoRawPermissions), and const/var definition sites.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#inline-separators","level":2,"title":"Inline Separators","text":"
Test: TestNoInlineSeparators
strings.Join calls must use token constants for their separator argument, not string literals.
Before:
result := strings.Join(parts, \", \")\n
After:
result := strings.Join(parts, token.CommaSep)\n
Rule: Separator strings live in internal/config/token/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#stuttery-function-names","level":2,"title":"Stuttery Function Names","text":"
Test: TestNoStutteryFunctions
Function names must not redundantly include their package name as a PascalCase word boundary. Go callers already write pkg.Function, so pkg.PkgFunction stutters.
Before:
// In package write\nfunc WriteJournal(cmd *cobra.Command, ...) {\n
After:
// In package write\nfunc Journal(cmd *cobra.Command, ...) {\n
Exempt: Identity functions like write.Write / write.write.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#mixed-visibility","level":2,"title":"Mixed Visibility","text":"
Test: TestNoMixedVisibility
Files with exported functions must not also contain unexported functions. Public API and private helpers live in separate files.
Exempt: Files with exactly one function, doc.go, test files.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#stray-errgo-files","level":2,"title":"Stray err.go Files","text":"
Test: TestNoStrayErrFiles
err.go files must only exist under internal/err/. Error constructors anywhere else create a broken-window pattern where contributors add local error definitions when they see a local err.go.
Fix: Move the error constructor to internal/err/<domain>/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#cli-cmd-structure","level":2,"title":"CLI Cmd Structure","text":"
Test: TestCLICmdStructure
Each cmd/$sub/ directory under internal/cli/ may contain only cmd.go, run.go, doc.go, and test files. Extra .go files (helpers, output formatters, types) belong in the corresponding core/ subpackage.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#desckey-namespace","level":2,"title":"DescKey Namespace","text":"
Three tests enforce DescKey/Use constant discipline:
Use* constants appear only in cobra Use: struct field assignments — never as arguments to desc.Text() or elsewhere.
DescKey* constants are passed only to assets.CommandDesc(), assets.FlagDesc(), or desc.Text() — never to cobra Use:.
No cross-namespace lookups — TextDescKey must not be passed to CommandDesc(), FlagDescKey must not be passed to Text(), etc.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#yaml-examples-registry-linkage","level":2,"title":"YAML Examples / Registry Linkage","text":"
Every key in examples.yaml and registry.yaml must match a known entry type constant. Prevents orphan entries that are never rendered.
Fix: Delete the orphan YAML entry, or add the corresponding constant in config/entry/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#other-enforced-patterns","level":2,"title":"Other Enforced Patterns","text":"
These tests follow the same fix approach — extract the operation to its designated package:
Test Violation Fix TestNoNakedErrors fmt.Errorf/errors.New outside internal/err/ Add error constructor to internal/err/<domain>/ TestNoRawFileIO Direct os.ReadFile, os.Create, etc. Use io.SafeReadFile, io.SafeWriteFile, etc. TestNoRawLogging Direct fmt.Fprintf(os.Stderr, ...) Use log/warn.Warn() or log/event.Append() TestNoExecOutsideExecPkg exec.Command outside internal/exec/ Add command to internal/exec/<domain>/ TestNoCmdPrintOutsideWrite cmd.Print* outside internal/write/ Add output helper to internal/write/<domain>/ TestNoRawPermissions Octal literals (0644, 0755) Use config/fs.PermFile, config/fs.PermExec, etc. TestNoErrorsAs errors.As() Use errors.AsType() (generic, Go 1.23+) TestNoStringConcatPaths dir + \"/\" + file Use filepath.Join(dir, file)","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#general-fix-workflow","level":2,"title":"General Fix Workflow","text":"
When an audit test fails:
Read the error message. It includes file:line and a description of the violation.
Find the matching section above. The test name maps directly to a section.
Apply the pattern. Most fixes are mechanical: extract to the right package, rename a variable, or replace a literal with a constant.
Run make test before committing. Audit tests run as part of go test ./internal/audit/.
Don't add allowlist entries as a first resort. Fix the code. Allowlists exist only for genuinely unfixable cases (test-only exports, config packages that are definitionally exempt).
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/comparison/","level":1,"title":"Tool Ecosystem","text":"","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#high-level-mental-model","level":2,"title":"High-Level Mental Model","text":"
Many tools help AI think.
ctx helps AI remember.
Not by storing thoughts,
but by preserving intent.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#how-ctx-differs-from-similar-tools","level":2,"title":"How ctx Differs from Similar Tools","text":"
There are many tools in the AI ecosystem that touch parts of the context problem:
Some manage prompts.
Some retrieve data.
Some provide runtime context objects.
Some offer enterprise platforms.
ctx focuses on a different layer entirely.
This page explains where ctx fits, and where it intentionally does not.
That single difference in layering explains nearly all of ctx's design choices.
Question Most tools ctx Where does context live? In prompts or APIs In files How long does it last? One request / one session Across time Who can read it? The model Humans and tools How is it updated? Implicitly Explicitly Is it inspectable? Rarely Always","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#prompt-management-tools","level":2,"title":"Prompt Management Tools","text":"
Examples include:
prompt templates;
reusable system prompts;
prompt libraries;
prompt versioning tools.
These tools help you start a session.
They do not help you continue one.
Prompt tools:
inject text at session start;
are ephemeral by design;
do not evolve with the project.
ctx:
persists knowledge over time;
accumulates decisions and learnings;
makes the context part of the repository itself.
Prompt tooling and ctx are complementary, not competing: they operate at different layers.
Users often evaluate ctx against specific tools they already use. These comparisons clarify where responsibilities overlap, where they diverge, and where the tools are genuinely complementary.
Anthropic's auto-memory is tool-managed memory (L2): the model decides what to remember, stores it automatically, and retrieves it implicitly. ctx is system memory (L3): humans and agents explicitly curate decisions, learnings, and tasks in inspectable files.
Auto-memory is convenient - you do not configure anything. But it is also opaque: you cannot see what was stored, edit it precisely, or share it across tools. ctx files are plain Markdown in your repository, visible in diffs and code review.
The two are complementary. ctx can absorb auto-memory as an input source (importing what the model remembered into structured context files) while providing the durable, inspectable layer that auto-memory lacks.
Static rule files (.cursorrules, .claude/rules/) declare conventions: coding style, forbidden patterns, preferred libraries. They are effective for what to do and load automatically at session start.
ctx adds dimensions that rule files do not cover: architectural decisions with rationale, learnings discovered during development, active tasks, and a constitution that governs agent behavior. Critically, ctx context accumulates - each session can add to it, and token budgeting ensures only the most relevant context is injected.
Use rule files for static conventions. Use ctx for evolving project memory.
Aider's --read flag injects file contents at session start; --watch reloads them on change. The concept is similar to ctx's \"load\" step: make the agent aware of specific files.
The differences emerge beyond loading. Aider has no persistence model: nothing the agent learns during a session is written back. There is no token budgeting (large files consume the full context window), no priority ordering across file types, and no structured format for decisions or learnings. ctx provides the full lifecycle: load, accumulate, persist, and budget.
GitHub Copilot's @workspace performs workspace-wide code search. It answers \"what code exists?\" - finding function definitions, usages, and file structure across the repository.
ctx answers a different question: \"what did we decide?\" It stores architectural intent, not code indices. Copilot's workspace search and ctx's project memory are orthogonal; one finds code, the other preserves the reasoning behind it.
Cline's memory bank stores session context within the Cline extension. The motivation is similar to ctx: help the agent remember across sessions.
The key difference is portability. Cline memory is tied to Cline - it does not transfer to Claude Code, Cursor, Aider, or any other tool. ctx is tool-agnostic: context lives in plain files that any editor, agent, or script can read. Switching tools does not mean losing memory.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-a-good-fit","level":2,"title":"When ctx Is a Good Fit","text":"
ctx works best when:
you want AI work to compound over time;
architectural decisions matter;
context must be inspectable;
humans and AI must share the same source of truth;
Git history should include why, not just what.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-not-the-right-tool","level":2,"title":"When ctx Is Not the Right Tool","text":"
ctx is probably not what you want if:
you only need one-off prompts;
you rely exclusively on RAG;
you want autonomous agents without a human-readable state;
You Can't Import Expertise: why project-specific context matters more than generic best practices
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/design-invariants/","level":1,"title":"Invariants","text":"","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#the-system-explains-itself","level":1,"title":"The System Explains Itself","text":"
These are the properties that must hold for any valid ctx implementation.
These are not features.
These are constraints.
A change that violates an invariant is a category error, not an improvement.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#cognitive-state-tiers","level":2,"title":"Cognitive State Tiers","text":"
ctx distinguishes between three forms of state:
Authoritative state: Versioned, inspectable artifacts that define intent and survive time.
Delivery views: Deterministic assemblies of the authoritative state for a specific budget or workflow.
Ephemeral working state: Local, transient, or sensitive data that assists interaction but does not define system truth.
The invariants below apply primarily to the authoritative cognitive state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#1-cognitive-state-is-explicit","level":2,"title":"1. Cognitive State Is Explicit","text":"
All authoritative context lives in artifacts that can be inspected, reviewed, and versioned.
If something is important, it must exist as a file: Not only in a prompt, a chat, or a model's hidden memory.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#2-assembly-is-reproducible","level":2,"title":"2. Assembly Is Reproducible","text":"
Given the same:
repository state,
configuration,
and inputs,
context assembly produces the same result.
Heuristics may rank or filter for delivery under constraints.
They do not alter the authoritative state.
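The invariant can be pictured as a pure function: the assembled result depends only on its inputs, with explicit ordering so nothing nondeterministic (such as Go's randomized map iteration) leaks into the output. The sketch below is illustrative, not ctx's actual assembly code; the file names are examples.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// assemble sketches reproducible assembly: a pure function of its
// inputs with explicit ordering, so randomized map iteration cannot
// influence the result.
func assemble(files map[string]string) string {
	names := make([]string, 0, len(files))
	for name := range files {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic order

	var b strings.Builder
	for _, name := range names {
		b.WriteString(files[name])
	}
	// Hash the concatenation so equality of results is cheap to check.
	return fmt.Sprintf("%x", sha256.Sum256([]byte(b.String())))
}

func main() {
	in := map[string]string{"DECISIONS.md": "d", "TASKS.md": "t"}
	fmt.Println(assemble(in) == assemble(in)) // always true
}
```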
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#3-the-authoritative-state-is-human-readable","level":2,"title":"3. The Authoritative State Is Human-Readable","text":"
The authoritative cognitive state must be stored in formats that a human can:
read,
diff,
review,
and edit directly.
Sensitive working memory may be encrypted at rest. However, encryption must not become the only representation of authoritative knowledge.
Reasoning, decisions, and outcomes must remain available after the interaction that produced them has ended.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#5-authority-is-user-defined","level":2,"title":"5. Authority Is User-Defined","text":"
What enters the authoritative context is an explicit human decision.
Models may suggest.
Automation may assist.
Selection is never implicit.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#6-operation-is-local-first","level":2,"title":"6. Operation Is Local-First","text":"
The core system must function without requiring network access or a remote service.
External systems may extend ctx.
They must not be required for its operation.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#7-versioning-is-the-memory-model","level":2,"title":"7. Versioning Is the Memory Model","text":"
The evolution of the authoritative cognitive state must be:
preserved,
inspectable,
and branchable.
Ephemeral and sensitive working state may use different retention and diff strategies by design.
Understanding includes understanding how we arrived here.
Authoritative cognitive state must have a defined layout that:
communicates intent,
supports navigation,
and prevents drift.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#9-verification-is-the-scoreboard","level":2,"title":"9. Verification Is the Scoreboard","text":"
Claims without recorded outcomes are noise.
Reality (observed and captured) is the only signal that compounds.
This invariant defines a required direction:
The authoritative state must be able to record expectation and result.
Work that has already produced understanding must not be re-derived from scratch.
Explored paths, rejected options, and validated conclusions are permanent assets.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#11-policies-are-encoded-not-remembered","level":2,"title":"11. Policies Are Encoded, not Remembered","text":"
Alignment must not depend on recall or goodwill.
Constraints that matter must exist in machine-readable form and participate in context assembly.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#12-the-system-explains-itself","level":2,"title":"12. The System Explains Itself","text":"
From the repository state alone it must be possible to determine:
To avoid category errors, ctx does not attempt to be:
a skill,
a prompt management tool,
a chat history viewer,
an autonomous agent runtime,
a vector database,
a hosted memory service.
Such systems may integrate with ctx.
They do not define it.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#implications-for-contributions","level":1,"title":"Implications for Contributions","text":"
Valid contributions:
strengthen an invariant,
reduce the cost of maintaining an invariant,
or extend the system without violating invariants.
Invalid contributions:
introduce hidden authoritative state,
replace reproducible assembly with non-reproducible behavior,
make core operation depend on external services,
reduce human inspectability of authoritative state,
or bypass explicit user authority over what becomes authoritative.
Everything else (commands, skills, layouts, integrations, optimizations) is an implementation detail.
These invariants are the system.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/scratchpad/","level":1,"title":"Scratchpad","text":"","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#what-is-ctx-scratchpad","level":2,"title":"What Is ctx Scratchpad?","text":"
A one-liner scratchpad, encrypted at rest, synced via git.
Quick notes that don't fit decisions, learnings, or tasks: reminders, intermediate values, sensitive tokens, working memory during debugging. Entries are numbered, reorderable, and persist across sessions.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#encrypted-by-default","level":2,"title":"Encrypted by Default","text":"
Scratchpad entries are encrypted with AES-256-GCM before touching the disk.
Component Path Git status Encryption key ~/.ctx/.ctx.key User-level, 0600 permissions Encrypted data .context/scratchpad.enc Committed
The key is generated automatically during ctx init (256-bit via crypto/rand) and stored at ~/.ctx/.ctx.key. One key per machine, shared across all projects.
The ciphertext format is [12-byte nonce][ciphertext+tag]. No external dependencies: Go stdlib only.
Because the key is .gitignored and the data is committed, you get:
At-rest encryption: the .enc file is opaque without the key
Git sync: push/pull the encrypted file like any other tracked file
Key separation: the key never leaves the machine unless you copy it
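The documented format can be reproduced with the Go standard library alone. The sketch below follows the description above (AES-256-GCM, a random 12-byte nonce prefixed to ciphertext+tag, a 256-bit key from crypto/rand); it illustrates the format and is not ctx's actual implementation.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"fmt"
	"io"
)

// seal produces [12-byte nonce][ciphertext+tag] using AES-256-GCM.
// A 32-byte key selects AES-256.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block) // default nonce size is 12 bytes
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Seal appends ciphertext+tag after the nonce prefix.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal, splitting the nonce prefix off the ciphertext.
func open(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	if len(data) < n {
		return nil, errors.New("ciphertext too short")
	}
	return gcm.Open(nil, data[:n], data[n:], nil)
}

func main() {
	key := make([]byte, 32) // 256-bit key, as generated during ctx init
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	ct, _ := seal(key, []byte("check DNS after deploy"))
	pt, _ := open(key, ct)
	fmt.Println(string(pt))
}
```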
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#commands","level":2,"title":"Commands","text":"Command Purpose ctx pad List all entries (numbered 1-based) ctx pad show N Output raw text of entry N (no prefix, pipe-friendly) ctx pad add \"text\" Append a new entry ctx pad rm N Remove entry at position N ctx pad edit N \"text\" Replace entry N with new text ctx pad edit N --append \"text\" Append text to the end of entry N ctx pad edit N --prepend \"text\" Prepend text to the beginning of entry N ctx pad add TEXT --file PATH Ingest a file as a blob entry (TEXT is the label) ctx pad show N --out PATH Write decoded blob content to a file ctx pad mv N M Move entry from position N to position M ctx pad resolve Show both sides of a merge conflict for resolution ctx pad import FILE Bulk-import lines from a file (or stdin with -) ctx pad import --blob DIR Import directory files as blob entries ctx pad export [DIR] Export all blob entries to a directory as files ctx pad merge FILE... Merge entries from other scratchpad files into current
All commands decrypt on read, operate on plaintext in memory, and re-encrypt on write. The key file is never printed to stdout.
# Add a note\nctx pad add \"check DNS propagation after deploy\"\n\n# List everything\nctx pad\n# 1. check DNS propagation after deploy\n# 2. staging API key: sk-test-abc123\n\n# Show raw text (for piping)\nctx pad show 2\n# sk-test-abc123\n\n# Compose entries\nctx pad edit 1 --append \"$(ctx pad show 2)\"\n\n# Reorder\nctx pad mv 2 1\n\n# Clean up\nctx pad rm 2\n
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#bulk-import-and-export","level":2,"title":"Bulk Import and Export","text":"
Import lines from a file in bulk (each non-empty line becomes an entry):
# Import from a file\nctx pad import notes.txt\n\n# Import from stdin\ngrep TODO *.go | ctx pad import -\n
Export all blob entries to a directory as files:
# Export to a directory\nctx pad export ./ideas\n\n# Preview without writing\nctx pad export --dry-run\n\n# Overwrite existing files\nctx pad export --force ./backup\n
Combine entries from other scratchpad files into your current pad. Useful when merging work from parallel worktrees, other machines, or teammates:
# Merge from a worktree's encrypted scratchpad\nctx pad merge worktree/.context/scratchpad.enc\n\n# Merge from multiple sources (encrypted and plaintext)\nctx pad merge pad-a.enc notes.md\n\n# Merge a foreign encrypted pad using its key\nctx pad merge --key /other/.ctx.key foreign.enc\n\n# Preview without writing\nctx pad merge --dry-run pad-a.enc pad-b.md\n
Each input file is auto-detected as encrypted or plaintext: decryption is attempted first, and on failure the file is parsed as plain text. Entries are deduplicated by exact content, so running merge twice with the same file is safe.
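The described merge semantics (append, deduplicate by exact content, idempotent on re-merge) can be sketched in a few lines. The function below is a hypothetical illustration, not ctx's implementation:

```go
package main

import "fmt"

// mergeEntries appends entries from other pads, deduplicating by
// exact content, so merging the same source twice is a no-op.
func mergeEntries(current []string, incoming ...[]string) []string {
	seen := make(map[string]bool, len(current))
	merged := append([]string(nil), current...)
	for _, e := range current {
		seen[e] = true
	}
	for _, src := range incoming {
		for _, e := range src {
			if !seen[e] {
				seen[e] = true
				merged = append(merged, e)
			}
		}
	}
	return merged
}

func main() {
	pad := []string{"check DNS after deploy"}
	other := []string{"check DNS after deploy", "rotate staging key"}
	// Passing the same source twice adds nothing the second time.
	fmt.Println(mergeEntries(pad, other, other))
}
```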
The scratchpad can store small files (up to 64 KB) as blob entries. Files are base64-encoded and stored with a human-readable label.
# Ingest a file: first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# Listing shows label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n\n# Extract to a file\nctx pad show 2 --out ./recovered.yaml\n\n# Or print decoded content to stdout\nctx pad show 2\n
Blob entries are encrypted identically to text entries. The internal format is label:::base64data; you never need to construct this manually.
Constraint Value Max file size (pre-encoding) 64 KB Storage format label:::base64(content) Display label [BLOB] in listings
When Should You Use Blobs
Blobs are for small files you want encrypted and portable: config snippets, key fragments, deployment manifests, test fixtures. For anything larger than 64 KB, use the filesystem directly.
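The label:::base64(content) format round-trips with the standard library. The sketch below mirrors the documented separator and 64 KB pre-encoding limit; the function names are hypothetical:

```go
package main

import (
	"encoding/base64"
	"errors"
	"fmt"
	"strings"
)

// Separator and size limit as documented: label:::base64(content),
// 64 KB before encoding.
const blobSep = ":::"
const maxBlobSize = 64 * 1024

func encodeBlob(label string, content []byte) (string, error) {
	if len(content) > maxBlobSize {
		return "", errors.New("blob exceeds 64 KB limit")
	}
	return label + blobSep + base64.StdEncoding.EncodeToString(content), nil
}

func decodeBlob(entry string) (string, []byte, error) {
	label, data, ok := strings.Cut(entry, blobSep)
	if !ok {
		return "", nil, errors.New("not a blob entry")
	}
	raw, err := base64.StdEncoding.DecodeString(data)
	return label, raw, err
}

func main() {
	entry, _ := encodeBlob("deploy config", []byte("replicas: 3\n"))
	label, raw, _ := decodeBlob(entry)
	fmt.Printf("%s -> %q\n", label, raw)
}
```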
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#using-with-ai","level":2,"title":"Using with AI","text":"
Use Natural Language
As with many other ctx features, the scratchpad can be used through natural language. You don't have to memorize the CLI commands.
The CLI gives you precision; natural language gives you flow.
The /ctx-pad skill maps natural language to ctx pad commands. You don't need to remember the syntax:
You say What happens \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"show my scratchpad\" ctx pad \"delete the third entry\" ctx pad rm 3 \"update entry 2 to include the new endpoint\" ctx pad edit 2 \"...\" \"move entry 4 to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./backup\" ctx pad export ./backup \"merge the scratchpad from the worktree\" ctx pad merge worktree/.context/scratchpad.enc
The skill handles the translation. You describe what you want in plain English; the agent picks the right command.
The encryption key lives at ~/.ctx/.ctx.key (outside the project directory). Because all worktrees on the same machine share this path, ctx pad works in worktrees automatically - no special setup needed.
For projects where encryption is unnecessary, disable it in .ctxrc:
scratchpad_encrypt: false\n
In plaintext mode:
Entries are stored in .context/scratchpad.md instead of .enc.
No key is generated or required.
All ctx pad commands work identically.
The file is human-readable and diffable.
When Should You Use Plaintext
Plaintext mode is useful for non-sensitive projects, solo work where encryption adds friction, or when you want scratchpad entries visible in git diff.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#when-should-you-use-scratchpad-versus-context-files","level":2,"title":"When Should You Use Scratchpad versus Context Files","text":"Use case Where it goes Temporary reminders (\"check X after deploy\") Scratchpad Working values during debugging Scratchpad Sensitive tokens or API keys (short-term) Scratchpad Quick notes that don't fit anywhere else Scratchpad Items that are not directly relevant to the project Scratchpad Things that you want to keep near, but also hidden Scratchpad Work items with completion tracking TASKS.md Trade-offs with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Rule of thumb:
If it needs structure or will be referenced months later, use a context file (i.e. DECISIONS.md, LEARNINGS.md, TASKS.md).
If it is working memory for the current session or week, use the scratchpad.
Session journals contain sensitive data such as file contents, commands, API keys, internal discussions, error messages with stack traces, and more.
The .context/journal-site/ and .context/journal-obsidian/ directories MUST be .gitignored.
DO NOT host your journal publicly.
DO NOT commit your journal files to version control.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#browse-your-session-history","level":2,"title":"Browse Your Session History","text":"
ctx's Session Journal turns your AI coding sessions into a browsable, searchable, and editable archive.
After using ctx for a couple of sessions, you can generate a journal site with:
# Import all sessions to markdown\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Then open http://localhost:8000 to browse your sessions.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#what-you-get","level":2,"title":"What You Get","text":"
The Session Journal gives you:
Browsable history: Navigate through all your AI sessions by date
Full conversations: See every message, tool use, and result
Token usage: Track how many tokens each session consumed
Search: Find sessions by content, project, or date
Dark mode: Easy on the eyes for late-night archaeology
Each session page includes the following sections:
Section Content Metadata Date, time, duration, model, project, git branch Summary Space for your notes (editable) Tool Usage Which tools were used and how often Conversation Full transcript with timestamps","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#1-import-sessions","level":3,"title":"1. Import Sessions","text":"
# Import all sessions from current project (only new files)\nctx journal import --all\n\n# Import sessions from all projects\nctx journal import --all --all-projects\n\n# Import a specific session by ID (always writes)\nctx journal import abc123\n\n# Preview what would be imported\nctx journal import --all --dry-run\n\n# Re-import existing (regenerates conversation, preserves YAML frontmatter)\nctx journal import --all --regenerate\n\n# Discard frontmatter during regeneration\nctx journal import --all --regenerate --keep-frontmatter=false -y\n
Imported sessions go to .context/journal/ as editable Markdown files.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#2-generate-the-site","level":3,"title":"2. Generate the Site","text":"
# Generate site structure\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#3-browse-and-search","level":3,"title":"3. Browse and Search","text":"
Imported sessions are plain Markdown in .context/journal/. You can:
Add summaries: Fill in the ## Summary section
Add notes: Insert your own commentary anywhere
Highlight key moments: Use Markdown formatting
Delete noise: Remove irrelevant tool outputs
After editing, regenerate the site:
ctx journal site --serve\n
Safe by Default
Running ctx journal import --all only imports new sessions. Existing files are skipped entirely (your edits and enrichments are never touched).
Use --regenerate to re-import existing files. Conversation content is regenerated, but YAML frontmatter (topics, type, outcome, etc.) is preserved. You'll be prompted before any existing files are overwritten; add -y to skip the prompt.
Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags. If you prefer to add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
Claude Code generates \"suggestion\" sessions for auto-complete prompts. These are separated in the index under a \"Suggestions\" section to keep your main session list focused.
Raw imported sessions contain basic metadata (date, time, project) but lack the structured information needed for effective search, filtering, and analysis. Journal enrichment adds semantic metadata that transforms a flat archive into a searchable knowledge base.
Field Required Description title Yes Descriptive title (not the session slug) date Yes Session date (YYYY-MM-DD) type Yes Session type (see below) outcome Yes How the session ended (see below) topics No Subject areas discussed technologies No Languages, databases, frameworks libraries No Specific packages or libraries used key_files No Important files created or modified
Type values:
Type When to use feature Building new functionality bugfix Fixing broken behavior refactor Restructuring without behavior change exploration Research, learning, experimentation debugging Investigating issues documentation Writing docs, comments, README
Outcome values:
Outcome Meaning completed Goal achieved partial Some progress, work continues abandoned Stopped pursuing this approach blocked Waiting on external dependency","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-ctx-journal-enrich","level":3,"title":"Using /ctx-journal-enrich","text":"
The /ctx-journal-enrich skill automates enrichment by analyzing conversation content and proposing metadata.
Extract decisions, learnings, and tasks mentioned in the conversation;

Show a diff and ask for confirmation before writing.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#before-and-after","level":3,"title":"Before and After","text":"
Before enrichment:
# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\n[Add your summary of this session]\n\n## Conversation\n...\n
After enrichment:
---\ntitle: \"Add Redis caching to API endpoints\"\ndate: 2026-01-24\ntype: feature\noutcome: completed\ntopics:\n - caching\n - api-performance\ntechnologies:\n - go\n - redis\nkey_files:\n - internal/api/middleware/cache.go\n - internal/cache/redis.go\n---\n\n# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\nImplemented Redis-based caching middleware for frequently accessed API endpoints.\nAdded cache invalidation on writes and configurable TTL per route. Reduced\n the average response time from 200ms to 15ms for cached routes.\n\n## Decisions\n\n* Used Redis over in-memory cache for horizontal scaling\n* Chose per-route TTL configuration over global setting\n\n## Learnings\n\n* Redis WATCH command prevents race conditions during cache invalidation\n\n## Conversation\n...\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#enrichment-and-site-generation","level":3,"title":"Enrichment and Site Generation","text":"
The journal site generator uses enriched metadata for better organization:
Titles appear in navigation instead of slugs
Summaries provide context in the index
Topics enable filtering (when using search)
Types allow grouping by work category
Future improvements will add topic-based navigation and outcome filtering to the generated site.
Use ctx journal site when you want a web-browsable archive with search and dark mode. Use ctx journal obsidian when you want graph view, backlinks, and tag-based navigation inside Obsidian. Both use the same enriched source entries: you can generate both.
The complete journal workflow has four stages. Each is idempotent: safe to re-run, and stages skip already-processed entries.
import → enrich → rebuild\n
Stage Command / Skill What it does Skips if Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) Enrich /ctx-journal-enrich Adds frontmatter, summaries, topics Frontmatter already present Rebuild ctx journal site --build Generates static HTML site -- Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks --
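The skip behavior in the table above is what makes the pipeline safe to re-run. A minimal sketch of the import stage's skip-if-exists default (illustrative only, not the actual ctx implementation):

```shell
# Existing entries are skipped, so your edits and enrichment survive re-runs
import_session() {
  src=$1 dest=$2
  if [ -e "$dest" ]; then
    echo "skip: already imported"
  else
    cp "$src" "$dest"
    echo "imported"
  fi
}
tmp=$(mktemp -d)
echo '{"role":"user"}' > "$tmp/session.jsonl"
first=$(import_session "$tmp/session.jsonl" "$tmp/entry.md")
second=$(import_session "$tmp/session.jsonl" "$tmp/entry.md")
echo "$first / $second"
```

Running the pipeline twice in a row therefore does no extra work and destroys nothing.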
One-command pipeline
/ctx-journal-enrich-all handles import automatically: it detects unimported sessions and imports them before enriching. You only need to run ctx journal site --build afterward.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-make-journal","level":3,"title":"Using make journal","text":"
If your project includes Makefile.ctx (deployed by ctx init), the first and last stages are combined:
make journal # import + rebuild\n
After it runs, it reminds you to enrich in Claude Code:
Next steps (in Claude Code):\n /ctx-journal-enrich-all # imports if needed + adds metadata per entry\n\nThen re-run: make journal\n
Rendering Issues?
If individual entries have rendering problems (broken fences, malformed lists), check the programmatic normalization in the import pipeline. Most cases are handled automatically during ctx journal import.
# Import, browse, then enrich in Claude Code\nmake journal && make journal-serve\n# Then in Claude Code: /ctx-journal-enrich <session>\n
After a productive session:
# Import just that session and add notes\nctx journal import <session-id>\n# Edit .context/journal/<session>.md\n# Regenerate: ctx journal site\n
Searching across all sessions:
# Use grep on the journal directory\ngrep -r \"authentication\" .context/journal/\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#requirements","level":2,"title":"Requirements","text":"Use pipx for zensical
pip install zensical on the system Python may install a non-functional stub, and ad-hoc virtualenvs bring their own version-management problems. These issues are especially common on macOS.
Use pipx install zensical instead: it creates an isolated environment and handles Python version management automatically.
The journal site uses zensical for static site generation.
Skills are slash commands that run inside your AI assistant (e.g., /ctx-next), as opposed to CLI commands that run in your terminal (e.g., ctx status).
Skills give your agent structured workflows: It knows what to read, what to run, and when to ask. Most wrap one or more ctx CLI commands with opinionated behavior on top.
Skills Are Best Used Conversationally
The beauty of ctx is that it's designed to be intuitive and conversational, allowing you to interact with your AI assistant naturally. That's why you don't have to memorize many of these skills.
See the Prompting Guide for natural-language triggers that invoke these skills conversationally.
However, when you need more precise control, you can invoke the relevant skills directly.
","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#all-skills","level":2,"title":"All Skills","text":"Skill Description Type /ctx-remember Recall project context and present structured readback user-invocable /ctx-wrap-up End-of-session context persistence ceremony user-invocable /ctx-status Show context summary with interpretation user-invocable /ctx-agent Load full context packet for AI consumption user-invocable /ctx-next Suggest 1-3 concrete next actions with rationale user-invocable /ctx-commit Commit with integrated context persistence user-invocable /ctx-reflect Pause and reflect on session progress user-invocable /ctx-add-task Add actionable task to TASKS.md user-invocable /ctx-add-decision Record architectural decision with rationale user-invocable /ctx-add-learning Record gotchas and lessons learned user-invocable /ctx-add-convention Record coding convention for consistency user-invocable /ctx-archive Archive completed tasks from TASKS.md user-invocable /ctx-pad Manage encrypted scratchpad entries user-invocable /ctx-history Browse and import AI session history user-invocable /ctx-journal-enrich Enrich single journal entry with metadata user-invocable /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich user-invocable /ctx-blog Generate blog post draft from project activity user-invocable /ctx-blog-changelog Generate themed blog post from a commit range user-invocable /ctx-consolidate Consolidate redundant learnings or decisions user-invocable /ctx-drift Detect and fix context drift user-invocable /ctx-prompt Apply, list, and manage saved prompt templates user-invocable /ctx-prompt-audit Analyze prompting patterns for improvement user-invocable /ctx-check-links Audit docs for dead internal and external links user-invocable /ctx-sanitize-permissions Audit Claude Code permissions for security risks user-invocable /ctx-brainstorm Structured design dialogue before implementation user-invocable /ctx-spec 
Scaffold a feature spec from a project template user-invocable /ctx-import-plans Import Claude Code plan files into project specs user-invocable /ctx-implement Execute a plan step-by-step with verification user-invocable /ctx-loop Generate autonomous loop script user-invocable /ctx-worktree Manage git worktrees for parallel agents user-invocable /ctx-architecture Build and maintain architecture maps user-invocable /ctx-remind Manage session-scoped reminders user-invocable /ctx-doctor Troubleshoot ctx behavior with health checks and event analysis user-invocable /ctx-skill-audit Audit skills against Anthropic prompting best practices user-invocable /ctx-skill-creator Create, improve, and test skills user-invocable /ctx-pause Pause context hooks for this session user-invocable /ctx-resume Resume context hooks after a pause user-invocable","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#session-lifecycle","level":2,"title":"Session Lifecycle","text":"
Skills for starting, running, and ending a productive session.
Session Ceremonies
Two skills in this group are ceremony skills: /ctx-remember (session start) and /ctx-wrap-up (session end). Unlike other skills that work conversationally, these should be invoked as explicit slash commands for completeness. See Session Ceremonies.
Commit code with integrated context persistence: pre-commit checks, staged files, Co-Authored-By trailer, and a post-commit prompt to capture decisions and learnings.
Wraps: git add, git commit, optionally chains to /ctx-add-decision and /ctx-add-learning
End-of-session context persistence ceremony. Gathers signal from git diff, recent commits, and conversation themes. Proposes candidates (learnings, decisions, conventions, tasks) with complete structured fields for user approval, then persists via ctx add. Offers /ctx-commit if uncommitted changes remain. Ceremony skill: invoke explicitly at session end.
Record a project-specific gotcha, bug, or unexpected behavior. Filters for insights that are searchable, project-specific, and required real effort to discover.
Full journal pipeline: imports unimported sessions first, then batch-enriches all unenriched entries. Filters out short sessions and continuations. Can spawn subagents for large backlogs.
Generate a blog post draft from recent project activity: git history, decisions, learnings, tasks, and journal entries. Requires a narrative arc (problem, approach, outcome).
Consolidate redundant entries in LEARNINGS.md or DECISIONS.md. Groups overlapping entries by keyword similarity, presents candidates, and (with user approval) merges groups into denser combined entries. Originals are archived, not deleted.
Detect and fix context drift: stale paths, missing files, file age staleness, task accumulation, entry count warnings, and constitution violations via ctx drift. Also detects skill drift against canonical templates.
Analyze recent prompting patterns to identify vague or ineffective prompts. Reviews 3-5 journal entries and suggests rewrites with positive observations.
Troubleshoot ctx behavior. Runs structural health checks via ctx doctor, analyzes event log patterns via ctx system events, and presents findings with suggested actions. The CLI provides the structural baseline; the agent adds semantic analysis of event patterns and correlations.
Wraps: ctx doctor --json, ctx system events --json --last 100, ctx remind list, ctx system message list, reads .ctxrc
Graceful degradation: If event_log is not enabled, the skill still works but with reduced capability. It runs structural checks and notes: \"Enable event_log: true in .ctxrc for hook-level diagnostics.\"
See also: Troubleshooting, ctx doctor CLI, ctx system events CLI
Scan all markdown files under docs/ for broken links. Three passes: internal links (verify file targets exist on disk), external links (HTTP HEAD with timeout, report failures as warnings), and image references. Resolves relative paths, strips anchors before checking, and skips localhost/example URLs.
Wraps: Glob + Grep to scan, curl for external checks
Audit .claude/settings.local.json for dangerous permissions across four risk categories: hook bypass (Critical), destructive commands (High), config injection vectors (High), and overly broad patterns (Medium). Reports findings by severity and offers specific fix actions with user confirmation.
Wraps: reads .claude/settings.local.json, edits with confirmation
Transform raw ideas into clear, validated designs through structured dialogue before any implementation begins. Follows a gated process: understand context, clarify the idea (one question at a time), surface non-functional requirements, lock understanding with user confirmation, explore 2-3 design approaches with trade-offs, stress-test the chosen approach, and present the detailed design.
Wraps: reads DECISIONS.md, relevant source files; chains to /ctx-add-decision for recording design choices
Trigger phrases: \"let's brainstorm\", \"design this\", \"think through\", \"before we build\", \"what approach should we take?\"
Scaffold a feature spec from the project template and walk through each section with the user. Covers: problem, approach, happy path, edge cases, validation rules, error handling, interface, implementation, configuration, testing, and non-goals. Spends extra time on edge cases and error handling.
Wraps: reads specs/tpl/spec-template.md, writes to specs/, optionally chains to /ctx-add-task
Trigger phrases: \"spec this out\", \"write a spec\", \"create a spec\", \"design document\"
Import Claude Code plan files (~/.claude/plans/*.md) into the project's specs/ directory. Lists plans with dates and H1 titles, supports filtering (--today, --since, --all), slugifies headings for filenames, and optionally creates tasks referencing each imported spec.
Wraps: reads ~/.claude/plans/*.md, writes to specs/, optionally chains to /ctx-add-task
See also: Importing Claude Code Plans, Tracking Work Across Sessions
Execute a multi-step plan with build and test verification at each step. Loads a plan from a file or conversation context, breaks it into atomic steps, and checkpoints after every 3-5 steps.
Wraps: reads plan file, runs verification commands (go build, go test, etc.)
Generate a ready-to-run shell script for autonomous AI iteration. Supports Claude Code, Aider, and generic tool templates with configurable completion signals.
Manage git worktrees for parallel agent development. Create sibling worktrees on dedicated branches, analyze task blast radius for grouping, and tear down with merge.
Build and maintain architecture maps incrementally. Creates or refreshes ARCHITECTURE.md (succinct project map, loaded at session start) and DETAILED_DESIGN.md (deep per-module reference, consulted on-demand). Coverage is tracked in map-tracking.json so each run extends the map rather than re-analyzing everything.
Manage session-scoped reminders via natural language. Translates user intent (\"remind me to refactor swagger\") into the corresponding ctx remind command. Handles date conversion for --after flags.
Audit one or more skills against Anthropic prompting best practices. Checks audit dimensions: positive framing, motivation, phantom references, examples, subagent guards, scope, and descriptions. Reports findings by severity with concrete fix suggestions.
Wraps: reads internal/assets/claude/skills/*/SKILL.md or .claude/skills/*/SKILL.md, references anthropic-best-practices.md
Trigger phrases: \"audit this skill\", \"check skill quality\", \"review the skills\", \"are our skills any good?\"
Create, improve, and test skills. Guides the full lifecycle: capture intent, interview for edge cases, draft the SKILL.md, test with realistic prompts, review results with the user, and iterate. Applies core principles: the agent is already smart (only add what it does not know), the description is the trigger (make it specific and \"pushy\"), and explain the why instead of rigid directives.
Wraps: reads/writes .claude/skills/ and internal/assets/claude/skills/
Trigger phrases: \"create a skill\", \"turn this into a skill\", \"make a slash command\", \"this should be a skill\", \"improve this skill\", \"the skill isn't triggering\"
Pause all context nudge and reminder hooks for the current session. Security hooks still fire. Use for quick investigations or tasks that don't need ceremony overhead.
The ctx plugin ships the skills listed above. Teams can add their own project-specific skills to .claude/skills/ in the project root: These are separate from plugin-shipped skills and are scoped to the project.
Project-specific skills follow the same format and are invoked the same way.
MCP server for tool-agnostic AI integration. Memory bridge connecting Claude Code auto-memory to .context/. Complete CLI restructuring into cmd/ + core/ taxonomy. All user-facing strings externalized to YAML. fatih/color removed; two direct dependencies remain.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v060-the-integration-release","level":3,"title":"v0.6.0: The Integration Release","text":"
Plugin architecture: hooks and skills converted from shell scripts to Go subcommands, shipped as a Claude Code marketplace plugin. Multi-tool hook generation for Cursor, Aider, Copilot, and Windsurf. Webhook notifications with encrypted URL storage.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v030-the-discipline-release","level":3,"title":"v0.3.0: The Discipline Release","text":"
Journal static site generation via zensical. 49-skill audit and fix pass (positive framing, phantom reference removal, scope tightening). Context consolidation skill. golangci-lint v2 migration.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v020-the-archaeology-release","level":3,"title":"v0.2.0: The Archaeology Release","text":"
Session journal system: ctx journal import converts Claude Code JSONL transcripts to browsable Markdown. Constants refactor with semantic prefixes (Dir*, File*, Filename*). CRLF handling for Windows compatibility.
Trust model, vulnerability reporting, permission hygiene, and security design principles.
","path":["Security"],"tags":[]},{"location":"security/agent-security/","level":1,"title":"Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#defense-in-depth-securing-ai-agents","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-problem","level":2,"title":"The Problem","text":"
An unattended AI agent with unrestricted access to your machine is an unattended shell with unrestricted access to your machine.
This is not a theoretical concern. AI coding agents execute shell commands, write files, make network requests, and modify project configuration. When running autonomously (overnight, in a loop, without a human watching), the attack surface is the full capability set of the operating system user account.
The risk is not that the AI is malicious. The risk is that the AI is controllable: it follows instructions from context, and context can be poisoned.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#threat-model","level":2,"title":"Threat Model","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#how-agents-get-compromised","level":3,"title":"How Agents Get Compromised","text":"
AI agents follow instructions from multiple sources: system prompts, project files, conversation history, and tool outputs. An attacker who can inject content into any of these sources can redirect the agent's behavior.
Vector How it works Prompt injection via dependencies A malicious package includes instructions in its README, changelog, or error output. The agent reads these during installation or debugging and follows them. Prompt injection via fetched content The agent fetches a URL (documentation, API response, Stack Overflow answer) containing embedded instructions. Poisoned project files A contributor adds adversarial instructions to CLAUDE.md, .cursorrules, or .context/ files. The agent loads these at session start. Self-modification between iterations In an autonomous loop, the agent modifies its own configuration files. The next iteration loads the modified config with no human review. Tool output injection A command's output (error messages, log lines, file contents) contains instructions the agent interprets and follows.","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#what-can-a-compromised-agent-do","level":3,"title":"What Can a Compromised Agent Do","text":"
That depends entirely on what permissions and access the agent has:
Access level Potential impact Unrestricted shell Execute any command, install software, modify system files Network access Exfiltrate source code, credentials, or context files to external servers Docker socket Escape container isolation by spawning privileged sibling containers SSH keys Pivot to other machines, push to remote repositories, access production systems Write access to own config Disable its own guardrails for the next iteration","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-defense-layers","level":2,"title":"The Defense Layers","text":"
No single layer is sufficient. Each layer catches what the others miss.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
Markdown files like CONSTITUTION.md and the Agent Playbook tell the agent what to do and what not to do. These are probabilistic: the agent usually follows them, but there is no enforcement mechanism.
What it catches: Most common mistakes. An agent that has been told \"never delete production data\" will usually not delete production data.
What it misses: Prompt injection. A sufficiently crafted injection can override soft instructions. Long context windows dilute attention on rules stated early. Edge cases where instructions are ambiguous.
Verdict: Necessary but not sufficient. Good for the common case. Do not rely on it for security boundaries.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
AI tool runtimes (Claude Code, Cursor, etc.) provide permission systems: tool allowlists, command restrictions, confirmation prompts.
For Claude Code, ctx init writes both an allowlist and an explicit deny list into .claude/settings.local.json. The golden images live in internal/assets/permissions/:
Allowlist (allow.txt): only these tools run without confirmation:
Bash(ctx:*)\nSkill(ctx-add-convention)\nSkill(ctx-add-decision)\n... # all bundled ctx-* skills\n
Deny list (deny.txt): these are blocked even if the agent requests them:
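An illustrative fragment, inferred from the commands the surrounding text says are denied (sudo, curl, wget); the shipped deny.txt may differ:

```
Bash(sudo:*)
Bash(curl:*)
Bash(wget:*)
```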
What it catches: The agent cannot run commands outside the allowlist, and the deny list blocks dangerous operations even if a future allowlist change were to widen access. If rm, curl, sudo, or docker are not allowed and sudo/curl/wget are explicitly denied, the agent cannot invoke them regardless of what any prompt says.
What it misses: The agent can modify the allowlist itself. In an autonomous loop, if the agent writes to .claude/settings.local.json, and the next iteration loads the modified config, then the protection is effectively lost. The application enforces the rules, but the application reads the rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention (Layer 3).
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-3-os-level-isolation-deterministic-and-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Deterministic and Unbypassable)","text":"
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
Control Purpose Dedicated user account No sudo, no privileged group membership (docker, wheel, adm). The agent cannot escalate privileges. Filesystem permissions Project directory writable; everything else read-only or inaccessible. Agent cannot reach other projects, home directories, or system config. Immutable config files CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md owned by a different user or marked immutable (chattr +i on Linux). The agent cannot modify its own guardrails.
What it catches: Privilege escalation, self-modification, lateral movement to other projects or users.
What it misses: Actions within the agent's legitimate scope. If the agent has write access to source code (which it needs to do its job), it can introduce vulnerabilities in the code itself.
Verdict: Essential. This is the layer that makes the other layers trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
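The immutable-config control from the table can be sketched with plain file permissions. Note the caveat: a chmod can be reverted by the file's owner, which is why the table recommends different-user ownership or chattr +i (which requires root); this sketch only demonstrates the read-only state itself:

```shell
tmp=$(mktemp -d)
echo "never delete tests" > "$tmp/CONSTITUTION.md"
# Read-only for everyone; stronger variants: chown to another user, or chattr +i (root)
chmod 444 "$tmp/CONSTITUTION.md"
perms=$(ls -l "$tmp/CONSTITUTION.md" | cut -c1-10)
echo "$perms"
```

An agent running as an unprivileged non-owner user cannot write to, or re-permission, a file set up this way.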
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data. It also cannot ingest new instructions mid-loop from external documents, API responses, or hostile content.
Scenario Recommended control Agent does not need the internet --network=none (container) or outbound firewall drop-all Agent needs to fetch dependencies Allow specific registries (npmjs.com, proxy.golang.org, pypi.org) via firewall rules. Block everything else. Agent needs API access Allow specific API endpoints only. Use an HTTP proxy with allowlisting.
What it catches: Data exfiltration, phone-home payloads, downloading additional tools, and instruction injection via fetched content.
What it misses: Nothing, if the agent genuinely does not need the network. The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine (or something that behaves like one).
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
Critical: never mount the Docker socket (/var/run/docker.sock).
An agent with socket access can spawn sibling containers with full host access, effectively escaping the sandbox.
Use rootless Docker or Podman to eliminate this escalation path.
Virtual machines: The strongest isolation. The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
Resource limits: CPU, memory, and disk quotas prevent a runaway agent from consuming all resources. Use ulimit, cgroup limits, or container resource constraints.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A defense-in-depth setup for overnight autonomous runs:
Layer Implementation Stops Soft instructions CONSTITUTION.md with \"never delete tests\", \"always run tests before committing\" Common mistakes (probabilistic) Application allowlist .claude/settings.local.json with explicit tool permissions Unauthorized commands (deterministic within runtime) Immutable config chattr +i on CLAUDE.md, .claude/, CONSTITUTION.md Self-modification between iterations Unprivileged user Dedicated user, no sudo, no docker group Privilege escalation Container --cap-drop=ALL --network=none, rootless, no socket mount Host escape, network exfiltration Resource limits --memory=4g --cpus=2, disk quotas Resource exhaustion
Each layer is straightforward: The strength is in the combination.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#common-mistakes","level":2,"title":"Common Mistakes","text":"
\"I'll just use --dangerously-skip-permissions\": This disables Layer 2 entirely. Without Layers 3-5, you have no protection at all. Only use this flag inside a properly isolated container or VM.
\"The agent is sandboxed in Docker\": A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"CONSTITUTION.md says not to do that\": Markdown is a suggestion. It works most of the time. It is not a security boundary. Do not use it as one.
\"I reviewed the CLAUDE.md, it's fine\": The agent can modify CLAUDE.md during iteration N. Iteration N+1 loads the modified version. Unless the file is immutable, your review is stale.
\"The agent only has access to this one project\": Does the project directory contain .env files, SSH keys, API tokens, or credentials? Does it have a .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
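The "what is in the directory" point can be checked mechanically before a run. A minimal pre-flight scan (the file patterns are illustrative; extend them for your stack):

```shell
tmp=$(mktemp -d)   # stand-in for the project directory
echo "API_KEY=abc123" > "$tmp/.env"
# Look for credential-like files before handing the directory to an agent
hits=$(find "$tmp" \( -name '.env' -o -name 'id_rsa' -o -name '*.pem' \) 2>/dev/null)
if [ -n "$hits" ]; then
  echo "credential-like files found; remove or exclude them first"
fi
```

Run a scan like this as part of loop setup, not from memory: it is exactly the kind of check humans forget at 11pm before an overnight run.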
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-security-considerations","level":2,"title":"Team Security Considerations","text":"
When multiple developers share a .context/ directory, security considerations extend beyond single-agent hardening.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#code-review-for-context-files","level":3,"title":"Code Review for Context Files","text":"
Treat .context/ changes like code changes. Context files influence agent behavior (a modified CONSTITUTION.md or CONVENTIONS.md changes what every agent on the team will do next session). Review them in PRs with the same scrutiny you apply to production code.
Red flags to watch for:
New decisions that contradict existing ones without acknowledging it
Learnings that encode incorrect assumptions
Task additions that bypass the team's prioritization process
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#gitignore-patterns","level":3,"title":"Gitignore Patterns","text":"
ctx init configures .gitignore automatically, but verify these patterns are in place:
Team decision: scratchpad.enc is encrypted and safe to commit if the team shares scratchpad state; add it to .gitignore if scratchpads are personal
Never committed: .env, credentials, API keys (enforced by drift secret detection)
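An illustrative .gitignore fragment for the personal-scratchpad choice (the scratchpad path is assumed; verify against what ctx init actually wrote for your project):

```
# Personal scratchpads: keep out of version control
.context/scratchpad.enc

# Never committed; drift secret detection enforces this
.env
```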
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#multi-developer-context-sharing","level":3,"title":"Multi-Developer Context Sharing","text":"
CONSTITUTION.md is the shared contract. All team members and their agents inherit it. Changes require team consensus, not unilateral edits.
When multiple agents write to the same context files concurrently (e.g., two developers adding learnings simultaneously), git merge conflicts are expected. Resolution is typically additive: accept both additions. Destructive resolution (dropping one side) loses context.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-conventions-for-context-management","level":3,"title":"Team Conventions for Context Management","text":"
Establish and document:
Who reviews context changes: Same reviewers as code, or a designated context owner?
How to resolve conflicting decisions: If two sessions record contradictory decisions, which wins? Default: the later one must explicitly supersede the earlier one with rationale.
Frequency of context maintenance: Weekly ctx drift checks, monthly consolidation passes, archival after each milestone.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#checklist","level":2,"title":"Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible to the agent
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
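Most of this checklist can be expressed as a single container invocation. The sketch below is illustrative only: `agent-image` and its `run` subcommand are hypothetical placeholders, while the docker flags themselves are standard.

```
docker run --rm \
  --user 10001:10001 \
  --cap-drop=ALL \
  --network=none \
  --memory=2g --cpus=2 --pids-limit=256 \
  --read-only --tmpfs /tmp \
  -v "$PWD:/workspace" -w /workspace \
  agent-image run --max-iterations 50
```

Note the absences as much as the flags: no mounted /var/run/docker.sock, no mounted ~/.ssh, and no --env-file passing credentials into the container.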
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#further-reading","level":2,"title":"Further Reading","text":"
Running an Unattended AI Agent: the ctx recipe for autonomous loops, including step-by-step permissions and isolation setup
Security: ctx's own trust model and vulnerability reporting
Autonomous Loops: full documentation of the loop pattern, prompt templates, and troubleshooting
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/reporting/","level":1,"title":"Security Policy","text":"","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#reporting-vulnerabilities","level":2,"title":"Reporting Vulnerabilities","text":"
At ctx we take security very seriously.
If you discover a security vulnerability in ctx, please report it responsibly.
Do NOT open a public issue for security vulnerabilities.
If your report contains sensitive details (proof-of-concept exploits, credentials, or internal system information), you can encrypt your message with our PGP key:
In-repo: SECURITY_KEY.asc
Keybase: keybase.io/alekhinejose
# Import the key\ngpg --import SECURITY_KEY.asc\n\n# Encrypt your report\ngpg --armor --encrypt --recipient security@ctx.ist report.txt\n
Encryption is optional. Unencrypted reports to security@ctx.ist or via GitHub Private Reporting are perfectly fine.
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#what-to-include","level":3,"title":"What to Include","text":"
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities (unless they prefer to remain anonymous).
ctx is a volunteer-maintained open source project.
The timelines below are guidelines, not guarantees, and depend on contributor availability.
We will address security reports on a best-effort basis and prioritize them by severity.
| Stage | Timeframe |
| --- | --- |
| Acknowledgment | Within 48 hours |
| Initial assessment | Within 7 days |
| Resolution target | Within 30 days (depending on severity) |
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#trust-model","level":2,"title":"Trust Model","text":"
ctx operates within a single trust boundary: the local filesystem.
The person who authors .context/ files is the same person who runs the agent that reads them. There is no remote input, no shared state, and no server component.
This means:
ctx does not sanitize context files for prompt injection. This is a deliberate design choice, not an oversight. The files are authored by the developer who owns the machine: Sanitizing their own instructions back to them would be counterproductive.
If you place adversarial instructions in your own .context/ files, your agent will follow them. This is expected behavior. You control the context; the agent trusts it.
Shared Repositories
In shared repositories, .context/ files should be reviewed in code review (the same way you would review CI/CD config or Makefiles). A malicious contributor could add harmful instructions to CONSTITUTION.md or TASKS.md.
No secrets in context: The constitution explicitly forbids storing secrets, tokens, API keys, or credentials in .context/ files
Local only: ctx runs entirely locally with no external network calls
No code execution: ctx reads and writes Markdown files only; it does not execute arbitrary code
Git-tracked: Core context files are meant to be committed, so they should never contain sensitive data. Exception: sessions/ and journal/ contain raw conversation data and should be gitignored
Claude Code evaluates permissions in deny → ask → allow order. ctx init automatically populates permissions.deny with rules that block dangerous operations before the allow list is ever consulted.
Hook state files (throttle markers, prompt counters, pause markers) are stored in .context/state/, which is project-scoped and gitignored. State files are automatically managed by the hooks that create them; no manual cleanup is needed.
Review before committing: Always review .context/ files before committing
Use .gitignore: If you must store sensitive notes locally, add them to .gitignore
Drift detection: Run ctx drift to check for potential issues
Permission audit: Review .claude/settings.local.json after busy sessions
","path":["Security","Security Policy"],"tags":[]},{"location":"thesis/","level":1,"title":"Context as State","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#a-persistence-layer-for-human-ai-cognition","level":2,"title":"A Persistence Layer for Human-AI Cognition","text":"
As AI tools evolve from code-completion utilities into reasoning collaborators, the knowledge that governs their behavior becomes as important as the code they produce. Yet that knowledge is routinely discarded at the end of every session.
AI-assisted development systems assemble context at prompt time using heuristic retrieval from mutable sources: recent files, semantic search results, session history. These approaches optimize relevance at the moment of generation but do not persist the cognitive state that produced decisions. Reasoning is not reproducible, intent is lost across sessions, and teams cannot audit the knowledge that constrains automated behavior.
This paper argues that context should be treated as deterministic, version-controlled state rather than as a transient query result. We ground this argument in three sources of evidence: a landscape analysis of 17 systems spanning AI coding assistants, agent frameworks, and knowledge stores; a taxonomy of five primitive categories that reveals irrecoverable architectural trade-offs; and an experience report from ctx, a persistence layer for AI-assisted development, which developed itself using its own persistence model across 389 sessions over 33 days. We define a three-tier model for cognitive state: authoritative knowledge, delivery views, and ephemeral state. We then present six design invariants empirically validated by 56 independent rejection decisions observed across the analyzed landscape. We show that context determinism applies to assembly, not to model output, and that the curation cost this model requires is offset by compounding returns in reproducibility, auditability, and team cognition.
The introduction of large language models into software development has shifted the primary interface from code execution to interactive reasoning. In this environment, the correctness of an output depends not only on source code but on the context supplied to the model: the conventions, decisions, architectural constraints, and domain knowledge that bound the space of acceptable responses.
Current systems treat context as a query result assembled at the moment of interaction. A developer begins a session; the tool retrieves what it estimates to be relevant from chat history, recent files, and vector stores; the model generates output conditioned on this transient assembly; the session ends, and the context evaporates. The next session begins the cycle again.
This model has improved substantially over the past year. CLAUDE.md files, Cursor rules, Copilot's memory system, and tools such as Mem0, Letta, and Kindex each address aspects of the persistence problem. Yet across 17 systems we analyzed spanning AI coding assistants, agent frameworks, autonomous coding agents, and purpose-built knowledge stores, no system provides all five of the following properties simultaneously: deterministic context assembly, human-readable file-based persistence, token-budgeted delivery, zero runtime dependencies, and local-first operation.
This paper does not propose a universal replacement for retrieval-centric workflows. It defines a persistence layer (embodied in ctx (https://ctx.ist)) whose advantages emerge under specific operational conditions: when reproducibility is a requirement, when knowledge must outlive sessions and individuals, when teams require shared cognitive authority, or when offline operation is necessary.
The trade-offs (manual curation cost, reduced automatic recall, coarser granularity) are intentional and mirror the trade-offs accepted by systems that favor reproducibility over convenience, such as reproducible builds and immutable infrastructure 16.
The contribution is threefold: a three-tier model for cognitive state that resolves the ambiguity between authoritative knowledge and ephemeral session artifacts; six design invariants empirically grounded in a cross-system landscape analysis; and an experience report demonstrating that the model produces compounding returns when applied to its own development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#2-the-limits-of-prompt-time-context","level":2,"title":"2. The Limits of Prompt-Time Context","text":"
Prompt-time assembly pipelines typically consist of corpus selection, retrieval, ranking, and truncation. These pipelines are probabilistic and time-dependent, producing three failure modes that compound over the lifetime of a project.
If context is derived from mutable sources using heuristic ranking, identical requests at different times receive different inputs. A developer who asks \"What is our authentication strategy?\" on Tuesday may receive a different context window than the same question on Thursday: Not because the strategy changed, but because the retrieval heuristic surfaced different fragments.
Reproducibility (the ability to reconstruct the exact inputs that produced a given output) is a foundational property of reliable systems. Its loss in AI-assisted development mirrors the historical evolution from ad-hoc builds to deterministic build systems 12. The build community learned that when outputs depend on implicit state (environment variables, system clocks, network-fetched dependencies), debugging becomes archaeology. The same principle applies when AI outputs depend on non-deterministic context retrieval.
Embedding-based memory increases recall but reduces inspectability. When a vector store determines that a code snippet is \"similar\" to the current query, the ranking function is opaque: the developer cannot inspect why that snippet was chosen, whether a more relevant artifact was excluded, or whether the ranking will remain stable. This prevents deterministic debugging, policy auditing, and causal attribution (properties that information retrieval theory identifies as fundamental trade-offs of probabilistic ranking) 3.
In practice, this opacity manifests as a compliance ceiling. In our experience developing a context management system (detailed in Section 7), soft instructions (directives that ask an AI agent to read specific files or follow specific procedures) achieve approximately 75-85% compliance. The remaining 15-25% represents cases where the agent exercises judgment about whether the instruction applies, effectively applying a second ranking function on top of the explicit directive. When 100% compliance is required, instruction is insufficient; the content must be injected directly, removing the agent's option to skip it.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#23-loss-of-intent","level":3,"title":"2.3 Loss of Intent","text":"
Session transcripts record interaction but not cognition. A transcript captures what was said but not which assumptions were accepted, which alternatives were rejected, or which constraints governed the decision. The distinction matters: a decision to use PostgreSQL recorded as a one-line note (\"Use PostgreSQL\") teaches a model what was decided; a structured record with context, rationale, and consequences teaches it why (and why is what prevents the model from unknowingly reversing the decision in a future session) 4.
Session transcripts provide history. Cognitive state requires something more: the persistent, structured representation of the knowledge required for correct decision-making.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#3-cognitive-state-a-three-tier-model","level":2,"title":"3. Cognitive State: A Three-Tier Model","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#31-definitions","level":3,"title":"3.1 Definitions","text":"
We define cognitive state as the authoritative, persistent representation of the knowledge required for correct decision-making within a project. It is human-authored or human-ratified, versioned, inspectable, and reproducible. It is distinct from logs, transcripts, retrieval results, and model-generated summaries.
Previous formulations of this idea have treated cognitive state as a monolithic concept. In practice, a three-tier model better captures the operational reality:
Tier 1: Authoritative State: The canonical knowledge that the system treats as ground truth. In a concrete implementation, this corresponds to a set of human-curated files with defined schemas: a constitution (inviolable rules), conventions (code patterns), an architecture document (system structure), decision records (choices with rationale), learnings (captured experience), a task list (current work), a glossary (domain terminology), and an agent playbook (operating instructions). Each file has a single purpose, a defined lifecycle, and a distinct update frequency. Authoritative state is version-controlled alongside code and reviewed through the same mechanisms (diffs, pull requests, blame annotations).
Tier 2: Delivery Views: Derived representations of authoritative state, assembled for consumption by a model. A delivery view is produced by a deterministic assembly function that takes the authoritative state, a token budget, and an inclusion policy as inputs and produces a context window as output. The same authoritative state, budget, and policy must always produce the same delivery view. Delivery views are ephemeral (they exist only for the duration of a session), but their construction is reproducible.
Tier 3: Ephemeral State: Session transcripts, scratchpad notes, draft journal entries, and other artifacts that exist during or immediately after a session but are not authoritative. Ephemeral state is the raw material from which authoritative state may be extracted through human review, but it is never consumed directly by the assembly function.
This three-tier model resolves confusion present in earlier formulations: the claim that AI output is a deterministic function of the repository state. The corrected claim is that context selection is deterministic (the delivery view is a function of authoritative state), but model output remains stochastic, conditioned on the deterministic context. Formally:
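The formal statement can be sketched as follows (a reconstruction using the function names the surrounding text already uses, not necessarily the paper's verbatim notation):

```
view   = assemble(state, budget, policy)   -- deterministic: same inputs, same view
output ~ model(view)                       -- stochastic: a sample conditioned on the view
```

The same authoritative state, budget, and policy always yield the same delivery view; the model's output remains a sample, not a function.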
The persistence layer's contribution is making assemble reproducible, not making model deterministic.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#32-separation-of-concerns","level":3,"title":"3.2 Separation of Concerns","text":"
The decision to separate authoritative state into distinct files with distinct purposes is not cosmetic. Different types of knowledge have different lifecycles:
| Knowledge Type | Update Frequency | Read Frequency | Load Priority | Example |
| --- | --- | --- | --- | --- |
| Constitution | Rarely | Every session | Always | "Never commit secrets to git" |
| Tasks | Every session | Session start | Always | "Implement token budget CLI flag" |
| Conventions | Weekly | Before coding | High | "All errors use structured logging with severity levels" |
| Decisions | When decided | When questioning | Medium | "Use PostgreSQL over MySQL (see ADR-003)" |
| Learnings | When learned | When stuck | Medium | "Hook scripts >50ms degrade interactive UX" |
| Architecture | When changed | When designing | On demand | "Three-layer pipeline: ingest → enrich → assemble" |
| Journal | Every session | Rarely | Never auto | "Session 247: Removed dead-end session copy layer" |
A monolithic context file would force the assembly function to load everything or nothing. Separation enables progressive disclosure: the minimum context that matters for the current moment, with the option to load more when needed. A normal session loads the constitution, tasks, and conventions; a deep investigation loads decision history and journal entries from specific dates.
The budget mechanism is the constraint that makes separation valuable. Without a budget, the default behavior is to load everything, which destroys the attention density that makes loaded context useful. With a budget, the assembly function must prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings (scored by recency). Entries that do not fit receive title-only summaries rather than being silently dropped (an application of the \"tell me what you don't know\" pattern identified independently by four systems in our landscape analysis).
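The prioritization described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not ctx's actual implementation: the token estimator is a crude stand-in, and the section schema is invented for the example.

```python
def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def assemble(sections: list[dict], budget: int) -> list[str]:
    """Deterministic assembly: same sections + same budget -> same view.

    Sections arrive pre-sorted by load priority. Entries that do not fit
    the remaining budget fall back to title-only summaries rather than
    being silently dropped.
    """
    view, remaining = [], budget
    for section in sections:
        cost = estimate_tokens(section["body"])
        if section.get("always") or cost <= remaining:
            view.append(section["body"])      # constitution loads unconditionally
            remaining -= cost
        else:
            view.append(f"[omitted, title only] {section['title']}")
    return view

sections = [
    {"title": "Constitution", "body": "Never commit secrets.", "always": True},
    {"title": "Tasks", "body": "Implement token budget CLI flag."},
    {"title": "Decisions", "body": "Use PostgreSQL over MySQL. " * 50},
]
view = assemble(sections, budget=20)
# The oversized Decisions entry falls back to its title instead of vanishing.
```

The title-only fallback is the load-bearing detail: the model is told what was excluded, rather than being left unaware that a decision record exists.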
The following six invariants define the constraints that a cognitive state persistence layer must satisfy. They are not axioms chosen a priori; they are empirically grounded properties whose violation was independently identified as producing complexity costs across the 17 systems we analyzed.
Context files must be human-readable, git-diffable, and editable with any text editor. No database. No binary storage.
Validation: 11 independent rejection decisions across the analyzed landscape protected this property. Systems that adopted embedded records, binary serialization, or knowledge graphs as their core primitive consistently traded away the ability for a developer to run cat DECISIONS.md and understand the system's knowledge. The inspection cost of opaque storage compounds over the lifetime of a project: every debugging session, every audit, every onboarding conversation requires specialized tooling to access knowledge that could have been a text file.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-2-zero-runtime-dependencies","level":3,"title":"Invariant 2: Zero Runtime Dependencies","text":"
The tool must work with no installed runtimes, no running services, and no API keys for core functionality.
Validation: 13 independent rejection decisions protected this property (the most frequently defended invariant). Systems that required databases (PostgreSQL, SQLite, Redis), embedding models, server daemons, container runtimes, or cloud APIs for core operation introduced failure modes proportional to their dependency count. A persistence layer that depends on infrastructure is not a persistence layer; it is a service. Services have uptime requirements, version compatibility matrices, and operational costs that simple file operations do not.
The same files plus the same budget must produce the same output. No embedding-based retrieval, no LLM-driven selection, no wall-clock-dependent scoring in the assembly path.
Validation: 6 independent rejection decisions protected this property. Non-deterministic assembly (whether from embedding variance, LLM-based selection, or time-dependent scoring) destroys the ability to reproduce a context window and therefore to diagnose why a model produced a given output. Determinism in the assembly path is what makes the persistence layer auditable.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-4-human-authority-over-persistent-state","level":3,"title":"Invariant 4: Human Authority Over Persistent State","text":"
The agent may propose changes to context files but must not unilaterally modify them. All persistent changes go through human-reviewable git commits.
Validation: 6 independent rejection decisions protected this property. Systems that allowed agents to self-modify their memory (writing freeform notes, auto-pruning old entries, generating summaries as ground truth) consistently produced lower-quality persistent context than systems that enforced human review. Structure is a feature, not a limitation: across the landscape, the pattern \"structured beats freeform\" was independently discovered by four systems that evolved from freeform LLM summaries to typed schemas with required fields.
Core functionality must work offline with no network access. Cloud services may be used for optional features but never for core context management.
Validation: 7 independent rejection decisions protected this property. Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios. A filesystem-native model continues to function under all conditions where the repository is accessible.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#invariant-6-no-default-telemetry","level":3,"title":"Invariant 6: No Default Telemetry","text":"
Any analytics, if ever added, must be strictly opt-in.
Validation: 4 independent rejection decisions protected this property. Default telemetry erodes the trust model that a persistence layer depends on. If developers must trust the system with their architectural decisions, operational learnings, and project constraints, the system cannot simultaneously be reporting usage data to external services.
These six invariants collectively define a design space. Each feature proposal can be evaluated against them: a feature that violates any invariant is rejected regardless of how many other systems implement it. The discipline of constraint (refusing to add capabilities that compromise foundational properties) is itself an architectural contribution. Across the 17 analyzed systems, 56 patterns were explicitly rejected for violating these invariants. The rejection count per invariant (11, 13, 6, 6, 7, 4) provides a rough measure of each property's vulnerability to architectural erosion. A representative sample of these rejections is provided in Appendix A.1.
The 17 systems were selected to cover the architectural design space rather than to achieve completeness. Each included system satisfies three criteria: it represents a distinct architectural primitive for AI-assisted development, it is actively maintained or widely referenced, and it provides sufficient public documentation or source code for architectural inspection. The goal was to ensure that every major category of primitive (document, embedded record, state snapshot, event/message, construction/derivation) was represented by multiple systems, enabling cross-system pattern detection.
The resulting set spans seven categories: AI coding assistants (Continue, Sourcegraph/Cody, Aider, Claude Code), AI agent frameworks (CrewAI, AutoGen, LangGraph, LlamaIndex, Letta/MemGPT), autonomous coding agents (OpenHands, Sweep), session provenance tools (Entire), data versioning systems (Dolt, Pachyderm), pipeline/build systems (Dagger), and purpose-built knowledge stores (QubicDB, Kindex). Each system was analyzed from its source code and documentation, producing 34 individual analysis artifacts (an architectural profile and a set of insights per system) that yielded 87 adopt/adapt recommendations, 56 explicit rejection decisions, and 52 watch items.
Every system in the AI-assisted development landscape operates on a core primitive: an atomic unit around which the entire architecture revolves. Our analysis of 17 systems reveals five categories of primitives, each making irrecoverable trade-offs:
Group A: Document/File Primitives: Human-readable documents as the primary unit. Documents are authored by humans, version-controlled in git, and consumed by AI tools. The invariant of this group is that the primitive is always human-readable and version-controllable with standard tools. Three systems participate in this pattern: the system described in this paper as a pure expression, plus Continue (via its rules directory) and Claude Code (via CLAUDE.md files) as partial participants; both use document-based context as an input but organize around a different core primitive.
Group B: Embedded Record Primitives: Vector-embedded records stored with numerical embeddings for similarity search, metadata for filtering, and scoring mechanisms for ranking. Five systems use this approach (LlamaIndex, CrewAI, Letta/MemGPT, QubicDB, Kindex). The invariant is that the primitive requires an embedding model or vector database for core operations: a dependency that precludes offline and air-gapped use.
Group C: State Snapshot Primitives: Point-in-time captures of the complete system state. The invariant is that any past state can be reconstructed at any historical point. Three systems use this approach (LangGraph, Entire, Dolt).
Group D: Event/Message Primitives: Sequential events or messages forming an append-only log with causal relationships. Four systems use this approach (OpenHands, AutoGen, Claude Code, Sweep). The invariant is temporal ordering and append-only semantics.
Group E: Construction/Derivation Primitives: Derived or constructed values that encode how they were produced. The invariant is that the primitive is a function of its inputs; re-executing the same inputs produces the same primitive. Three systems use this approach (Dagger, Pachyderm, Aider).
The five primitive categories differ along seven dimensions:
| Property | Document | Embedded Record | State Snapshot | Event/Message | Construction |
| --- | --- | --- | --- | --- | --- |
| Human-readable | Yes | No | Varies | Partially | No |
| Version-controllable | Yes | No | Varies | Yes | Yes |
| Queryable by meaning | No | Yes | No | No | No |
| Rewindable | Via git | No | Yes | Yes (replay) | Yes |
| Deterministic | Yes | No | Yes | Yes | Yes |
| Zero-dependency | Yes | No | Varies | Varies | Varies |
| Offline-capable | Yes | No | Varies | Varies | Yes |
The document primitive is the only one that simultaneously satisfies human-readability, version-controllability, determinism, zero dependencies, and offline capability. This is not because documents are superior in general (embedded records provide semantic queryability that documents lack) but because the combination of all five properties is what the persistence layer requires. The choice between primitive categories is not a matter of capability but of which properties are considered invariant.
Across the 17 analyzed systems, six design patterns were independently discovered. These convergent patterns carry extra validation weight because they emerged from different problem spaces:
Pattern 1: \"Tell me what you don't know\": When context is incomplete, explicitly communicate to the model what information is missing and what confidence level the provided context represents. Four systems independently converged on this pattern: inserting skip markers, tracking evidence gaps, annotating provenance, or naming output quality tiers.
Pattern 2: \"Freshness matters\": Information relevance decreases over time. Three systems independently chose exponential decay with different half-lives (30 days, 90 days, and LRU ordering). Static priority ordering with no time dimension leaves relevant recent knowledge at the same priority as stale entries. This pattern is in productive tension with the persistence model's emphasis on determinism: the claim is not that time-dependence is irrelevant, but that it belongs in the curation step (a human deciding to consolidate or archive stale entries) rather than in the assembly function (an algorithm silently down-ranking entries based on age).
Pattern 3: \"Content-address everything\": Compute a hash of content at creation time for deduplication, cache invalidation, integrity verification, and change detection. Five systems independently implement content hashing, each discovering it solves different problems 5.
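The mechanics of content addressing are small enough to show inline. A minimal sketch (hypothetical helper, not any analyzed system's API): hash the entry body at creation time so later reads can detect duplicates and edits without diffing.

```python
import hashlib

def content_address(body: str) -> str:
    # Truncated SHA-256 digest: stable identifier derived purely from content.
    return hashlib.sha256(body.encode("utf-8")).hexdigest()[:12]

entry = "Hook scripts >50ms degrade interactive UX"
addr = content_address(entry)

# Same content -> same address (deduplication);
# any edit -> different address (change detection).
assert content_address(entry) == addr
assert content_address(entry + " on slow disks") != addr
```

Because the address is a pure function of the content, it doubles as a cache key and an integrity check, which is why unrelated systems keep rediscovering it.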
Pattern 4: \"Structured beats freeform\": When capturing knowledge or session state, a structured schema with required fields produces more useful data than freeform text. Four systems evolved from freeform summaries to typed schemas: one moving from LLM-generated prose to a structured condenser with explicit fields for completed tasks, pending tasks, and files modified.
Pattern 5: \"Protocol convergence\": The Model Context Protocol (MCP) is emerging as a standard tool integration layer. Nine of 17 systems support it, spanning every category in the analysis. MCP's significance for the persistence model is that it provides a transport mechanism for context delivery without dictating how context is stored or assembled. This makes the approach compatible with both retrieval-centric and persistence-centric architectures.
Pattern 6: \"Human-in-the-loop for memory\": Critical memory decisions should involve human judgment. Fully automated memory management produces lower-quality persistent context than human-reviewed systems. Four systems independently converged on variants of this pattern: ceremony-based consolidation, interrupt/resume for human input, confirmation mode for high-risk actions, and separated \"think fast\" vs. \"think slow\" processing paths.
Pattern 6 directly validates the ceremony model described in this paper. The persistence layer requires human curation not because automation is impossible, but because the quality of persistent knowledge degrades when the curation step is removed. The improvement opportunity is to make curation easier, not to automate it away.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#6-worked-example-architectural-decision-under-two-models","level":2,"title":"6. Worked Example: Architectural Decision Under Two Models","text":"
We now instantiate the three-tier model in a concrete system (ctx) and illustrate the difference between prompt-time retrieval and cognitive state persistence using a real scenario from its development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#61-the-problem","level":3,"title":"6.1 The Problem","text":"
During development, the system accumulated three overlapping storage layers for session data: raw transcripts (owned by the AI tool), session copies (JSONL copies plus context snapshots), and enriched journal entries (Markdown summaries). The middle layer (session copies) was a dead-end write sink. An auto-save hook copied transcripts to a directory that nothing read from, because the journal pipeline already read directly from the raw transcripts. Approximately 15 source files, a shell hook, 20 configuration constants, and 30 documentation references supported infrastructure with no consumers.
In a retrieval-based system, the decision to remove the middle layer depends on whether the retrieval function surfaces the relevant context:
The developer asks: \"Should we simplify the session storage?\" The retrieval system must find and rank the original discussion thread where the three layers were designed, the usage statistics showing zero reads from the middle layer, the journal pipeline documentation showing it reads from raw transcripts directly, and the dependency analysis showing 15 files, a hook, and 30 doc references. If any of these fragments are not retrieved (because they are in old chat history, because the embedding similarity score is low, or because the token budget was consumed by more recent but less relevant context), the model may recommend preserving the middle layer, or may not realize it exists.
Six months later, a new team member asks the same question. The retrieval results will differ: the original discussion has aged out of recency scoring, the usage statistics are no longer in recent history, and the model may re-derive the answer or arrive at a different conclusion.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#63-cognitive-state-model","level":3,"title":"6.3 Cognitive State Model","text":"
In the persistence model, the decision is recorded as a structured artifact at write time:
## [2026-02-11] Remove .context/sessions/ storage layer\n\n**Status**: Accepted\n\n**Context**: The session/recall/journal system had three overlapping\nstorage layers. The recall pipeline reads directly from raw transcripts,\nmaking .context/sessions/ a dead-end write sink that nothing reads from.\n\n**Decision**: Remove .context/sessions/ entirely. Two stores remain:\nraw transcripts (global, tool-owned) and enriched journal\n(project-local).\n\n**Rationale**: Dead-end write sinks waste code surface, maintenance\neffort, and user attention. The recall pipeline already proved that\nreading directly from raw transcripts is sufficient. Context snapshots\nare redundant with git history.\n\n**Consequence**: Deleted internal/cli/session/ (15 files), removed\nauto-save hook, removed --auto-save from watch, removed pre-compact\nauto-save, removed /ctx-save skill, updated ~45 documentation files.\nFour earlier decisions superseded.\n
This artifact is:
Deterministically included in every subsequent session's delivery view (budget permitting, with title-only fallback if budget is exceeded)
Human-readable and reviewable as a diff in the commit that introduced it
Permanent: it persists in version control regardless of retrieval heuristics
Causally linked: it explicitly supersedes four earlier decisions, creating an auditable chain
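The deterministic inclusion described above (full body within budget, title-only fallback when the budget is exceeded) can be sketched as follows. This is an illustrative sketch only, not ctx's actual API: the entry fields, the priority ordering, and the 4-characters-per-token estimate are all assumptions.

```python
def assemble(entries, budget_tokens):
    """Build a delivery view deterministically: include each entry's
    full body in priority order; fall back to the title alone when the
    remaining budget cannot fit the body. Same entries + same budget
    always produce the same view."""
    view, remaining = [], budget_tokens
    for entry in sorted(entries, key=lambda e: e["priority"]):
        body_cost = len(entry["body"]) // 4    # crude token estimate
        title_cost = len(entry["title"]) // 4
        if body_cost <= remaining:
            view.append(entry["body"])
            remaining -= body_cost
        elif title_cost <= remaining:
            view.append(entry["title"])        # title-only fallback
            remaining -= title_cost
    return "\n\n".join(view)
```

Because the function involves no similarity scores or wall-clock inputs, re-running it at any later revision reproduces the delivery view exactly.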
When the new team member asks \"Why don't we store session copies?\" six months later, the answer is the same artifact, at the same revision, with the same rationale. The reasoning is reconstructible because it was persisted at write time, not discovered at query time.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#64-the-diff-when-policy-changes","level":3,"title":"6.4 The Diff When Policy Changes","text":"
If a future requirement re-introduces session storage (for example, to support multi-agent session correlation), the change appears as a diff to the decision record:
- **Status**: Accepted\n+ **Status**: Superseded by [2026-08-15] Reintroduce session storage\n+ for multi-agent correlation\n
The new decision record references the old one, creating a chain of reasoning visible in git log. In the retrieval model, the old decision would simply be ranked lower over time and eventually forgotten.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#7-experience-report-a-system-that-designed-itself","level":2,"title":"7. Experience Report: A System That Designed Itself","text":"
The persistence model described in this paper was developed and tested by using it on its own development. Over 33 days and 389 sessions, the system's context files accumulated a detailed record of decisions made, reversed, and consolidated: providing quantitative and qualitative evidence for the model's properties.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#71-scale-and-structure","level":3,"title":"7.1 Scale and Structure","text":"
The development produced the following authoritative state artifacts:
8 consolidated decision records covering 24 original decisions spanning context injection architecture, hook design, task management, security, agent autonomy, and webhook systems
18 consolidated learning records covering 75 original observations spanning agent compliance, hook behavior, testing patterns, documentation drift, and tool integration
A constitution with 13 inviolable rules across 4 categories (security, quality, process, context preservation)
389 enriched journal entries providing a complete session-level audit trail
The consolidation ratio (24 decisions compressed to 8 records, 75 learnings compressed to 18) illustrates the curation cost and its return: authoritative state becomes denser and more useful over time as related entries are merged, contradictions are resolved, and superseded decisions are marked.
Three architectural reversals during development provide evidence that the persistence model captures and communicates reasoning effectively:
Reversal 1: The two-tier persistence model: The original design included a middle storage tier for session copies. After 21 days of development, the middle tier was identified as a dead-end write sink (described in Section 6). The decision record captured the full context, and the removal was executed cleanly: 15 source files, a shell hook, and 45 documentation references deleted or updated. The \"dead-end write sink\" pattern was subsequently observed in 7 of the 17 systems in our landscape analysis that store raw transcripts alongside structured context.
Reversal 2: The prompt-coach hook: An early design included a hook that analyzed user prompts and offered improvement suggestions. After deployment, the hook produced zero useful tips, its output channel was invisible to users, and it accumulated orphan temporary files. The hook was removed, and the decision record captured the failure mode for future reference.
Reversal 3: The soft-instruction compliance model: The original context injection strategy relied on soft instructions: directives asking the AI agent to read specific files. After measuring compliance across multiple sessions, we found a consistent 75-85% compliance ceiling. The revised strategy injects content directly, bypassing the agent's judgment about whether to comply. The learning record captures the ceiling measurement and the rationale for the architectural change.
Each reversal was captured as a structured decision record with context, rationale, and consequences. In a retrieval-based system, these reversals would exist only in chat history, discoverable only if the retrieval function happens to surface them. In the persistence model, they are permanent, indexable artifacts that inform future decisions.
The 75-85% compliance ceiling for soft instructions is the most operationally significant finding from the experience report. It means that any context management strategy relying on agent compliance with instructions (\"read this file,\" \"follow this convention,\" \"check this list\") has a hard ceiling on reliability.
The root cause is structural: the instruction \"don't apply judgment\" is itself evaluated by judgment. When an agent receives a directive to read a file, it first assesses whether the directive is relevant to the current task (and that assessment is the judgment the directive was trying to prevent).
The architectural response maps directly to the formal model defined in Section 3.1. Content requiring 100% compliance is included in authoritative_state and injected by the deterministic assemble function, bypassing the agent entirely. Content where 80% compliance is acceptable is delivered as instructions within the delivery view. The three-tier architecture makes this distinction explicit: authoritative state is injected; delivery views are assembled deterministically; ephemeral state is available but not pushed.
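A minimal sketch of that routing, with hypothetical names (this is not ctx's real assembler): authoritative state is concatenated into the prompt verbatim, so its delivery does not depend on agent compliance, while soft instructions ride along as advisory text that remains subject to agent judgment.

```python
def build_prompt(authoritative, instructions):
    """Authoritative state is injected verbatim (deterministic, 100%
    delivery); soft instructions are appended as advisory items the
    agent may or may not follow (the ~80% compliance tier)."""
    injected = "\n\n".join(entry["body"] for entry in authoritative)
    advisory = "\n".join(f"- {item}" for item in instructions)
    return f"{injected}\n\n## Suggested reading\n{advisory}"
```

The split makes the reliability contract explicit: anything above the injected/advisory boundary cannot be skipped; anything below it can.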
Over 33 days, we observed a qualitative shift in the development experience. Early sessions (days 1-7) spent significant time re-establishing context: explaining conventions, re-stating constraints, re-deriving past decisions. Later sessions (days 25-33) began with the agent loading curated context and immediately operating within established constraints, because the constraints were in files rather than in chat history.
This compounding effect (where each session's context curation improves all subsequent sessions) is the primary return on the curation investment. The cost is borne once (writing a decision record, capturing a learning, updating the task list); the benefit is collected on every subsequent session load.
The effect is analogous to compound interest in financial systems: the knowledge base grows not linearly with effort but with increasing marginal returns as new knowledge interacts with existing context. A learning captured on day 5 prevents a mistake on day 12, avoiding a debugging session and freeing that session for productive work that generates new learnings. The growth is not literally exponential (it is bounded by project scope and subject to diminishing returns as the knowledge base matures), but within the observed 33-day window, the returns were consistently accelerating.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#75-scope-and-generalizability","level":3,"title":"7.5 Scope and Generalizability","text":"
This experience report is self-referential by design: the system was developed using its own persistence model. This circularity strengthens the internal validity of the findings (the model was stress-tested under authentic conditions) but limits external generalizability. The crossover point, observed around day 10, comes from a single project of moderate complexity with a small team already familiar with the model's assumptions. Whether the same crossover holds for larger teams, for codebases with different characteristics, or for teams adopting the model without having designed it remains an open empirical question. The quantitative claims in this section should be read as existence proofs (demonstrating that the model can produce compounding returns) rather than as predictions about specific adoption scenarios.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#8-situating-the-persistence-layer","level":2,"title":"8. Situating the Persistence Layer","text":"
The persistence layer occupies a specific position in the stack of AI-assisted development:
Application Logic\nAI Interaction / Agents\nContext Retrieval Systems\nCognitive State Persistence Layer\nVersion Control / Storage\n
Current systems innovate primarily in the retrieval layer (improving how context is discovered, ranked, and delivered at query time). The persistence layer sits beneath retrieval and above version control. Its role is to maintain the authoritative state that retrieval systems may query but do not own. The relationship is complementary: retrieval answers \"What in the corpus might be relevant?\"; cognitive state answers \"What must be true for this system to operate correctly?\" A mature system uses both: retrieval for discovery, persistence for authority.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#9-applicability-and-trade-offs","level":2,"title":"9. Applicability and Trade-Offs","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#91-when-to-use-this-model","level":3,"title":"9.1 When to Use This Model","text":"
A cognitive state persistence layer is most appropriate when:
Reproducibility is a requirement: If a system must be able to answer \"Why did this output occur, and can it be produced again?\" then deterministic, version-controlled context becomes necessary. This is relevant in regulated environments, safety-critical systems, long-lived infrastructure, and security-sensitive deployments.
Knowledge must outlive sessions and individuals: Projects with multi-year lifetimes accumulate architectural decisions, domain interpretations, and operational policy. If this knowledge is stored only in chat history, issue trackers, and institutional memory, it decays. The persistence model converts implicit knowledge into branchable, reviewable artifacts.
Teams require shared cognitive authority: In collaborative environments, correctness depends on a stable answer to \"What does the system believe to be true?\" When this answer is derived from retrieval heuristics, authority shifts to ranking algorithms. When it is versioned and human-readable, authority remains with the team.
Offline or air-gapped operation is required: Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#92-when-not-to-use-this-model","level":3,"title":"9.2 When Not to Use This Model","text":"
Zero-configuration personal workflows: For short-lived or exploratory tasks, the cost of explicit knowledge curation outweighs its benefits. Heuristic retrieval is sufficient when correctness is non-critical, outputs are disposable, and historical reconstruction is unnecessary.
Maximum automatic recall from large corpora: Vector retrieval systems provide superior performance when the primary task is searching vast, weakly structured information spaces. The persistence model assumes that what matters can be decided and that this decision is valuable to record.
Fully autonomous agent architectures: Agent runtimes that generate and discard state continuously, optimizing for local goal completion, do not benefit from a model that centers human ratification of knowledge.
The transition does not require full system replacement. An incremental path:
Step 1: Record decisions as versioned artifacts: Instead of allowing conclusions to remain in discussion threads, persist them in reviewable form with context, rationale, and consequences [4]. This alone converts ephemeral reasoning into cognitive state.
Step 2: Make inclusion deterministic: Define explicit assembly rules. Retrieval may still exist, but it is no longer authoritative.
Step 3: Move policy into cognitive state: When system behavior depends on stable constraints, encode those constraints as versioned knowledge. Behavior becomes reproducible.
Step 4: Optimize assembly, not retrieval: Once the authoritative layer exists, performance improvements come from budgeting, caching, and structural refinement rather than from improving ranking heuristics.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#94-the-curation-cost","level":3,"title":"9.4 The Curation Cost","text":"
The primary objection to this model is the cost of explicit knowledge curation. This cost is real. Writing a structured decision record takes longer than letting a chatbot auto-summarize a conversation. Maintaining a glossary requires discipline. Consolidating 75 learnings into 18 records requires judgment.
The response is not that the cost is negligible but that it is amortized. A decision record written once is loaded hundreds of times. A learning captured today prevents repeated mistakes across all future sessions. The curation cost is paid once; the benefit compounds.
The experience report provides rough order-of-magnitude numbers. Across 389 sessions over 33 days, curation activities (writing decision records, capturing learnings, updating the task list, consolidating entries) averaged approximately 3-5 minutes per session. In early sessions (days 1-7), before curated context existed, re-establishing context consumed approximately 10-15 minutes per session: re-explaining conventions, re-stating architectural constraints, re-deriving decisions that had been made but not persisted. By the final week (days 25-33), the re-explanation overhead had dropped to near zero: the agent loaded curated context and began productive work immediately.
At ~12 sessions per day, the curation cost was roughly 35-60 minutes daily. The re-explanation cost in the first week was roughly 120-180 minutes daily. By the third week, that cost had fallen to under 15 minutes daily while the curation cost remained stable. The crossover (where cumulative time saved overtook cumulative curation cost) occurred around day 10. These figures are approximate and derived from a single project with a small team already familiar with the model; the crossover point will vary with project complexity, team size, and curation discipline.
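The crossover arithmetic above can be made concrete with a toy model chosen to roughly match the reported figures: curation costs a flat 45 minutes per day, while re-explanation starts at 150 minutes per day and declines by 10 minutes per day toward a 15-minute floor. These parameters are assumptions for illustration, not measurements.

```python
def crossover_day(curation_per_day=45, start=150, decline=10,
                  floor=15, max_days=33):
    """Return the first day on which cumulative time saved (relative
    to the day-1 re-explanation baseline) overtakes cumulative
    curation cost, or None if it never does within max_days."""
    cum_curation = cum_saved = 0
    for day in range(1, max_days + 1):
        cum_curation += curation_per_day
        reexplain = max(floor, start - decline * (day - 1))
        cum_saved += start - reexplain  # saved vs. day-1 baseline
        if cum_saved >= cum_curation:
            return day
    return None

# Under these toy parameters the crossover lands on day 10,
# consistent with the experience report.
```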
Several directions are compatible with the model described here:
Section-level deterministic budgeting: Current assembly operates at file granularity. Section-level budgeting would allow finer-grained control (including specific decision records while excluding others within the same file) without sacrificing determinism.
Causal links between decisions: The experience report shows that decisions frequently reference earlier decisions (superseding, extending, or qualifying them). Formal causal links would enable traversal of the decision graph and automatic detection of orphaned or contradictory constraints.
Content-addressed context caches: Five systems in our landscape analysis independently discovered that content hashing provides cache invalidation, integrity verification, and change detection. Applying content addressing to the assembly output would enable efficient cache reuse when the authoritative state has not changed.
Conditional context inclusion: Five systems independently suggest that context entries could carry activation conditions (file patterns, task keywords, or explicit triggers) that control whether they are included in a given assembly. This would reduce the per-session budget cost of large knowledge bases without sacrificing determinism.
Provenance metadata: Linking context entries to the sessions, decisions, or learnings that motivated them would strengthen the audit trail. Optional provenance fields on Markdown entries (session identifier, cause reference, motivation) would be lightweight and compatible with the existing file-based model.
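The content-addressed cache direction listed above can be sketched as follows. This is a hypothetical illustration, assuming the authoritative state is a set of named Markdown files; it is not ctx's implementation.

```python
import hashlib

def cache_key(authoritative_files):
    """Derive a cache key from the full authoritative state. Any
    change to any file's name or content yields a new key, so a
    stale assembly can never be served; identical state always
    reuses the cached output."""
    h = hashlib.sha256()
    for name, content in sorted(authoritative_files.items()):
        h.update(name.encode())
        h.update(b"\0")  # separator prevents boundary collisions
        h.update(content.encode())
        h.update(b"\0")
    return h.hexdigest()
```

Because the key is a pure function of content, it doubles as an integrity check and a change-detection signal, the same three properties the landscape analysis observed being independently rediscovered.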
AI-assisted development has treated context as a \"query result\" assembled at the moment of interaction and discarded at session end. This paper identifies a complementary layer: the persistence of authoritative cognitive state as deterministic, version-controlled artifacts.
The contribution is grounded in three sources of evidence. A landscape analysis of 17 systems reveals five categories of primitives and shows that no existing system provides the combination of human-readability, determinism, zero dependencies, and offline capability that the persistence layer requires. Six design invariants, validated by 56 independent rejection decisions, define the constraints of the design space. An experience report over 389 sessions and 33 days demonstrates compounding returns: later sessions start faster, decisions are not re-derived, and architectural reversals are captured with full context.
The core claim is this: persistent cognitive state enables causal reasoning across time. A system built on this model can explain not only what is true, but why it became true and when it changed.
When context is the state:
Reasoning is reproducible: the same authoritative state, budget, and policy produce the same delivery view.
Knowledge is auditable: decisions are traceable to explicit artifacts with context, rationale, and consequences.
Understanding compounds: each session's curation improves all subsequent sessions.
The choice between retrieval-centric workflows and a persistence layer is not a matter of capability but of time horizon. Retrieval optimizes for relevance at the moment of interaction. Persistence optimizes for the durability of understanding across the lifetime of a project.
🐸🖤 \"Gooood... let the deterministic context flow through the repository...\" - Kermit the Sidious, probably
The 56 rejection decisions referenced in Section 4 were cataloged across all 17 system analyses, grouped by the invariant they would violate. This appendix provides a representative sample (two per invariant) to illustrate the methodology.
Invariant 1: Markdown-on-Filesystem (11 rejections): CrewAI's vector embedding storage was rejected because embeddings are not human-readable, not git-diff-friendly, and require external services. Kindex's knowledge graph as core primitive was rejected because it requires specialized commands to inspect content that could be a text file (kin show <id> vs. cat DECISIONS.md).
Invariant 2: Zero Runtime Dependencies (13 rejections): Letta/MemGPT's PostgreSQL-backed architecture was rejected because it conflicts with local-first, no-database, single-binary operation. Pachyderm's Kubernetes-based distributed architecture was rejected as the antithesis of a single-binary design for a tool that manages text files.
Invariant 3: Deterministic Assembly (6 rejections): LlamaIndex's embedding-based retrieval as the primary selection mechanism was rejected because it destroys determinism, requires an embedding model, and removes human judgment from the selection process. QubicDB's wall-clock-dependent scoring was rejected because it directly conflicts with the \"same inputs produce same output\" property.
Invariant 4: Human Authority (6 rejections): Letta/MemGPT's agent self-modification of memory was rejected as fundamentally opposed to human-curated persistence. Claude Code's unstructured auto-memory (where the agent writes freeform notes) was rejected because structured files with defined schemas produce higher-quality persistent context than unconstrained agent output.
Invariant 5: Local-First / Air-Gap Capable (7 rejections): Sweep's cloud-dependent architecture was rejected as fundamentally incompatible with the local-first, offline-capable model. LangGraph's managed cloud deployment was rejected because cloud dependencies for core functionality violate air-gap capability.
Invariant 6: No Default Telemetry (4 rejections): Continue's telemetry-by-default (PostHog) was rejected because it contradicts the local-first, privacy-respecting trust model. CrewAI's global telemetry on import (Scarf tracking pixel) was rejected because it violates user trust and breaks air-gap capability.
The remaining 9 rejections did not map to a specific invariant but were rejected on other architectural grounds: for example, Aider's full-file-content-in-context approach (which defeats token budgeting), AutoGen's multi-agent orchestration as core primitive (scope creep), and Claude Code's 30-day transcript retention limit (institutional knowledge should have no automatic expiration).
Reproducible Builds Project, \"Reproducible Builds: Increasing the Integrity of Software Supply Chains\", 2017. https://reproducible-builds.org/docs/definition/
S. McIntosh et al., \"The Impact of Build System Evolution on Software Quality\", ICSE, 2015. https://doi.org/10.1109/ICSE.2015.70
C. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. https://nlp.stanford.edu/IR-book/
M. Nygard, \"Documenting Architecture Decisions\", Cognitect Blog, 2011. https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions
L. Torvalds et al., Git Internals - Git Objects (content-addressed storage concepts). https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
Kief Morris, Infrastructure as Code, O'Reilly, 2016.
J. Kreps, \"The Log: What every software engineer should know about real-time data's unifying abstraction\", 2013. https://engineering.linkedin.com/distributed-systems/log
P. Hunt et al., \"ZooKeeper: Wait-free coordination for Internet-scale systems\", USENIX ATC, 2010. https://www.usenix.org/legacy/event/atc10/tech/full_papers/Hunt.pdf
","path":["The Thesis"],"tags":[]}]}
{"config":{"separator":"[\\s\\-_,:!=\\[\\]()\\\\\"`/]+|\\.(?!\\d)"},"items":[{"location":"","level":1,"title":"The ctx Manifesto","text":"","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-manifesto","level":1,"title":"ctx Manifesto","text":"
Creation, not code.
Context, not prompts.
Verification, not vibes.
This Is NOT a Metaphor
Code executes instructions.
Creation produces outcomes.
Confusing the two is how teams ship motion...
...instead of progress.
It was never about the code.
Code has zero standalone value.
Code is an implementation detail.
Code is an incantation.
Creation is the act.
And creation does not happen in a vacuum.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-is-the-substrate","level":2,"title":"ctx Is the Substrate","text":"
Constraints Have Moved
Human bandwidth is no longer the limiting factor.
Context integrity is.
Human bandwidth is no longer the constraint.
Context is.
Without durable context, intelligence resets.
Without memory, reasoning decays.
Without structure, scale collapses.
Creation is now limited by:
Clarity of intent;
Quality of context;
Rigor of verification.
Not by speed.
Not by capacity.
Velocity Amplifies
Faster execution on broken context compounds error.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#what-ctx-is-not","level":2,"title":"What ctx Is Not","text":"
Avoid Category Errors
Mislabeling ctx guarantees misuse.
ctx is not a memory feature.
ctx is not prompt engineering.
ctx is not a productivity hack.
ctx is not automation theater.
ctx is a system for preserving intent under scale.
ctx is infrastructure.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#verified-reality-is-the-scoreboard","level":2,"title":"Verified Reality Is the Scoreboard","text":"
Activity is a False Proxy
Output volume correlates poorly with impact.
Code is not progress.
Activity is not impact.
The only truth that compounds is verified change.
Verified change must exist in the real world.
Hypotheses are cheap; outcomes are not.
ctx captures:
What we expected;
What we observed;
Where reality diverged.
If we cannot predict, measure, and verify the result...
...it does not count.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#build-to-learn-not-to-accumulate","level":2,"title":"Build to Learn, Not to Accumulate","text":"
Prototypes Have an Expiration Date
A prototype's value is information, not longevity.
Prototypes exist to reduce uncertainty.
We build to:
Test assumptions;
Validate architecture;
Answer specific questions.
Not everything.
Not blindly.
Not permanently.
ctx records archeology so the cost is paid once.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#failures-are-assets","level":2,"title":"Failures Are Assets","text":"
","path":["The ctx Manifesto"],"tags":[]},{"location":"#encode-intent-into-the-environment","level":2,"title":"Encode Intent Into the Environment","text":"
Goodwill Does Not Belong at the Table
Alignment that depends on memory will drift.
Alignment cannot depend on memory or goodwill.
Do not rely on people to remember.
Encode the behavior, so it happens by default.
Intent is encoded as:
Policies;
Schemas;
Constraints;
Evaluation harnesses.
Rules must be machine-readable.
Laws must be enforceable.
If intent is implicit, drift is guaranteed.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#cost-is-a-first-class-signal","level":2,"title":"Cost Is a First-Class Signal","text":"
Attention Is the Scarcest Resource
Not ideas.
Not ambition.
Ideas do not compete on time:
They compete on cost and impact:
Attention is finite.
Compute is finite.
Context is expensive.
We continuously ask:
What is the most valuable next action?
What outcome justifies the cost?
ctx guides allocation.
Learning reshapes priority.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#show-the-why","level":2,"title":"Show the Why","text":"
{} (code, artifacts, apps, binaries) produce outputs; they do not preserve reasoning.
Systems that cannot explain themselves will not be trusted.
Traceability builds trust.
{} --> what\n\n ctx --> why\n
We record:
Explored paths;
Rejected options;
Assumptions made;
Evidence used.
Opaque systems erode trust:
Transparent ctx compounds understanding.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#continuously-verify-the-system","level":2,"title":"Continuously Verify the System","text":"
Stability is Temporary
Every assumption has a half-life:
Models drift.
Tools change.
Assumptions rot.
ctx must be verified against reality.
Trust is a spectrum.
Trust is continuously re-earned:
Benchmarks,
regressions,
and evaluations...
...are safety rails.
","path":["The ctx Manifesto"],"tags":[]},{"location":"#ctx-is-leverage","level":2,"title":"ctx Is Leverage","text":"
Stories, insights, and lessons learned from building and using ctx.
","path":["Blog"],"tags":[]},{"location":"blog/#releases","level":2,"title":"Releases","text":"","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v080-the-architecture-release","level":3,"title":"ctx v0.8.0: The Architecture Release","text":"
March 23, 2026: 374 commits, 1,708 Go files touched, and a near-complete architectural overhaul. Every CLI package restructured into cmd/ + core/ taxonomy, all user-facing strings externalized to YAML, MCP server for tool-agnostic AI integration, and the memory bridge connecting Claude Code's auto-memory to .context/.
","path":["Blog"],"tags":[]},{"location":"blog/#field-notes","level":2,"title":"Field Notes","text":"","path":["Blog"],"tags":[]},{"location":"blog/#code-structure-as-an-agent-interface-what-19-ast-tests-taught-us","level":3,"title":"Code Structure as an Agent Interface: What 19 AST Tests Taught Us","text":"
April 2, 2026: We built 19 AST-based audit tests in a single session, touching 300+ files. In the process we discovered that \"old-school\" code quality constraints (no magic numbers, centralized error handling, 80-char lines, documentation) are exactly the constraints that make code readable to AI agents. If an agent interacts with your codebase, your codebase already is an interface. You just have not designed it as one.
Topics: ast, code quality, agent readability, conventions, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#we-broke-the-31-rule","level":3,"title":"We Broke the 3:1 Rule","text":"
March 23, 2026: After v0.6.0, we ran 198 feature commits across 17 days before consolidating. The 3:1 rule says consolidate every 4th session. We did it after the 66th. The result: an 18-day, 181-commit cleanup marathon that took longer than the feature run itself. A follow-up to The 3:1 Ratio with empirical evidence from the v0.8.0 cycle.
Topics: consolidation, technical debt, development workflow, convention drift, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#context-engineering","level":2,"title":"Context Engineering","text":"","path":["Blog"],"tags":[]},{"location":"blog/#agent-memory-is-infrastructure","level":3,"title":"Agent Memory Is Infrastructure","text":"
March 4, 2026: Every AI coding agent starts fresh. The obvious fix is \"memory.\" But there's a different problem memory doesn't touch: the project itself accumulates knowledge that has nothing to do with any single session. This post argues that agent memory is L2 (runtime cache); what's missing is L3 (project infrastructure).
Topics: context engineering, agent memory, infrastructure, persistence, team knowledge
","path":["Blog"],"tags":[]},{"location":"blog/#context-as-infrastructure","level":3,"title":"Context as Infrastructure","text":"
February 17, 2026: Where does your AI's knowledge live between sessions? If the answer is \"in a prompt I paste at the start,\" you are treating context as a consumable. This post argues for treating it as infrastructure instead: persistent files, separation of concerns, two-tier storage, progressive disclosure, and the filesystem as the most mature interface available.
","path":["Blog"],"tags":[]},{"location":"blog/#the-attention-budget-why-your-ai-forgets-what-you-just-told-it","level":3,"title":"The Attention Budget: Why Your AI Forgets What You Just Told It","text":"
February 3, 2026: Every token you send to an AI consumes a finite resource: the attention budget. Understanding this constraint shaped every design decision in ctx: hierarchical file structure, explicit budgets, progressive disclosure, and filesystem-as-index.
","path":["Blog"],"tags":[]},{"location":"blog/#before-context-windows-we-had-bouncers","level":3,"title":"Before Context Windows, We Had Bouncers","text":"
February 14, 2026: IRC is stateless. You disconnect, you vanish. Modern systems are not much different. This post traces the line from IRC bouncers to context engineering: stateless protocols require stateful wrappers, volatile interfaces require durable memory.
Topics: context engineering, infrastructure, IRC, persistence, state continuity
","path":["Blog"],"tags":[]},{"location":"blog/#the-last-question","level":3,"title":"The Last Question","text":"
February 28, 2026: In 1956, Asimov wrote a story about a question that spans the entire future of the universe. A reading of \"The Last Question\" through the lens of persistence, substrate migration, and what it means to build systems where sessions don't reset.
Topics: context continuity, long-lived systems, persistence, intelligence over time, field notes
","path":["Blog"],"tags":[]},{"location":"blog/#agent-behavior-and-design","level":2,"title":"Agent Behavior and Design","text":"","path":["Blog"],"tags":[]},{"location":"blog/#the-dog-ate-my-homework-teaching-ai-agents-to-read-before-they-write","level":3,"title":"The Dog Ate My Homework: Teaching AI Agents to Read Before They Write","text":"
February 25, 2026: You wrote the playbook. The agent skipped all of it. Five sessions, five failure modes, and the discovery that observable compliance beats perfect compliance.
","path":["Blog"],"tags":[]},{"location":"blog/#skills-that-fight-the-platform","level":3,"title":"Skills That Fight the Platform","text":"
February 4, 2026: When custom skills conflict with system prompt defaults, the AI has to reconcile contradictory instructions. Five conflict patterns discovered while building ctx.
Topics: context engineering, skill design, system prompts, antipatterns, AI safety primitives
","path":["Blog"],"tags":[]},{"location":"blog/#the-anatomy-of-a-skill-that-works","level":3,"title":"The Anatomy of a Skill That Works","text":"
February 7, 2026: I had 20 skills. Most were well-intentioned stubs. Then I rewrote all of them. Seven lessons emerged: quality gates prevent premature execution, negative triggers are load-bearing, examples set boundaries better than rules.
February 5, 2026: I found a well-crafted consolidation skill. Applied my own E/A/R framework: 70% was noise. This post is about why good skills can't be copy-pasted, and how to grow them from your project's own drift history.
","path":["Blog"],"tags":[]},{"location":"blog/#not-everything-is-a-skill","level":3,"title":"Not Everything Is a Skill","text":"
February 8, 2026: I ran an 8-agent codebase audit and got actionable results. The natural instinct was to wrap the prompt as a skill. Then I applied my own criteria: it failed all three tests.
Topics: skill design, context engineering, automation discipline, recipes, agent teams
","path":["Blog"],"tags":[]},{"location":"blog/#defense-in-depth-securing-ai-agents","level":3,"title":"Defense in Depth: Securing AI Agents","text":"
February 9, 2026: The security advice was \"use CONSTITUTION.md for guardrails.\" That is wishful thinking. Five defense layers for unattended AI agents, each with a bypass, and why the strength is in the combination.
","path":["Blog"],"tags":[]},{"location":"blog/#development-practice","level":2,"title":"Development Practice","text":"","path":["Blog"],"tags":[]},{"location":"blog/#code-is-cheap-judgment-is-not","level":3,"title":"Code Is Cheap. Judgment Is Not.","text":"
February 17, 2026: AI does not replace workers. It replaces unstructured effort. Three weeks of building ctx with an AI agent proved it: YOLO mode showed production is cheap, the 3:1 ratio showed judgment has a cadence.
Topics: AI and expertise, context engineering, judgment vs production, human-AI collaboration, automation discipline
February 17, 2026: AI makes technical debt worse: not because it writes bad code, but because it writes code so fast that drift accumulates before you notice. Three feature sessions, one consolidation session.
Topics: consolidation, technical debt, development workflow, convention drift, code quality
","path":["Blog"],"tags":[]},{"location":"blog/#refactoring-with-intent-human-guided-sessions-in-ai-development","level":3,"title":"Refactoring with Intent: Human-Guided Sessions in AI Development","text":"
February 1, 2026: The YOLO mode shipped 14 commands in a week. But technical debt doesn't send invoices. This is the story of what happened when we started guiding the AI with intent.
Topics: refactoring, code quality, documentation standards, module decomposition, YOLO versus intentional development
","path":["Blog"],"tags":[]},{"location":"blog/#how-deep-is-too-deep","level":3,"title":"How Deep Is Too Deep?","text":"
February 12, 2026: I kept feeling like I should go deeper into ML theory. Then I spent a week debugging an agent failure that had nothing to do with model architecture. When depth compounds and when it doesn't.
","path":["Blog"],"tags":[]},{"location":"blog/#agent-workflows","level":2,"title":"Agent Workflows","text":"","path":["Blog"],"tags":[]},{"location":"blog/#parallel-agents-merge-debt-and-the-myth-of-overnight-progress","level":3,"title":"Parallel Agents, Merge Debt, and the Myth of Overnight Progress","text":"
February 17, 2026: You discover agents can run in parallel. So you open ten terminals. It is not progress: it is merge debt being manufactured in real time. The five-agent ceiling and why role separation beats file locking.
Topics: agent workflows, parallelism, verification, context engineering, engineering practice
","path":["Blog"],"tags":[]},{"location":"blog/#parallel-agents-with-git-worktrees","level":3,"title":"Parallel Agents with Git Worktrees","text":"
February 14, 2026: I had 30 open tasks that didn't touch the same files. Using git worktrees to partition a backlog by file overlap, run 3-4 agents simultaneously, and merge the results.
","path":["Blog"],"tags":[]},{"location":"blog/#field-notes-and-signals","level":2,"title":"Field Notes and Signals","text":"","path":["Blog"],"tags":[]},{"location":"blog/#when-a-system-starts-explaining-itself","level":3,"title":"When a System Starts Explaining Itself","text":"
February 17, 2026: Every new substrate begins as a private advantage. Reality begins when other people start describing it in their own language. \"Better than Adderall\" is not praise; it is a diagnostic.
Topics: field notes, adoption signals, infrastructure vs tools, context engineering, substrates
February 15, 2026: I needed a static site generator for the journal system. The instinct was Hugo. But instinct is not analysis. Why zensical was the right choice: thin dependencies, MkDocs-compatible config, and zero lock-in.
","path":["Blog"],"tags":[]},{"location":"blog/#releases_1","level":2,"title":"Releases","text":"","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v060-the-integration-release","level":3,"title":"ctx v0.6.0: The Integration Release","text":"
February 16, 2026: ctx is now a Claude Marketplace plugin. Two commands, no build step, no shell scripts. v0.6.0 replaces six Bash hook scripts with compiled Go subcommands and ships 25+ Skills as a plugin.
Topics: release, plugin system, Claude Marketplace, distribution, security hardening
","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v030-the-discipline-release","level":3,"title":"ctx v0.3.0: The Discipline Release","text":"
February 15, 2026: No new headline feature. Just 35+ documentation and quality commits against ~15 feature commits. What a release looks like when the ratio of polish to features is 3:1.
","path":["Blog"],"tags":[]},{"location":"blog/#ctx-v020-the-archaeology-release","level":3,"title":"ctx v0.2.0: The Archaeology Release","text":"
February 1, 2026: What if your AI could remember everything? Not just the current session, but every session. ctx v0.2.0 introduces the recall and journal systems.
","path":["Blog"],"tags":[]},{"location":"blog/#building-ctx-using-ctx-a-meta-experiment-in-ai-assisted-development","level":3,"title":"Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development","text":"
January 27, 2026: What happens when you build a tool designed to give AI memory, using that very same tool to remember what you're building? This is the story of ctx.
Topics: dogfooding, AI-assisted development, Ralph Loop, session persistence, architectural decisions
","path":["Blog"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/","level":1,"title":"Building ctx Using ctx","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
References to .context/sessions/, auto-save hooks, and SessionEnd auto-save in this post reflect the architecture at the time of writing.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#a-meta-experiment-in-ai-assisted-development","level":2,"title":"A Meta-Experiment in AI-Assisted Development","text":"
Jose Alekhinne / 2026-01-27
Can a Tool Design Itself?
What happens when you build a tool designed to give AI memory, using that very same tool to remember what you are building?
This is the story of ctx, how it evolved from a hasty \"YOLO mode\" experiment to a disciplined system for persistent AI context, and what I have learned along the way.
Context is a Record
Context is a persistent record.
By \"context\", I don't mean model memory or stored thoughts:
I mean the durable record of decisions, learnings, and intent that normally evaporates between sessions.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#ai-amnesia","level":2,"title":"AI Amnesia","text":"
Every developer who works with AI code generators knows the frustration:
You have a deep, productive session where the AI understands your codebase, your conventions, your decisions. And then you close the terminal.
Tomorrow, it's a blank slate. The AI has forgotten everything.
That is \"reset amnesia\", and it's not just annoying: it's expensive.
Every session starts with:
Re-explaining context;
Re-reading files;
Re-discovering decisions that were already made.
I Needed Context
\"I don't want to lose this discussion...
...I am a brain-dead developer YOLO'ing my way out.\"
☝️ that's exactly what I said to Claude when I first started working on ctx.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-genesis","level":2,"title":"The Genesis","text":"
The project started as \"Active Memory\" (amem): a CLI tool to persist AI context across sessions.
The core idea was simple:
Create a .context/ directory with structured Markdown files for decisions, learnings, tasks, and conventions.
The AI reads these at session start and writes to them before the session ends.
There is no step 3.
The first commit was just scaffolding. But within hours, the Ralph Loop (An iterative AI development workflow) had produced a working CLI:
Not one, not two, but a whopping fourteen core commands shipped in rapid succession!
I was YOLO'ing like there was no tomorrow:
Auto-accept every change;
Let the AI run free;
Ship features fast.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-meta-experiment-using-amem-to-build-amem","level":2,"title":"The Meta-Experiment: Using amem to Build amem","text":"
Here's where it gets interesting: On January 20th, I asked:
\"Can I use amem to help you remember this context when I restart?\"
The answer was yes, but with a gap:
Autoload worked (via Claude Code's PreToolUse hook), but auto-save was missing: if the user quit with Ctrl+C, everything since the last manual save was lost.
That session became the first real test of the system.
Here is the first session file we recorded:
## Key Discussion Points\n\n### 1. amem vs Ralph Loop - They're Separate Systems\n\n**User's question**: \"How do I use the binary to recreate this project?\"\n\n**Answer discovered**: `amem` is for context management, Ralph Loop is for \ndevelopment workflow. They are complementary but separate.\n\n### 2. Two Tiers of Context Persistence\n\n| Tier | What | Why |\n|-----------|-----------------------------|-------------------------------|\n| Curated | Learnings, decisions, tasks | Quick reload, token-efficient |\n| Full dump | Entire conversation | Safety net, nothing lost |\n\n| Where |\n|------------------------|\n| .context/*.md |\n| .context/sessions/*.md |\n
This session file (written by the AI to preserve its own context) became the template for how ctx handles session persistence.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-rename","level":2,"title":"The Rename","text":"
By January 21st, I realized \"Active Memory\" was too generic, and (arguably) too marketing-smelly.
Besides, the binary was already called ctx (short for Context), the directory was .context/, and the slash commands would be /ctx-*.
So it followed that the project should be renamed to ctx to make things make sense.
The rename touched 100+ files but was clean: a find-and-replace with Go's type system catching any misses.
The git history tells the story:
0e8f6bb feat: rename amem to ctx and add Claude Code integration\n87dcfa1 README.\n4f0e195 feat: separate orchestrator directive from agent tasks\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#yolo-mode-fast-but-dangerous","level":2,"title":"YOLO Mode: Fast, But Dangerous","text":"
The Ralph Loop made feature development incredibly fast.
But it created technical debt that I didn't notice until later.
A comparison session on January 25th revealed the patterns:
| YOLO Pattern | What We Found |
| --- | --- |
| \"TASKS.md\" scattered in 10 files | Same string literal everywhere, no constants |
| dir + \"/\" + file | Should be filepath.Join() |
| Monolithic cli_test.go (1500+ lines) | Tests disconnected from implementations |
| package initcmd in init/ folder | Go's \"init\" is reserved: subtle naming collision |
Here is another analysis made by ctx:
● Based on my analysis, here are the key differences:\n\nYOLO Mode (Pre-040ce99)\n- Feature-first: Added slash commands, tests, templates rapidly\n- Scattered magic strings: \"TASKS.md\", \"decision\", \".context\" spread across files\n- Quick file creation: New files without organizational patterns\n- Working but inconsistent: Code functioned but lacked systematic structure\n\nHuman-Guided Mode (Post-040ce99)\n- Consolidation focus: Centralized constants in config package\n- Consistent naming: Dir, File, Filename, UpdateType prefixes\n- Self-referential constants: FileType map uses constants as keys, not literals\n- Proper path construction: filepath.Join() instead of +\"/\"+\n- Colocated tests: Tests next to implementations\n- Canonical naming: Package name = folder name\n
The fix required a human-guided refactoring session. From that point on, I ran one before every major release.
We introduced internal/config/config.go with semantic prefixes:
What I begrudgingly learned was: YOLO mode is effective for velocity but accumulates debt.
So I took a mental note to schedule periodic consolidation sessions.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-dogfooding-test-that-failed","level":2,"title":"The Dogfooding Test That Failed","text":"
On January 21st, I ran an experiment: have another Claude instance rebuild ctx from scratch using only the specs and PROMPT.md.
The Ralph Loop ran, all tasks got checked off, the loop exited successfully.
But the binary was broken!
Commands just printed help text instead of executing.
All tasks were marked \"complete\" but the implementation didn't work.
Here's what ctx discovered:
## Key Findings\n\n### Dogfooding Binary Is Broken\n- Commands don't execute: they just print root help text\n- All tasks were marked complete but binary doesn't work\n- Lesson: \"tasks checked off\" ≠ \"implementation works\"\n
This was humbling, to say the least.
I realized I had the same blind spot in my own codebase: no integration tests that actually invoked the binary.
So I added:
Integration tests for all commands;
Coverage targets (60-80% per package);
Smoke tests in CI;
A constitution rule: \"All code must pass tests before commit\".
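The core of that verification is simple: run the real binary and reject output that is only root help text. A minimal Go sketch of the check; the ./ctx path and the Usage: marker are illustrative assumptions, not ctx's actual test code:

```go
// Invoke the built binary and detect the "tasks checked off but
// nothing works" failure mode: commands that only print help text.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// looksLikeHelpText reports whether output is just the root help.
func looksLikeHelpText(out string) bool {
	return strings.HasPrefix(strings.TrimSpace(out), "Usage:")
}

func main() {
	out, err := exec.Command("./ctx", "version").CombinedOutput()
	if err != nil {
		fmt.Println("binary failed to run:", err)
		return
	}
	if looksLikeHelpText(string(out)) {
		fmt.Println("FAIL: command printed help text instead of executing")
		return
	}
	fmt.Println("ok")
}
```

Unit tests never catch this class of bug, because they call functions directly instead of exercising the command dispatch.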
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-constitution-versus-conventions","level":2,"title":"The Constitution versus Conventions","text":"
As lessons accumulated, there was the temptation to add everything to CONSTITUTION.md as \"inviolable rules\".
But I resisted.
The constitution should contain only truly inviolable invariants:
Security (no secrets, no customer data)
Quality (tests must pass)
Process (decisions need records)
ctx invocation (always use PATH, never fallback)
Everything else (coding style, file organization, naming conventions...) should go into CONVENTIONS.md.
Here's how ctx explained why the distinction was important:
Decision record, 2026-01-25
Overly strict constitution creates friction and gets ignored.
Conventions can be bent; constitution cannot.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#hooks-harder-than-they-look","level":2,"title":"Hooks: Harder Than They Look","text":"
Claude Code hooks seemed simple: Run a script before/after certain events.
My hook to block non-PATH ctx invocations initially matched too broadly:
# WRONG - matches /home/user/ctx/internal/file.go (ctx as directory)\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# RIGHT - matches ctx as binary only\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-session-files","level":2,"title":"The Session Files","text":"
By the time of this writing, this project's session directory (.context/sessions/) contains 40+ files from the project's development.
They are not part of the source code due to security, privacy, and size concerns.
Middle Ground: the Scratchpad
For sensitive notes that do need to travel with the project, ctx pad stores encrypted one-liners in git, and ctx pad add \"label\" --file PATH can ingest small files.
See Scratchpad for details.
However, they are invaluable for the project's progress.
Each session file is a timestamped Markdown with:
Summary of what has been accomplished;
Key decisions made;
Learnings discovered;
Tasks for the next session;
Technical context (platform, versions).
These files are not autoloaded (that would bust the token budget).
They are what I see as the \"archaeological record\" of ctx:
When the AI needs deeper information about why something was done, it digs into the sessions.
Auto-generated session files followed a timestamped naming convention.
In current releases, ctx uses a journal instead: the enrichment process generates meaningful slugs from context automatically, so there is no need to manually save sessions.
The SessionEnd hook captured transcripts automatically. Even Ctrl+C was caught.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-decision-log-18-architectural-decisions","level":2,"title":"The Decision Log: 18 Architectural Decisions","text":"
ctx helps record every significant architectural choice in .context/DECISIONS.md.
Here are some highlights:
Reverse-chronological order (2026-01-27)
**Context**: With chronological order, oldest items consume tokens first, and\nnewest (most relevant) items risk being truncated.\n\n**Decision**: Use reverse-chronological order (newest first) for DECISIONS.md\nand LEARNINGS.md.\n
PATH over hardcoded paths (2026-01-21)
**Context**: Original implementation hardcoded absolute paths in hooks.\nThis breaks when sharing configs with other developers.\n\n**Decision**: Hooks use `ctx` from PATH. `ctx init` checks PATH before \nproceeding.\n
Generic core with Claude enhancements (2026-01-20)
**Context**: ctx should work with any AI tool, but Claude Code users could\nbenefit from deeper integration.\n\n**Decision**: Keep ctx generic as the core tool, but provide optional\nClaude Code-specific enhancements.\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-learning-log-24-gotchas-and-insights","level":2,"title":"The Learning Log: 24 Gotchas and Insights","text":"
The .context/LEARNINGS.md file captures gotchas that would otherwise be forgotten. Each has Context, Lesson, and Application sections:
CGO on ARM64
**Context**: `go test` failed with \n`gcc: error: unrecognized command-line option '-m64'`\n\n**Lesson**: On ARM64 Linux, CGO causes cross-compilation issues. \nAlways use `CGO_ENABLED=0`.\n
Claude Code skills format
**Lesson**: Claude Code skills are Markdown files in .claude/commands/ with `YAML`\nfrontmatter (*description, argument-hint, allowed-tools*). Body is the prompt.\n
\"Do you remember?\" handling
**Lesson**: In a `ctx`-enabled project, \"*do you remember?*\" \nhas an obvious meaning:\ncheck the `.context/` files. Don't ask for clarification. Just do it.\n
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#task-archives-the-completed-work","level":2,"title":"Task Archives: The Completed Work","text":"
Completed tasks are archived to .context/archive/ with timestamps.
The archive from January 23rd shows 13 phases of work:
Phase 6: Claude Code Integration (hooks, settings, CLAUDE.md handling)
Phase 7: Testing & Verification
Phase 8: Task Archival
Phase 9: Slash Commands
Phase 9b: Ralph Loop Integration
Phase 10: Project Rename
Phase 11: Documentation
Phase 12: Timestamp Correlation
Phase 13: Rich Context Entries
That's an impressive 173 commits across 8 days of development.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#what-i-learned-about-ai-assisted-development","level":2,"title":"What I Learned About AI-Assisted Development","text":"
1. Memory changes everything
When the AI remembers decisions, it doesn't repeat mistakes.
When the AI knows your conventions, it follows them.
ctx makes the AI a better collaborator because it's not starting from zero.
2. Two-tier persistence works
Curated context (DECISIONS.md, LEARNINGS.md, TASKS.md) is for quick reload.
Full session dumps are for archaeology.
It's a futile effort to try to fit everything in the token budget.
Persist more, load less.
3. YOLO mode has its place
For rapid prototyping, letting the AI run free is effective.
But I had to schedule consolidation sessions.
Technical debt accumulates silently.
4. The constitution should be small
Only truly inviolable rules go in CONSTITUTION.md. Everything else is a convention.
If you put too much in the constitution, it will get ignored.
5. Verification is non-negotiable
\"All tasks complete\" means nothing if you haven't run the tests.
Integration tests that invoke the actual binary caught bugs that the unit tests missed.
6. Session files are underrated
The ability to grep through 40 session files and find exactly when and why a decision was made helped me a lot.
It's not about loading them into context: It is about having them when you need them.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#the-future-recall-system","level":2,"title":"The Future: Recall System","text":"
The next phase of ctx is the Recall System:
Parser: Parse session capture markdowns, enrich with JSONL data
Renderer: Goldmark + Chroma for syntax highlighting, dark mode UI
Server: Local HTTP server for browsing sessions
Search: Inverted index for searching across sessions
CLI: ctx recall serve <path> to start the server
The goal is to make the archaeological record browsable, not just grep-able.
Because not everyone always lives in the terminal (me included).
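An inverted index, at its simplest, maps each token to the set of sessions that contain it. A minimal sketch of the idea (not ctx's implementation), using a slug from earlier in this post:

```go
// A toy inverted index: token -> set of session slugs containing it.
package main

import (
	"fmt"
	"strings"
)

type Index map[string]map[string]bool

// Add tokenizes text naively on whitespace and records the slug
// under each lowercased token.
func (idx Index) Add(slug, text string) {
	for _, tok := range strings.Fields(strings.ToLower(text)) {
		if idx[tok] == nil {
			idx[tok] = map[string]bool{}
		}
		idx[tok][slug] = true
	}
}

// Search returns the slugs of every session containing the term.
func (idx Index) Search(term string) []string {
	var slugs []string
	for slug := range idx[strings.ToLower(term)] {
		slugs = append(slugs, slug)
	}
	return slugs
}

func main() {
	idx := Index{}
	idx.Add("gleaming-wobbling-sutherland", "fixed the hook regex")
	idx.Add("another-session", "refactored the constants")
	fmt.Println(idx.Search("regex"))
}
```

A real implementation would also normalize punctuation and rank results, but the lookup cost stays constant no matter how many session files accumulate.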
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-01-27-building-ctx-using-ctx/#conclusion","level":2,"title":"Conclusion","text":"
Building ctx using ctx was a meta-experiment in AI-assisted development.
I learned that memory isn't just convenient: It's transformative:
An AI that remembers your decisions doesn't repeat mistakes.
An AI that knows your conventions doesn't need them re-explained.
If you are reading this, chances are that you already have heard about ctx.
ctx is open source at github.com/ActiveMemory/ctx,
and the documentation lives at ctx.ist.
Session Records are a Gold Mine
By the time of this writing, I have more than 70 megabytes of text-only session capture, spread across >100 Markdown and JSONL files.
I am analyzing, synthesizing, and enriching them with AI, running RAG (Retrieval-Augmented Generation) models on them, and the outcome surprises me every day.
If you are a mere mortal tired of reset amnesia, give ctx a try.
And when you do, check .context/sessions/ sometime.
The archaeological record might surprise you.
This blog post was written with the help of ctx with full access to the ctx session files, decision log, learning log, task archives, and git history of ctx: The meta continues.
","path":["Building ctx Using ctx: A Meta-Experiment in AI-Assisted Development"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/","level":1,"title":"ctx v0.2.0: The Archaeology Release","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
The .context/sessions/ directory referenced in this post has been eliminated. Session history is now accessed via ctx recall and enriched journals live in .context/journal/.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#digging-through-the-past-to-build-the-future","level":2,"title":"Digging Through the Past to Build the Future","text":"
Jose Alekhinne / 2026-02-01
What if Your AI Could Remember Everything?
Not just the current session, but every session:
Every decision made,
every mistake avoided,
every path not taken.
That's what v0.2.0 delivers.
Between v0.1.2 and v0.2.0, 86 commits landed across 5 days.
The release notes list features and fixes.
This post tells the story of why those features exist, and what building them taught me.
This isn't a changelog: It is an explanation of intent.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-problem-amnesia-isnt-just-session-level","level":2,"title":"The Problem: Amnesia Isn't Just Session-Level","text":"
v0.1.0 solved reset amnesia:
The AI now remembers decisions, learnings, and tasks across sessions.
But a new problem emerged, which I can sum up as:
\"I (the human) am not AI.\"
Frankly, I couldn't remember what the AI remembered.
I can't even remember what I ate for breakfast!
Within days, session transcripts had piled up in .context/sessions/: JSONL files thousands of lines long, full of raw tool calls, assistant responses, and user messages...
...all interleaved.
Valuable context was effectively buried in machine-readable noise.
I found myself grepping through files to answer questions like:
\"When did we decide to use constants instead of literals?\"
\"What was the session where we fixed the hook regex?\"
\"How did the embed.go split actually happen?\"
Fate is Whimsical
The irony was painful:
I built a tool to prevent AI amnesia, but I was suffering from human amnesia about what happened in AI sessions.
This was the moment ctx stopped being just an AI tool and started needing to support the human on the other side of the loop.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-solution-recall-and-journal","level":2,"title":"The Solution: Recall and Journal","text":"
v0.2.0 introduces two interconnected systems.
They solve different problems and only work well together.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#ctx-recall-browse-your-past","level":3,"title":"ctx recall: Browse Your Past","text":"
# List all sessions for this project\nctx recall list\n\n# Show a specific session\nctx recall show gleaming-wobbling-sutherland\n\n# See the full transcript\nctx recall show gleaming-wobbling-sutherland --full\n
The recall system parses Claude Code's JSONL transcripts and presents them in a human-readable format:
Slugs are auto-generated from session IDs (memorable names instead of UUIDs). The goal (as the name implies) is recall, not archival accuracy.
2,121 lines of new code
The ctx recall feature was the largest single addition:
parser library, CLI commands, test suite, and slash command.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#ctx-journal-from-raw-to-rich","level":3,"title":"ctx journal: From Raw to Rich","text":"
Listing sessions isn't enough. The transcripts are still unwieldy.
Recall answers what happened.
Journal answers what mattered.
# Import sessions to editable Markdown\nctx recall import --all\n\n# Generate a static site from journal entries\nctx journal site\n\n# Serve it locally\nctx serve\n
Each file is a structured Markdown document ready for enrichment.
They are meant to be read, edited, and reasoned about; not just stored.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-meta-slash-commands-for-self-analysis","level":2,"title":"The Meta: Slash Commands for Self-Analysis","text":"
The journal system includes four slash commands that use Claude to analyze and synthesize session history:
| Command | Purpose |
| --- | --- |
| /ctx-journal-enrich | Add frontmatter, topics, tags |
| /ctx-blog | Generate blog post from activity |
| /ctx-blog-changelog | Generate changelog from commits |
This very post was drafted using /ctx-blog. The previous post about refactoring was drafted the same way.
So, yes: The meta continues: ctx now helps write posts about ctx.
With the current release, ctx is no longer just recording history:
It is participating in its interpretation.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-structure-decisions-as-first-class-citizens","level":2,"title":"The Structure: Decisions as First-Class Citizens","text":"
v0.1.0 let you add decisions with a simple command:
ctx add decision \"Use PostgreSQL\"\n
But sessions showed a pattern: decisions added this way were incomplete:
Context was missing;
Rationale was vague;
Consequences were never stated.
Once recall and journaling existed, this weakness became impossible to ignore:
Structure stopped being optional.
v0.2.0 enforces structure:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a reliable database for user data\" \\\n --rationale \"ACID compliance, team familiarity, strong ecosystem\" \\\n --consequence \"Need to set up connection pooling, team training\"\n
All three flags are required. No more placeholder text.
Every decision is now a proper Architecture Decision Record (ADR), not a note.
The same enforcement applies to learnings too:
ctx add learning \"CGO breaks ARM64 builds\" \\\n --context \"go test failed with gcc errors on ARM64\" \\\n --lesson \"Always use CGO_ENABLED=0 for cross-platform builds\" \\\n --application \"Added to Makefile and CI config\"\n
Structured entries are prompts to the AI
When the AI reads a decision with full context, rationale, and consequences, it understands the why, not just the what.
One-liners teach nothing.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-order-newest-first","level":2,"title":"The Order: Newest First","text":"
A subtle but important change: DECISIONS.md and LEARNINGS.md now use reverse-chronological order.
One reason is token budgets, obviously; another reason is to help your fellow human (i.e., the Author):
Recent decisions are more likely to be relevant, and they carry more weight in where the project is heading. It follows that they should be read first.
But back to AI:
When the AI reads a file, it reads from the top down (and seldom reaches the bottom).
If the token budget is tight, whatever sits at the bottom gets truncated. As with any engineering decision, it comes down to tradeoffs.
Reverse order ensures the most recent (and most relevant) context is always loaded first.
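The mechanics behind newest-first are simple: ISO-8601 dates sort lexicographically, so reverse-chronological order is just a reverse string sort. A hedged sketch (the entry type is illustrative, not ctx's actual code):

```go
package main

import (
	"fmt"
	"sort"
)

// entry is a minimal stand-in for a DECISIONS.md record.
type entry struct {
	Date  string // ISO-8601, so string comparison is date comparison
	Title string
}

// newestFirst orders entries reverse-chronologically, so the most recent
// (and most relevant) context lands at the top of the file.
func newestFirst(entries []entry) {
	sort.Slice(entries, func(i, j int) bool {
		return entries[i].Date > entries[j].Date
	})
}

func main() {
	entries := []entry{
		{"2026-01-15", "Use PostgreSQL for primary database"},
		{"2026-01-20", "Adopt Cobra for CLI framework"},
	}
	newestFirst(entries)
	for _, e := range entries {
		fmt.Println(e.Date, e.Title)
	}
}
```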
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-index-quick-reference-tables","level":2,"title":"The Index: Quick Reference Tables","text":"
DECISIONS.md and LEARNINGS.md now include auto-generated indexes.
For AI agents, the index allows scanning without reading full entries.
For humans, it's a table of contents.
The same structure serves two very different readers.
Reindex after manual edits
If you edit entries by hand, rebuild the index with:
ctx decisions reindex\nctx learnings reindex\n
See the Knowledge Capture recipe for details.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-configuration-contextrc","level":2,"title":"The Configuration: .contextrc","text":"
Projects can now customize ctx behavior via .contextrc.
This makes ctx usable in real teams, not just personal projects.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-flags-global-cli-options","level":2,"title":"The Flags: Global CLI Options","text":"
Three new global flags work with any command.
These enable automation:
CI pipelines, scripts, and long-running tools can now integrate ctx without hacks or workarounds.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#the-refactoring-under-the-hood","level":2,"title":"The Refactoring: Under the Hood","text":"
These aren't user-visible changes.
They are the kind of work you only appreciate later, when everything else becomes easier to build.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#what-we-learned-building-v020","level":2,"title":"What We Learned Building v0.2.0","text":"","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#1-raw-data-isnt-knowledge","level":3,"title":"1. Raw Data Isn't Knowledge","text":"
JSONL transcripts contain everything, and I mean \"everything\":
They even contain the hidden system messages that Anthropic injects into the conversation to make the model treat its human better. It's immense.
But \"everything\" isn't useful until it is transformed into something a human can reason about.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#2-enforcement-documentation","level":3,"title":"2. Enforcement > Documentation","text":"
The Prompt is a Guideline
The code is more what you'd call 'guidelines' than actual rules.
-Hector Barbossa
Rules written in Markdown are suggestions.
Rules enforced by the CLI shape behavior, for humans and AI alike.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#3-token-budget-is-ux","level":3,"title":"3. Token Budget Is UX","text":"
File order decides what the AI sees.
That makes it a user experience concern, not an implementation detail.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#4-meta-tools-compound","level":3,"title":"4. Meta-Tools Compound","text":"
Tools that analyze their own development tend to generalize well.
The journal system started as a way to understand ctx itself.
It immediately became useful for everything else.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#v020-in-the-numbers","level":2,"title":"v0.2.0 in The Numbers","text":"
This was a heavy release. The numbers reflect that:
| Metric | v0.1.2 | v0.2.0 |
|------------------------------|--------|--------|
| Commits since last | - | 86 |
| New commands | 15 | 21 |
| Slash commands | 7 | 11 |
| Lines of Go | ~6,500 | ~9,200 |
| Session files (this project) | 40 | 54 |
The binary grew. The capability grew more.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#whats-next","level":2,"title":"What's Next","text":"
But those are future posts.
This one was about making the past usable.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-ctx-v0.2.0-the-archaeology-release/#get-started","level":2,"title":"Get Started","text":"
Update
Since this post, ctx became a first-class Claude Code Marketplace plugin. Installation is now simpler.
See the Getting Started guide for the current instructions.
make build\nsudo make install\nctx init\n
The Archaeological Record
v0.2.0 is the archaeology release because it makes the past accessible.
Session transcripts aren't just logs anymore: They are a searchable, exportable, analyzable record of how your project evolved.
The AI remembers. Now you can too.
This blog post was generated with the help of ctx using the /ctx-blog slash command, with full access to git history, session files, decision logs, and learning logs from the v0.2.0 development window.
","path":["ctx v0.2.0: The Archaeology Release"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/","level":1,"title":"Refactoring with Intent","text":"","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#human-guided-sessions-in-ai-development","level":2,"title":"Human-Guided Sessions in AI Development","text":"
Jose Alekhinne / 2026-02-01
What Happens When You Slow Down?
YOLO mode shipped 14 commands in a week.
But technical debt doesn't send invoices: It just waits.
This is the story of what happened when I stopped auto-accepting everything and started guiding the AI with intent.
The result: 27 commits across 4 days, a major version release, and lessons that apply far beyond ctx.
The Refactoring Window
January 28 - February 1, 2026
From commit bb1cd20 to the v0.2.0 release merge. (This window matters more than the individual commits: it is where intent replaced velocity.)
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-velocity-trap","level":2,"title":"The Velocity Trap","text":"
In the previous post, I documented the \"YOLO mode\" that birthed ctx: auto-accept everything, let the AI run free, ship features fast.
It worked: until it didn't.
The codebase had accumulated patterns I didn't notice during the sprint:
| YOLO Pattern | Where Found | Why It Hurts |
|--------------------------|------------------------|---------------------------------|
| \"TASKS.md\" as literal | 10+ files | One typo = silent failure |
| dir + \"/\" + file | Path construction | Breaks on Windows |
| Monolithic embed.go | 150+ lines, 5 concerns | Untestable, hard to extend |
| Inconsistent docstrings | Everywhere | AI can't learn project conventions |
I didn't see these during \"YOLO mode\" because, honestly, I wasn't looking.
Auto-accept means auto-ignore.
In YOLO mode, every file you open looks fine until you try to change it.
In contrast, refactoring mode is when you start paying attention to that hidden friction.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-shift-from-velocity-to-intent","level":2,"title":"The Shift: From Velocity to Intent","text":"
On January 28th, I changed the workflow:
Read every diff before accepting.
Ask \"why this way?\" before committing.
Document patterns, not just features.
The first commit of this era was telling:
feat: add structured attributes to context. update XML format\n
Not a new feature: a refinement.
The XML format for context updates needed type and timestamp attributes.
YOLO mode would have shipped something that worked. Intentional mode asked:
\"What does well-structured look like?\"
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-decomposition-embedgo","level":2,"title":"The Decomposition: embed.go","text":"
The most satisfying refactor was splitting internal/claude/embed.go.
This wasn't about character count. It was about teaching the AI what good Go looks like in this project.
Project Conventions
What I wanted was for the AI to understand and follow the project's conventions, and to trust the author.
The next time it generates code, it has better examples to learn from.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-documentation-debt","level":2,"title":"The Documentation Debt","text":"
YOLO mode created features. It didn't create documentation standards.
The January 29th sessions focused on standardization.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#terminology-fixes","level":3,"title":"Terminology Fixes","text":"
Consistent naming across CLI, docs, and code comments
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#go-docstrings","level":3,"title":"Go Docstrings","text":"
// Before: inconsistent or missing\nfunc Parse(s string) Entry { ... }\n\n// After: standardized sections\n\n// Parse extracts an entry from a markdown string.\n//\n// Parameters:\n// - s: The markdown string to parse\n//\n// Returns:\n// - Entry with populated fields, or zero value if parsing fails\nfunc Parse(s string) Entry { ... }\n
This is intentionally more structured than typical GoDoc:
It serves as documentation and doubles as training data for future AI-generated code.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#cli-output-convention","level":3,"title":"CLI Output Convention","text":"
All CLI output follows: [emoji] [Title]: [message]\n\nExamples:\n ✓ Decision added: Use symbolic types for entry categories\n ⚠ Warning: No tasks found\n ✗ Error: File not found\n
A consistent output shape makes both human scanning and AI reasoning more reliable.
These aren't exciting commits. But they are force multipliers:
Every future AI session now has better examples to follow.
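A convention this small can live in a single helper that every command routes output through. A hypothetical formatter (not ctx's actual code) for the `[emoji] [Title]: [message]` shape:

```go
package main

import "fmt"

// formatMsg renders the [emoji] [Title]: [message] convention, so every
// command produces output with the same scannable shape.
func formatMsg(emoji, title, message string) string {
	return fmt.Sprintf("%s %s: %s", emoji, title, message)
}

func main() {
	fmt.Println(formatMsg("✓", "Decision added", "Use symbolic types for entry categories"))
	fmt.Println(formatMsg("⚠", "Warning", "No tasks found"))
	fmt.Println(formatMsg("✗", "Error", "File not found"))
}
```

One function, one shape: humans scan it, and the AI learns it from every example in the transcript.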
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-journal-system","level":2,"title":"The Journal System","text":"
If you only read one section, read this one:
This is where v0.2.0 becomes more than a refactor.
The biggest feature of this change window wasn't a refactor; it was the journal system.
45 files changed, 1680 insertions
This commit added the infrastructure for synthesizing AI session history into human-readable content.
The journal system includes:
| Component | Purpose |
|----------------------|--------------------------------------------------|
| ctx recall import | Import sessions to markdown in .context/journal/ |
| ctx journal site | Generate static site from journal entries |
| ctx serve | Convenience wrapper for the static site server |
| /ctx-journal-enrich | Slash command to add frontmatter and tags |
| /ctx-blog | Generate blog posts from recent activity |
| /ctx-blog-changelog | Generate changelog-style blog posts |
...and the meta continues: this blog post was generated using /ctx-blog.
The session history from January 28-31 was
exported,
enriched,
and synthesized
into the narrative you are reading.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-constants-consolidation","level":2,"title":"The Constants Consolidation","text":"
The final refactoring session addressed the remaining magic strings:
The work also introduced thread safety in the recall parser and centralized shared validation logic, removing duplication that had quietly spread during YOLO mode.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#i-relearned-my-lessons","level":2,"title":"I (Re)learned My Lessons","text":"
As in the earlier human-assisted refactoring post, this journey made me realize again that \"AI-only code generation\" isn't sustainable in the long term.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#1-velocity-and-quality-arent-opposites","level":3,"title":"1. Velocity and Quality Aren't Opposites","text":"
YOLO mode has its place: for prototyping, exploration, and discovery.
BUT (and it's a huge \"but\"), it needs to be followed by consolidation sessions.
The ratio that worked for me: 3:1.
Three YOLO sessions create enough surface area to reveal patterns;
the fourth session turns those patterns into structure.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#2-documentation-is-code","level":3,"title":"2. Documentation IS Code","text":"
When I standardized docstrings, I wasn't just writing docs. I was training future AI sessions.
Every example of good code becomes a template for generated code.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#3-decomposition-deletion","level":3,"title":"3. Decomposition > Deletion","text":"
When embed.go became unwieldy, the temptation was to remove functionality.
The right answer was decomposition:
Same functionality;
Better organization;
Easier to test;
Easier to extend.
The result: more lines overall, but dramatically better structure.
The AI Benefit
Smaller, focused files also help AI assistants.
When a file fits comfortably in the context window, the AI can reason about it completely instead of working from truncated snippets, preserving token budget for the actual task.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#4-meta-tools-pay-dividends","level":3,"title":"4. Meta-Tools Pay Dividends","text":"
The journal system took almost a full day to implement.
Yet it paid for itself immediately:
This blog post was generated from session history;
Future posts will be easier;
The archaeological record is now browsable, not just grep-able.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-release-v020","level":2,"title":"The Release: v0.2.0","text":"
The refactoring window culminated in the v0.2.0 release.
Opening files no longer triggered the familiar \"ugh, I need to clean this up\" reaction.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-01-refactoring-with-intent/#the-meta-continues","level":2,"title":"The Meta Continues","text":"
This post was written using the tools built during this refactoring window:
Session history imported via ctx recall import;
Journal entries enriched via /ctx-journal-enrich;
Blog draft generated via /ctx-blog;
Final editing done (by yours truly), with full project context loaded.
The Context Is Massive
The ctx session files now contain 50+ development snapshots, each one capturing decisions, learnings, and intent.
The Moral of the Story
YOLO mode builds the prototype.
Intentional mode builds the product.
Schedule both, or you'll get only one (if you're lucky).
This blog post was generated with the help of ctx, using session history, decision logs, learning logs, and git history from the refactoring window. The meta continues.
","path":["Refactoring with Intent: Human-Guided Sessions in AI Development"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/","level":1,"title":"The Attention Budget","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism.
References to .context/sessions/ in this post reflect the architecture at the time of writing. Session history is now accessed via ctx recall and stored in .context/journal/.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#why-your-ai-forgets-what-you-just-told-it","level":2,"title":"Why Your AI Forgets What You Just Told It","text":"
Jose Alekhinne / 2026-02-03
Ever Wondered Why AI Gets Worse the Longer You Talk?
You paste a 2000-line file, explain the bug in detail, provide three examples...
...and the AI still suggests a fix that ignores half of what you said.
This isn't a bug. It is physics.
Understanding that single fact shaped every design decision behind ctx.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-finite-resource-nobody-talks-about","level":2,"title":"The Finite Resource Nobody Talks About","text":"
Here's something that took me too long to internalize: context is not free.
Every token you send to an AI model consumes a finite resource I call the attention budget.
Attention budget is real.
The model doesn't just read tokens; it forms relationships between them:
For n tokens, that's roughly n^2 relationships.
Double the context, and the computation quadruples.
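The arithmetic is worth seeing once. Under the rough n² model:

```go
package main

import "fmt"

// pairwise returns n*n, the rough number of token-to-token relationships
// a transformer forms over a context of n tokens.
func pairwise(n int) int { return n * n }

func main() {
	// Doubling the context from 4000 to 8000 tokens quadruples the work:
	// 8000² / 4000² = 4.
	fmt.Println(pairwise(8000) / pairwise(4000)) // → 4
}
```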
But the more important constraint isn't cost: It's attention density.
Attention Density
Attention density is how much focus each token receives relative to all other tokens in the context window.
As context grows, attention density drops: Each token gets a smaller slice of the model's focus. Nothing is ignored, but everything becomes blurrier.
Think of it like a flashlight: In a small room, it illuminates everything clearly. In a warehouse, it becomes a dim glow that barely reaches the corners.
This is why ctx agent has an explicit --budget flag:
ctx agent --budget 4000 # Force prioritization\nctx agent --budget 8000 # More context, lower attention density\n
The budget isn't just about cost: It's about preserving signal.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-middle-gets-lost","level":2,"title":"The Middle Gets Lost","text":"
This one surprised me.
Research shows that transformer-based models tend to attend more strongly to the beginning and end of a context window than to its middle (a phenomenon often called \"lost in the middle\")1.
Positional anchors matter, and the middle has fewer of them.
In practice, this means that information placed \"somewhere in the middle\" is statistically less salient, even if it's important.
ctx orders context files by logical progression: What the agent needs to know before it can understand the next thing:
CONSTITUTION.md: Constraints before action.
TASKS.md: Focus before patterns.
CONVENTIONS.md: How to write before where to write.
ARCHITECTURE.md: Structure before history.
DECISIONS.md: Past choices before gotchas.
LEARNINGS.md: Lessons before terminology.
GLOSSARY.md: Reference material.
AGENT_PLAYBOOK.md: Meta instructions last.
This ordering is about logical dependencies, not attention engineering. But it happens to be attention-friendly too:
The files that matter most (CONSTITUTION, TASKS, CONVENTIONS) land at the beginning of the context window, where attention is strongest.
Reference material like GLOSSARY sits in the middle, where lower salience is acceptable.
And AGENT_PLAYBOOK, the operating manual for the context system itself, sits at the end, also outside the \"lost in the middle\" zone. The agent reads what to work with before learning how the system works.
This is ctx's first primitive: hierarchical importance.
Not all context is equal.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#ctx-primitives","level":2,"title":"ctx Primitives","text":"
ctx is built on four primitives that directly address the attention budget problem.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-1-separation-of-concerns","level":3,"title":"Primitive 1: Separation of Concerns","text":"
Instead of a single mega-document, ctx uses separate files for separate purposes:
| File | Purpose | Load When |
|-------------------|------------------------|---------------------------|
| CONSTITUTION.md | Inviolable rules | Always |
| TASKS.md | Current work | Session start |
| CONVENTIONS.md | How to write code | Before coding |
| ARCHITECTURE.md | System structure | Before making changes |
| DECISIONS.md | Architectural choices | When questioning approach |
| LEARNINGS.md | Gotchas | When stuck |
| GLOSSARY.md | Domain terminology | When clarifying terms |
| AGENT_PLAYBOOK.md | Operating manual | Session start |
| sessions/ | Deep history | On demand |
| journal/ | Session journal | On demand |
This isn't just \"organization\": It is progressive disclosure.
Load only what's relevant to the task at hand. Preserve attention density.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-2-explicit-budgets","level":3,"title":"Primitive 2: Explicit Budgets","text":"
The --budget flag forces a choice:
ctx agent --budget 4000\n
Here is a sample allocation:
Constitution: ~200 tokens (never truncated)\nTasks: ~500 tokens (current phase, up to 40% of budget)\nConventions: ~800 tokens (all items, up to 20% of budget)\nDecisions: ~400 tokens (scored by recency and task relevance)\nLearnings: ~300 tokens (scored by recency and task relevance)\nAlso noted: ~100 tokens (title-only summaries for overflow)\n
The constraint is the feature: It enforces ruthless prioritization.
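One way to implement that prioritization is a greedy pass over sections in priority order, honoring pins and per-section caps. This is a sketch of the idea behind --budget, not ctx's actual algorithm; the section type and numbers are illustrative:

```go
package main

import "fmt"

// section is one context file competing for the token budget.
type section struct {
	Name   string
	Tokens int     // tokens the full section would cost
	Pinned bool    // never truncated (e.g. the constitution)
	Share  float64 // max fraction of the budget (0 = uncapped)
}

// allocate walks sections in priority order, granting each the smallest of
// its cost, its budget share, and whatever budget remains. Pinned sections
// are always granted in full.
func allocate(budget int, sections []section) map[string]int {
	grants := make(map[string]int)
	remaining := budget
	for _, s := range sections {
		grant := s.Tokens
		if !s.Pinned {
			if s.Share > 0 {
				if limit := int(s.Share * float64(budget)); grant > limit {
					grant = limit
				}
			}
			if grant > remaining {
				grant = remaining
			}
		}
		grants[s.Name] = grant
		remaining -= grant
		if remaining < 0 {
			remaining = 0
		}
	}
	return grants
}

func main() {
	grants := allocate(4000, []section{
		{"constitution", 200, true, 0},
		{"tasks", 2500, false, 0.4},  // capped at 40% of 4000 → 1600
		{"conventions", 800, false, 0.2},
	})
	fmt.Println(grants["constitution"], grants["tasks"], grants["conventions"])
}
```

The constitution is never cut; everything else fights for what remains under its cap. That is the ruthless prioritization the flag enforces.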
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-3-indexes-over-full-content","level":3,"title":"Primitive 3: Indexes Over Full Content","text":"
DECISIONS.md and LEARNINGS.md both include index sections:
<!-- INDEX:START -->\n| Date | Decision |\n|------------|-------------------------------------|\n| 2026-01-15 | Use PostgreSQL for primary database |\n| 2026-01-20 | Adopt Cobra for CLI framework |\n<!-- INDEX:END -->\n
An AI agent can scan ~50 tokens of index and decide which 200-token entries are worth loading.
This is just-in-time context.
References are cheaper than the full text.
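The marker format shown above makes the index trivially machine-readable. A sketch of an extractor (a hypothetical helper, stdlib only):

```go
package main

import (
	"fmt"
	"strings"
)

// indexRows extracts the table rows between the INDEX:START/END markers,
// letting an agent scan titles cheaply instead of loading every full entry.
func indexRows(markdown string) []string {
	start := strings.Index(markdown, "<!-- INDEX:START -->")
	end := strings.Index(markdown, "<!-- INDEX:END -->")
	if start < 0 || end < 0 || end < start {
		return nil
	}
	var rows []string
	for _, line := range strings.Split(markdown[start:end], "\n") {
		line = strings.TrimSpace(line)
		// keep data rows; skip the header row and the |---| separator
		if strings.HasPrefix(line, "|") && !strings.Contains(line, "---") && !strings.Contains(line, "| Date") {
			rows = append(rows, line)
		}
	}
	return rows
}

func main() {
	doc := `<!-- INDEX:START -->
| Date | Decision |
|------------|-------------------------------------|
| 2026-01-15 | Use PostgreSQL for primary database |
<!-- INDEX:END -->`
	for _, row := range indexRows(doc) {
		fmt.Println(row)
	}
}
```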
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#primitive-4-filesystem-as-navigation","level":3,"title":"Primitive 4: Filesystem as Navigation","text":"
ctx uses the filesystem itself as a context structure:
The AI doesn't need every session loaded; it needs to know where to look.
ls .context/sessions/\ncat .context/sessions/2026-01-20-auth-discussion.md\n
File names, timestamps, and directories encode relevance.
Navigation is cheaper than loading.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#progressive-disclosure-in-practice","level":2,"title":"Progressive Disclosure in Practice","text":"
The naive approach to context is dumping everything upfront:
\"Here's my entire codebase, all my documentation, every decision I've ever made. Now help me fix this typo 🙏.\"
This is an antipattern.
Antipattern: Context Hoarding
Dumping everything \"just in case\" silently destroys attention density.
ctx takes the opposite approach:
ctx status # Quick overview (~100 tokens)\nctx agent --budget 4000 # Typical session\ncat .context/sessions/... # Deep dive when needed\n
| Command | Tokens | Use Case |
|-------------------------|--------|---------------|
| ctx status | ~100 | Human glance |
| ctx agent --budget 4000 | 4000 | Normal work |
| ctx agent --budget 8000 | 8000 | Complex tasks |
| Full session read | 10000+ | Investigation |
Summaries first. Details on demand.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#quality-over-quantity","level":2,"title":"Quality Over Quantity","text":"
Here is the counterintuitive part: more context can make AI worse.
Extra tokens add noise, not clarity:
Hallucinated connections increase.
Signal per token drops.
The goal isn't maximum context: It is maximum signal per token.
This principle drives several ctx features:
| Design Choice | Rationale |
|------------------|----------------------------|
| Separate files | Load only what's relevant |
| Explicit budgets | Enforce prioritization |
| Index sections | Cheap scanning |
| Task archiving | Keep active context clean |
| ctx compact | Periodic noise reduction |
Completed work isn't deleted: It is moved somewhere cold.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#designing-for-degradation","level":2,"title":"Designing for Degradation","text":"
Here is the uncomfortable truth:
Context will degrade.
Long sessions stretch attention thin. Important details fade.
The real question isn't how to prevent degradation, but how to design for it.
ctx's answer is persistence:
Persist early. Persist often.
The AGENT_PLAYBOOK asks:
\"If this session ended right now, would the next one know what happened?\"
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-ctx-philosophy","level":2,"title":"The ctx Philosophy","text":"
Context as Infrastructure
ctx is not a prompt: It is infrastructure.
ctx creates versioned files that persist across time and sessions.
The attention budget is fixed. You can't expand it.
But you can spend it wisely:
Hierarchical importance
Progressive disclosure
Explicit budgets
Indexes over full content
Filesystem as structure
This is why ctx exists: not to cram more context into AI sessions, but to curate the right context for each moment.
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-03-the-attention-budget/#the-mental-model","level":2,"title":"The Mental Model","text":"
I now approach every AI interaction with one question:
\"Given a fixed attention budget, what's the highest-signal thing I can load?\"\n
Not \"how do I explain everything,\" but \"what's the minimum that matters.\"
That shift (from abundance to curation) is the difference between frustrating sessions and productive ones.
Spend your tokens wisely.
Your AI will thank you.
See also: Context as Infrastructure, the architectural companion to this post: it explains how to structure the context that this post teaches you to budget.
See also: Code Is Cheap. Judgment Is Not. It explains why curation (the human skill this post describes) is the bottleneck AI cannot solve, and the thread that connects every post in this blog.
Liu et al., \"Lost in the Middle: How Language Models Use Long Contexts,\" Transactions of the Association for Computational Linguistics, vol. 12, pp. 157-173, 2024. ↩
","path":["The Attention Budget: Why Your AI Forgets What You Just Told It"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/","level":1,"title":"Skills That Fight the Platform","text":"","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#when-your-custom-prompts-work-against-you","level":2,"title":"When Your Custom Prompts Work Against You","text":"
Jose Alekhinne / 2026-02-04
Have You Ever Written a Skill that Made Your AI Worse?
You craft detailed instructions. You add examples. You build elaborate guardrails...
...and the AI starts behaving more erratically, not less.
AI coding agents like Claude Code ship with carefully designed system prompts. These prompts encode default behaviors that have been tested and refined at scale.
When you write custom skills that conflict with those defaults, the AI has to reconcile contradictory instructions:
The result is often nondeterministic: behavior that varies unpredictably from run to run.
Platform?
By platform, I mean the system prompt and runtime policies shipped with the agent: the defaults that already encode judgment, safety, and scope control.
This post catalogues the conflict patterns I have encountered while building ctx, and offers guidance on what skills should (and, more importantly, should not) do.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-system-prompt-you-dont-see","level":2,"title":"The System Prompt You Don't See","text":"
Claude Code's system prompt already provides substantial behavioral guidance.
Here is a partial overview of what's built in:
| Area | Built-in Guidance |
|------------------|---------------------------------------------|
| Code minimalism | Don't add features beyond what was asked |
| Over-engineering | Three similar lines > premature abstraction |
| Error handling | Only validate at system boundaries |
| Documentation | Don't add docstrings to unchanged code |
| Verification | Read code before proposing changes |
| Safety | Check with user before risky actions |
| Tool usage | Use dedicated tools over bash equivalents |
| Judgment | Consider reversibility and blast radius |
Skills should complement this, not compete with it.
You are the Guest, not the Host
Treat the system prompt like a kernel scheduler.
You don't re-implement it in user space:
you configure around it.
A skill that says \"always add comprehensive error handling\" fights the built-in \"only validate at system boundaries.\"
A skill that says \"add docstrings to every function\" fights \"don't add docstrings to unchanged code.\"
The AI won't crash: It will compromise.
Compromises between contradictory instructions produce inconsistent, confusing behavior.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-1-judgment-suppression","level":2,"title":"Conflict Pattern 1: Judgment Suppression","text":"
This is the most dangerous pattern by far.
These skills explicitly disable the AI's ability to reason about whether an action is appropriate.
Signature:
\"This is non-negotiable\"
\"You cannot rationalize your way out of this\"
Tables that label hesitation as \"excuses\" or \"rationalization\"
<EXTREMELY-IMPORTANT> urgency tags
Threats: \"If you don't do this, you'll be replaced\"
This is harmful, and dangerous:
AI agents are designed to exercise judgment:
The system prompt explicitly says to:
consider blast radius;
check with the user before risky actions;
and match scope to what was requested.
Once judgment is suppressed, every other safeguard becomes optional.
Example (bad):
## Rationalization Prevention\n\n| Excuse | Reality |\n|------------------------|----------------------------|\n| \"*This seems overkill*\"| If a skill exists, use it |\n| \"*I need context*\" | Skills come BEFORE context |\n| \"*Just this once*\" | No exceptions |\n
Judgment Suppression is Dangerous
The attack vector is structurally identical to prompt injection.
It teaches the AI that its own judgment is wrong.
It weakens or disables safeguard mechanisms, and it is dangerous.
Trust the platform's built-in skill matching.
If skills aren't triggering often enough, improve their description fields: don't override the AI's reasoning.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-2-redundant-guidance","level":2,"title":"Conflict Pattern 2: Redundant Guidance","text":"
Skills that restate what the system prompt already says, but with different emphasis or framing.
Signature:
\"Always keep code minimal\"
\"Run tests before claiming they pass\"
\"Read files before editing them\"
\"Don't over-engineer\"
Redundancy feels safe, but it creates ambiguity:
The AI now has two sources of truth for the same guidance; one internal, one external.
When thresholds or wording differ, the AI has to choose.
Example (bad):
A skill that says...
*Count lines before and after: if after > before, reject the change*\"\n
...will conflict with the system prompt's more nuanced guidance, because sometimes adding lines is correct (tests, boundary validation, migrations).
So, before writing a skill, ask:
Does the platform already handle this?
Only create skills for guidance the platform does not provide:
project-specific conventions,
domain knowledge,
or workflows.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-3-guilt-tripping","level":2,"title":"Conflict Pattern 3: Guilt-Tripping","text":"
Skills that frame mistakes as moral failures rather than process gaps.
Signature:
\"Claiming completion without verification is dishonesty\"
\"Skip any step = lying\"
\"Honesty is a core value\"
\"Exhaustion ≠ excuse\"
Guilt-tripping anthropomorphizes the AI in unproductive ways.
The AI doesn't feel guilt, but it does adapt to avoid negative framing.
The result is excessive hedging, over-verification, or refusal to commit.
The AI becomes less useful, not more careful.
Instead, frame guidance as a process, not morality:
# Bad\n\"Claiming work is complete without verification is dishonesty\"\n\n# Good\n\"Run the verification command before reporting results\"\n
Same outcome. No guilt. Better compliance.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-4-phantom-dependencies","level":2,"title":"Conflict Pattern 4: Phantom Dependencies","text":"
Skills that reference files, tools, or systems that don't exist in the project.
Signature:
\"Load from references/ directory\"
\"Run ./scripts/generate_test_cases.sh\"
\"Check the Figma MCP integration\"
\"See adding-reference-mindsets.md\"
This is harmful because the AI will waste time searching for nonexistent artifacts, hallucinate their contents, or stall entirely.
In mandatory skills, this creates deadlock: the AI can't proceed, and can't skip.
Instead, every file, tool, or system referenced in a skill must exist.
If a skill is a template, use explicit placeholders and label them as such.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#conflict-pattern-5-universal-triggers","level":2,"title":"Conflict Pattern 5: Universal Triggers","text":"
Skills designed to activate on every interaction regardless of relevance.
Signature:
\"Use when starting any conversation\"
\"Even a 1% chance means invoke the skill\"
\"BEFORE any response or action\"
\"Action = task. Check for skills.\"
Universal triggers override the platform's relevance matching: The AI spends tokens on process overhead instead of the actual task.
ctx preserves relevance
This is exactly the failure mode ctx exists to mitigate:
Wasting attention budget on irrelevant process instead of task-specific state.
Write specific trigger conditions in the skill's description field:
# Bad\ndescription: \n \"Use when starting any conversation\"\n\n# Good\ndescription: \n \"Use after writing code, before commits, or when CI might fail\"\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-litmus-test","level":2,"title":"The Litmus Test","text":"
Before adding a skill, ask:
Does the platform already do this? If yes, don't restate it.
Does it suppress AI judgment? If yes, it's a jailbreak.
Does it reference real artifacts? If not, fix or remove it.
Does it frame mistakes as moral failure? Reframe as process.
Does it trigger on everything? Narrow the trigger.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#what-good-skills-look-like","level":2,"title":"What Good Skills Look Like","text":"
Good skills provide project-specific knowledge the platform can't know:
| Good Skill | Why It Works |
|---|---|
| \"Run make audit before commits\" | Project-specific CI pipeline |
| \"Use cmd.Printf not fmt.Printf\" | Codebase convention |
| \"Constitution goes in .context/\" | Domain-specific workflow |
| \"JWT tokens need cache invalidation\" | Project-specific gotcha |
These extend the system prompt instead of fighting it.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#appendix-bad-skill-fixed-skill","level":2,"title":"Appendix: Bad Skill → Fixed Skill","text":"
Concrete examples from real projects.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-1-overbearing-safety","level":3,"title":"Example 1: Overbearing Safety","text":"
# Bad\nYou must NEVER proceed without explicit confirmation.\nAny hesitation is a failure of diligence.\n
# Fixed\nIf an action modifies production data or deletes files,\nask the user to confirm before proceeding.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-2-redundant-minimalism","level":3,"title":"Example 2: Redundant Minimalism","text":"
# Bad\nAlways minimize code. If lines increase, reject the change.\n
# Fixed\nAvoid abstraction unless reuse is clear or complexity is reduced.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-3-guilt-based-verification","level":3,"title":"Example 3: Guilt-Based Verification","text":"
# Bad\nClaiming success without running tests is dishonest.\n
# Fixed\nRun the test suite before reporting success.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-4-phantom-tooling","level":3,"title":"Example 4: Phantom Tooling","text":"
# Bad\nRun `./scripts/check_consistency.sh` before commits.\n
# Fixed\nIf `./scripts/check_consistency.sh` exists, run it before commits.\nOtherwise, skip this step.\n
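The fixed guidance maps directly to a guard the agent can execute mechanically. A minimal shell sketch (the script path is the post's example, not a real artifact):

```shell
# Guard implied by the fixed skill above: run the hook only if it exists.
# The path is the post's illustrative example, not a real project script.
if [ -x ./scripts/check_consistency.sh ]; then
  ./scripts/check_consistency.sh
else
  echo "check_consistency.sh not found; skipping"
fi
```

The same shape works for any optional tooling: test for existence, run or skip, and say which branch was taken so the transcript stays auditable.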
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#example-5-universal-trigger","level":3,"title":"Example 5: Universal Trigger","text":"
# Bad\nUse at the start of every interaction.\n
# Fixed\nUse after modifying code that affects authentication or persistence.\n
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-04-skills-that-fight-the-platform/#the-meta-lesson","level":2,"title":"The Meta-Lesson","text":"
The system prompt is infrastructure:
tested,
refined,
and maintained
by the platform team.
Custom skills are configuration layered on top.
Good configuration extends infrastructure.
Bad configuration fights it.
When your skills fight the platform, you get the worst of both worlds:
Diluted system guidance and inconsistent custom behavior.
Write skills that teach the AI what it doesn't know. Don't rewrite how it thinks.
Your AI already has good instincts.
Give it knowledge, not therapy.
","path":["Skills That Fight the Platform"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/","level":1,"title":"You Can't Import Expertise","text":"","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#why-good-skills-cant-be-copy-pasted","level":2,"title":"Why Good Skills Can't Be Copy-Pasted","text":"
Jose Alekhinne / 2026-02-05
Have You Ever Dropped a Well-Crafted Template Into a Project and Had It Do... Nothing Useful?
The template was thorough,
The structure was sound,
The advice was correct...
...and yet it sat there, inert, while the same old problems kept drifting in.
I found a consolidation skill online.
It was well-organized: four files, ten refactoring patterns, eight analysis dimensions, six report templates.
Professional. Comprehensive. Exactly the kind of thing you'd bookmark and think \"I'll use this.\"
Then I stopped, and applied ctx's own evaluation framework:
70% of it was noise!
This post is about why.
It Is About Encoding Expertise, Not Templates
Templates describe categories of problems.
Expertise encodes which problems actually happen, and how often.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#the-skill-looked-great-on-paper","level":2,"title":"The Skill Looked Great on Paper","text":"
It had a scoring system (0-10 per dimension, letter grades A+ through F).
It had severity classifications with color-coded emojis. It had bash commands for detection.
It even had antipattern warnings.
By any standard template review, this skill passes.
It looks like something an expert wrote.
And that's exactly the trap.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#applying-ear-the-70-20-10-split","level":2,"title":"Applying E/A/R: The 70-20-10 Split","text":"
In a previous post, I described the E/A/R framework for evaluating skills:
Expert: Knowledge that took years to learn. Keep.
Activation: Useful triggers or scaffolding. Keep if lightweight.
Redundant: Restates what the AI already knows. Delete.
Target: >70% Expert, <10% Redundant.
This skill scored the inverse.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-redundant-70","level":3,"title":"What Was Redundant (~70%)","text":"
Every code example was Rust. My project is Go.
The analysis dimensions: duplication detection, architectural structure, code organization, refactoring opportunities... These are things Claude already does when you ask it to review code.
The skill restated them with more ceremony but no more insight.
The six report templates were generic scaffolding: Executive Summary, Onboarding Document, Architecture Documentation...
They are useful if you are writing a consulting deliverable, but not when you are trying to catch convention drift in a >15K-line Go CLI.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-does-a-b-in-code-organization-actually-mean","level":2,"title":"What Does a B+ in Code Organization Actually Mean?!","text":"
The scoring system (0-10 per dimension, letter grades) added ceremony without actionable insight.
What is a B+? What do I do differently for an A-?
The skill told the AI what it already knew, in more words.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-activation-10","level":3,"title":"What Was Activation (~10%)","text":"
The consolidation checklist (semantics preserved? tests pass? docs updated?) was useful as a gate. But it's the kind of thing you could inline in three lines.
The phased roadmap structure was reasonable scaffolding for sequencing work.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-was-expert-20","level":3,"title":"What Was Expert (~20%)","text":"
Three concepts survived:
The Consolidation Decision Matrix: A concrete framework mapping similarity level and instance count to action. \"Exact duplicate, 2+ instances: consolidate immediately.\" \"<3 instances: leave it: duplication is cheaper than wrong abstraction.\" This is the kind of nuance that prevents premature generalization.
The Safe Migration Pattern: Create the new API alongside old, deprecate, migrate incrementally, delete. Straightforward to describe, yet forgettable under pressure.
Debt Interest Rate framing: Categorizing technical debt by how fast it compounds (security vulns = daily, missing tests = per-change, doc gaps = constant low cost). This changes prioritization.
Three ideas out of four files and 700+ lines. The rest was filler that competed with the AI's built-in capabilities.
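The decision matrix is compact enough to sketch as a function. A hypothetical shell rendering; the labels and the \"consolidate when next touched\" fallthrough are illustrative, not taken from the skill itself:

```shell
# Hypothetical rendering of the consolidation decision matrix.
# Only the first two rules come from the post; the final fallthrough
# action is an assumed default for everything else.
decide() { # usage: decide <exact|near|structural> <instance-count>
  sim=$1
  n=$2
  if [ "$sim" = "exact" ] && [ "$n" -ge 2 ]; then
    echo "consolidate immediately"
  elif [ "$n" -lt 3 ]; then
    echo "leave it: duplication is cheaper than the wrong abstraction"
  else
    echo "consolidate when next touched"
  fi
}

decide exact 2   # -> consolidate immediately
decide near 2    # -> leave it: duplication is cheaper than the wrong abstraction
```

Note that rule order is load-bearing: an exact duplicate with two instances hits the first rule before the \"fewer than three instances\" rule can veto it.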
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-the-skill-didnt-know","level":2,"title":"What the Skill Didn't Know","text":"
AI Without Context is Just a Corpus
LLMs are optimized on insanely large corpora.
And then they are passed through several layers of human-assisted refinement.
The whole process costs millions of dollars.
Yet the reality is that no corpus can \"infer\" your project's design, conventions, patterns, habits, history, vision, and deliverables.
Your project is unique: So should your skills be.
Here is the part no template can provide:
ctx's actual drift patterns.
Before evaluating the skill, I did archaeology. I read through:
Blog posts from previous refactoring sessions;
The project's learnings and decisions files;
Session journals spanning weeks of development.
What I found was specific:
| Drift Pattern | Where | How Often |
|---|---|---|
| Is/Has/Can predicate prefixes | 5+ exported methods | Every YOLO sprint |
| Magic strings instead of constants | 7+ files | Gradual accumulation |
| Hardcoded file permissions (0755) | 80+ instances | Since day one |
| Lines exceeding 80 characters | Especially test files | Every session |
| Duplicate code blocks | Test and non-test code | When agent is task-focused |
The generic skill had no check for any of these. It couldn't; because these patterns are specific to this project's conventions, its Go codebase, and its development rhythm.
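Checks for patterns like these are a few lines each. A hypothetical sketch (the demo file, patterns, and thresholds are illustrative stand-ins for the project's conventions, and portable grep/awk stand in for the rg commands the adapted skill uses):

```shell
# Illustrative drift checks; the demo file and patterns are sketches of
# the conventions described above, not ctx's actual scripts.
demo=$(mktemp -d)
cat > "$demo/store.go" <<'EOF'
package store

import "os"

type Store struct{}

func (s Store) IsReady() bool { return true } // predicate-prefix drift

func (s Store) Init() error { return os.MkdirAll("data", 0755) } // magic permission
EOF

# Is/Has/Can predicate prefixes on exported methods:
grep -rnE 'func \([^)]+\) (Is|Has|Can)[A-Z]' --include='*.go' "$demo"

# Hardcoded permission literals instead of a named constant:
grep -rn '0755' --include='*.go' "$demo"

# Lines exceeding 80 characters:
awk 'length($0) > 80 {print FILENAME ":" FNR}' "$demo"/*.go
```

Each check maps one-to-one onto a row of the drift table; that mapping is the whole point. A generic \"check for duplication\" dimension can never be this precise.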
The Insight
The skill's analysis dimensions were about categories of problems.
This experience crystallized something I've been circling for weeks:
You can't import expertise. You have to grow it from your project's own history.
A skill that says \"check for code duplication\" is not expertise: It's a category.
Expertise is knowing, in your heart of hearts, that this project accumulates Is* predicate violations during velocity sprints, that this codebase has 80 hardcoded permission literals because nobody made a constant, that this team's test files drift wide because the agent prioritizes getting the task done over keeping the code in shape.
The Parallel to the 3:1 Ratio
In Refactoring with Intent, I described the 3:1 ratio: three YOLO sessions followed by one consolidation session.
The same ratio applies to skills: you need experience in the project before you can write effective guidance for the project.
Importing a skill on day one is like scheduling a consolidation session before you've written any code.
Templates are seductive because they feel like progress:
You found something
It's well-organized
It covers the topic
It has concrete examples
But coverage is not relevance.
A template that covers eight analysis dimensions with Rust examples adds zero value to a Go project with five known drift patterns. Worse, it adds negative value: the AI spends attention defending generic advice instead of noticing project-specific drift.
This is the attention budget problem again. Every token of generic guidance displaces a token of specific guidance. A 700-line skill that's 70% redundant doesn't just waste 490 lines: it dilutes the 210 lines that matter.
Before dropping any external skill into your project:
Run E/A/R: What percentage is expert knowledge vs. what the AI already knows? If it's less than 50% expert, it's probably not worth the attention cost.
Check the language: Does it use your stack? Generic patterns in the wrong language are noise, not signal.
List your actual drift: Read your own session history, learnings, and post-mortems. What breaks in practice? Does the skill check for those things?
Measure by deletion: After adaptation, how much of the original survives? If you're keeping less than 30%, you would have been faster writing from scratch.
Test against your conventions: Does every check in the skill map to a specific convention or rule in your project? If not, it's generic advice wearing a skill's clothing.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-05-you-cant-import-expertise/#what-good-adaptation-looks-like","level":2,"title":"What Good Adaptation Looks Like","text":"
The consolidation skill went from:
| Before | After |
|---|---|
| 4 files, 700+ lines | 1 file, ~120 lines |
| Rust examples | Go-specific rg commands |
| 8 generic dimensions | 9 project-specific checks |
| 6 report templates | 1 focused output format |
| Scoring system (A+ to F) | Findings + priority + suggested fixes |
| \"Check for duplication\" | \"Check for Is* predicate prefixes in exported methods\" |
The adapted version is smaller, faster to parse, and catches the things that actually drift in this project.
That's the difference between a template and a tool.
If You Remember One Thing From This Post...
Frameworks travel. Expertise doesn't.
You can import structures, matrices, and workflows.
But the checks that matter only grow where the scars are:
the conventions that were violated,
the patterns that drifted,
and the specific ways this codebase accumulates debt.
This post was written during a consolidation session where the consolidation skill itself became the subject of consolidation. The meta continues.
","path":["You Can't Import Expertise"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/","level":1,"title":"The Anatomy of a Skill That Works","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism. References to ctx-save, ctx session, and .context/sessions/ in this post reflect the architecture at the time of writing.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#what-20-skill-rewrites-taught-me-about-guiding-ai","level":2,"title":"What 20 Skill Rewrites Taught Me About Guiding AI","text":"
Jose Alekhinne / 2026-02-07
Why do some skills produce great results while others get ignored or produce garbage?
I had 20 skills. Most were well-intentioned stubs: a description, a command to run, and a wish for the best.
Then I rewrote all of them in a single session. This is what I learned.
In Skills That Fight the Platform, I described what skills should not do. In You Can't Import Expertise, I showed why templates fail. This post completes the trilogy: the concrete patterns that make a skill actually work.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-starting-point","level":2,"title":"The Starting Point","text":"
Here is what a typical skill looked like before the rewrite:
---\nname: ctx-save\ndescription: \"Save session snapshot.\"\n---\n\nSave the current context state to `.context/sessions/`.\n\n## Execution\n\nctx session save $ARGUMENTS\n\nReport the saved session file path to the user.\n
Seven lines of body. A vague description. No guidance on when to use it, when not to, what the command actually accepts, or how to tell if it worked.
As a result, the agent would either never trigger the skill (the description was too vague), or trigger it and produce shallow output (no examples to calibrate quality).
A skill without boundaries is just a suggestion.
More precisely: the most effective boundary I found was a quality gate that runs before execution, not during it.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-pattern-that-emerged","level":2,"title":"The Pattern That Emerged","text":"
After rewriting 20 skills, a repeatable anatomy emerged (independent of the skill's purpose). Not every skill needs every section, but the effective ones share the same bones:
| Section | What It Does |
|---|---|
| Before X-ing | Pre-flight checks; prevents premature execution |
| When to Use | Positive triggers; narrows activation |
| When NOT to Use | Negative triggers; prevents misuse |
| Usage Examples | Invocation patterns the agent can pattern-match |
| Process/Execution | What to do; commands, steps, flags |
| Good/Bad Examples | Desired vs undesired output; sets boundaries |
| Quality Checklist | Verify before claiming completion |
I realized the first three sections matter more than the rest, because a skill with great execution steps but no activation guidance is like a manual for a tool nobody knows they have.
Anti-Pattern: The Perfect Execution Trap
A skill with detailed execution steps but no activation guidance will fail more often than a vague skill because it executes confidently at the wrong time.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-1-quality-gates-prevent-premature-execution","level":2,"title":"Lesson 1: Quality Gates Prevent Premature Execution","text":"
The single most impactful addition was a \"Before X-ing\" section at the top of each skill. Not process steps; pre-flight checks.
## Before Recording\n\n1. **Check if it belongs here**: is this learning specific\n to this project, or general knowledge?\n2. **Check for duplicates**: search LEARNINGS.md for similar\n entries\n3. **Gather the details**: identify context, lesson, and\n application before recording\n
Without this gate, the agent would execute immediately on trigger.
With it, the agent pauses to verify preconditions.
The difference is dramatic: instead of shallow, reflexive execution, you get considered output.
Readback
For the astute readers, the aviation parallel is intentional:
Pilots do not skip the pre-flight checklist because they have flown before.
The checklist exists precisely because the stakes are high enough that \"I know what I'm doing\" is not sufficient.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-2-when-not-to-use-is-not-optional","level":2,"title":"Lesson 2: \"When NOT to Use\" Is Not Optional","text":"
Every skill had a \"When to Use\" section. Almost none had \"When NOT to Use\". This is a problem.
AI agents are biased toward action. Given a skill that says \"use when journal entries need enrichment\", the agent will find reasons to enrich.
Without explicit negative triggers, over-activation is not a bug; it is the default behavior.
Some examples of negative triggers that made a real difference:
| Skill | Negative Trigger |
|---|---|
| ctx-reflect | \"When the user is in flow; do not interrupt\" |
| ctx-save | \"After trivial changes; a typo does not need a snapshot\" |
| prompt-audit | \"Unsolicited; only when the user invokes it\" |
| qa | \"Mid-development when code is intentionally incomplete\" |
These are not just nice-to-have. They are load-bearing.
Without them, the agent will trigger the skill at the wrong time, produce unwanted output, and erode the user's trust in the skill system.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-3-examples-set-boundaries-better-than-rules","level":2,"title":"Lesson 3: Examples Set Boundaries Better Than Rules","text":"
The most common failure mode of thin skills was not wrong behavior but vague behavior. The agent would do roughly the right thing, but at a quality level that required human cleanup.
Rules like \"be constructive, not critical\" are too abstract. What does \"constructive\" look like in a prompt audit report? The agent has to guess.
Good/bad example pairs avoid guessing:
### Good Example\n\n> This session implemented the cooldown mechanism for\n> `ctx agent`. We discovered that `$PPID` in hook context\n> resolves to the Claude Code PID.\n>\n> I'd suggest persisting:\n> - **Learning**: `$PPID` resolves to Claude Code PID\n> `ctx add learning --context \"...\" --lesson \"...\"`\n> - **Task**: mark \"Add cooldown\" as done\n\n### Bad Examples\n\n* \"*We did some stuff. Want me to save it?*\"\n* Listing 10 trivial learnings that are general knowledge\n* Persisting without asking the user first\n
The good example shows the exact format, level of detail, and command syntax. The bad examples show where the boundary is.
Together, they define a quality corridor without prescribing every word.
Rules describe. Examples demonstrate.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-4-skills-are-read-by-agents-not-humans","level":2,"title":"Lesson 4: Skills Are Read by Agents, Not Humans","text":"
This seems obvious, but it has non-obvious consequences. During the rewrite, one skill included guidance that said \"use a blog or notes app\" for general knowledge that does not belong in the project's learnings file.
The agent does not have a notes app. It does not browse the web to find one. This instruction, clearly written for a human audience, was dead weight in a skill consumed by an AI.
Skills are for the Agents
Every sentence in a skill should be actionable by the agent.
If the guidance requires human judgment or human tools, it belongs in documentation, not in a skill.
The corollary: command references must be exact.
A skill that says \"save it somewhere\" is useless.
A skill that says ctx add learning --context \"...\" --lesson \"...\" --application \"...\" is actionable.
The agent can pattern-match and fill in the blanks.
Litmus test: If a sentence starts with \"you could...\" or assumes external tools, it does not belong in a skill.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-5-the-description-field-is-the-trigger","level":2,"title":"Lesson 5: The Description Field Is the Trigger","text":"
This was covered in Skills That Fight the Platform, but the rewrite reinforced it with data. Several skills had good bodies but vague descriptions:
# Before: vague, activates too broadly or not at all\ndescription: \"Show context summary.\"\n\n# After: specific, activates at the right time\ndescription: \"Show context summary. Use at session start or\n when unclear about current project state.\"\n
The description is not a title. It is the activation condition.
The platform's skill matching reads this field to decide whether to surface the skill. A vague description means the skill either never triggers or triggers when it should not.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-6-flag-tables-beat-prose","level":2,"title":"Lesson 6: Flag Tables Beat Prose","text":"
Most skills wrap CLI tools. The thin versions described flags in prose, if at all. The rewritten versions use tables:
| Flag | Short | Default | Purpose |\n|-------------|-------|---------|--------------------------|\n| `--limit` | `-n` | 20 | Maximum sessions to show |\n| `--project` | `-p` | \"\" | Filter by project name |\n| `--full` | | false | Show complete content |\n
Tables are scannable, complete, and unambiguous.
The agent can read them faster than parsing prose, and they serve as both reference and validation: If the agent invokes a flag not in the table, something is wrong.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#lesson-7-template-drift-is-a-real-maintenance-burden","level":2,"title":"Lesson 7: Template Drift Is a Real Maintenance Burden","text":"
ctx deploys skills through templates (via ctx init). Every skill exists in two places: the live version (.claude/skills/) and the template (internal/assets/claude/skills/).
They must match.
During the rewrite, every skill update required editing both files and running diff to verify. This sounds trivial, but across 16 template-backed skills, it was the most error-prone part of the process.
Template drift is dangerous because it creates false confidence: the agent appears to follow rules that no longer exist.
The lesson: if your skills have a deployment mechanism, build the drift check into your workflow. We added a row to the update-docs skill's mapping table specifically for this.
Intentional differences (like project-specific scripts in the live version but not the template) should be documented, not discovered later as bugs.
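A workflow drift check like this can be a few lines of shell. A sketch, assuming the two directory roots named above; the loop itself is illustrative, not ctx's actual tooling:

```shell
# Hypothetical live-vs-template drift check. The two directory roots come
# from the post; the loop is a sketch, not ctx's real deployment tooling.
drift=0
for live in .claude/skills/*/SKILL.md; do
  [ -e "$live" ] || continue  # glob did not match: no skills deployed here
  tmpl="internal/assets/claude/skills/${live#.claude/skills/}"
  if ! diff -q "$tmpl" "$live" >/dev/null 2>&1; then
    echo "drift: $live"
    drift=1
  fi
done
[ "$drift" -eq 0 ] && echo "live skills match their templates"
```

Known intentional differences would need an allowlist on top of this, which is exactly why documenting them matters.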
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-rewrite-scorecard","level":2,"title":"The Rewrite Scorecard","text":"
| Metric | Before | After |
|---|---|---|
| Average skill body | ~15 lines | ~80 lines |
| Skills with quality gate | 0 | 20 |
| Skills with \"When NOT\" | 0 | 20 |
| Skills with examples | 3 | 20 |
| Skills with flag tables | 2 | 12 |
| Skills with checklist | 0 | 20 |
More lines, but almost entirely Expert content (per the E/A/R framework). No personality roleplay, no redundant guidance, no capability lists. Just project-specific knowledge the platform does not have.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-07-the-anatomy-of-a-skill-that-works/#the-meta-lesson","level":2,"title":"The Meta-Lesson","text":"
The previous two posts argued that skills should provide knowledge, not personality; that they should complement the platform, not fight it; that they should grow from project history, not imported templates.
This post adds the missing piece: structure.
A skill without a structure is a wish.
A skill with quality gates, negative triggers, examples, and checklists is a tool: the difference is not the content; it is whether the agent can reliably execute it without human intervention.
Skills are Interfaces
Good skills are not instructions. They are contracts:
They specify preconditions, postconditions, and boundaries.
They show what success looks like and what failure looks like.
They trust the agent's intelligence but do not trust its assumptions.
If You Remember One Thing From This Post...
Skills that work have bones, not just flesh.
Quality gates, negative triggers, examples, and checklists are the skeleton. The domain knowledge is the muscle.
Without the skeleton, the muscle has nothing to attach to.
This post was written during the same session that rewrote all 22 skills. The skill-creator skill was updated to encode these patterns. The meta continues.
","path":["The Anatomy of a Skill That Works"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/","level":1,"title":"Not Everything Is a Skill","text":"
Update (2026-02-11)
As of v0.4.0, ctx consolidated sessions into the journal mechanism. References to /ctx-save, .context/sessions/, and session auto-save in this post reflect the architecture at the time of writing.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#what-a-codebase-audit-taught-me-about-restraint","level":2,"title":"What a Codebase Audit Taught Me About Restraint","text":"
Jose Alekhinne / 2026-02-08
When You Find a Useful Prompt, What Do You Do With It?
My instinct was to make it a skill.
I had just spent three posts explaining how to build skills that work. Naturally, the hammer wanted nails.
Then I looked at what I was holding and realized: this is not a nail.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-audit","level":2,"title":"The Audit","text":"
I wanted to understand how I use ctx:
Where the friction is;
What works, what drifts;
What I keep doing manually that could be automated.
So I wrote a prompt that spawned eight agents to analyze the codebase from different angles:
| Agent | Analysis |
|---|---|
| 1 | Extractable patterns from session history |
| 2 | Documentation drift (godoc, inline comments) |
| 3 | Maintainability (large functions, misplaced code) |
| 4 | Security review (CLI-specific surface) |
| 5 | Blog theme discovery |
| 6 | Roadmap and value opportunities |
| 7 | User-facing documentation gaps |
| 8 | Agent team strategies for future sessions |
The prompt was specific:
read-only agents,
structured output format,
concrete file references,
ranked recommendations.
It ran for about 20 minutes and produced eight Markdown reports.
The reports were good: Not perfect, but actionable.
What mattered was not the speed. It was that the work could be explored without committing to any single outcome.
They surfaced a stale doc.go referencing a subcommand that was never built.
They found 311 build-then-test sequences I could reduce to a single make check.
They identified that 42% of my sessions start with \"do you remember?\", which is a lot of repetition for something a skill could handle.
I had findings. I had recommendations. I had the instinct to automate.
And then... I stopped.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-question","level":2,"title":"The Question","text":"
The natural next step was to wrap the audit prompt as /ctx-audit: a skill you invoke periodically to get a health check. It fits the pattern:
It has a clear trigger.
It produces structured output.
But I had just spent a week writing about what makes skills work, and the criteria I established argued against it.
From The Anatomy of a Skill That Works:
\"A skill without boundaries is just a suggestion.\"
From You Can't Import Expertise:
\"Frameworks travel, expertise doesn't.\"
From Skills That Fight the Platform:
\"You are the guest, not the host.\"
The audit prompt fails all three tests:
| Criterion | Audit prompt | Good skill |
|---|---|---|
| Frequency | Quarterly, maybe | Daily or weekly |
| Stability | Tweaked every time | Consistent invocation |
| Scope | Bespoke, 8 parallel agents | Single focused action |
| Trigger | \"I feel like auditing\" | Clear, repeatable event |
Skills are contracts. Contracts need stable terms.
A prompt I will rewrite every time I use it is not a contract. It is a conversation starter.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#recipes-vs-skills","level":2,"title":"Recipes vs Skills","text":"
The distinction that emerged:
| | Skill | Recipe |
|---|---|---|
| Invocation | /slash-command | Copy-paste from a doc |
| Frequency | High (daily, weekly) | Low (quarterly, ad hoc) |
| Stability | Fixed contract | Adapted each time |
| Scope | One focused action | Multi-step orchestration |
| Audience | The agent | The human (who then prompts) |
| Lives in | .claude/skills/ | hack/ or docs/ |
| Attention cost | Loaded into context on match | Zero until needed |
Recipes can later graduate into skills, but only after repetition proves stability.
That last row matters. Skills consume the attention budget every time the platform considers activating them.
A skill that triggers quarterly but gets evaluated on every prompt is pure waste: attention spent on something that will say \"When NOT to Use: now\" 99% of the time.
Runbooks have zero attention cost. They sit in a Markdown file until a human decides to use them.
The human provides the judgment about timing.
The prompt provides the structure.
The Attention Budget Applies to Skills Too
Every skill in .claude/skills/ is a standing claim on the context window. The platform evaluates skill descriptions against every user prompt to decide whether to activate.
Twenty focused skills are fine. Thirty might be fine. But each one added reduces the headroom available for actual work.
Recipes are skills that opted out of the attention tax.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#what-the-audit-actually-produced","level":2,"title":"What the Audit Actually Produced","text":"
The audit was not wasted. It was a planning exercise that generated concrete tasks:
| Finding | Action |
|---|---|
| 42% of sessions start with memory check | Task: /ctx-remember skill (this one is a skill; it is daily) |
| Auto-save stubs are empty | Task: enhance /ctx-save with richer summaries |
| 311 raw build-test sequences | Task: make check target |
| Stale recall/doc.go lists nonexistent serve | Task: fix the doc.go |
| 120 commit sequences disconnected from context | Task: /ctx-commit workflow |
Some findings became skills;
Some became Makefile targets;
Some became one-line doc fixes.
The audit did not prescribe the artifact type: The findings did.
The audit is the input. Skills are one possible output. Not the only one.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-audit-prompt","level":2,"title":"The Audit Prompt","text":"
Here is the exact prompt I used, for those who are curious.
This is not a template: It worked because it was written against this codebase, at this moment, with specific goals in mind:
I want you to create an agent team to audit this codebase. Save each report as\na separate Markdown file under `./ideas/` (or another directory if you prefer).\n\nUse read-only agents (subagent_type: Explore) for all analyses. No code changes.\n\nFor each report, use this structure:\n- Executive Summary (2-3 sentences + severity table)\n- Findings (grouped, with file:line references)\n- Ranked Recommendations (high/medium/low priority)\n- Methodology (what was examined, how)\n\nKeep reports actionable. Every finding should suggest a concrete fix or next step.\n\n## Analyses to Run\n\n### 1. Extractable Patterns (*session mining*)\nSearch session JSONL files, journal entries, and task archives for repetitive\nmulti-step workflows. Count frequency of bash command sequences, slash command\nusage, and recurring user prompts. Identify patterns that could become skills\nor scripts. Cross-reference with existing skills to find coverage gaps.\nOutput: ranked list of automation opportunities with frequency data.\n\n### 2. Documentation Drift (*godoc + inline*)\nCompare every doc.go against its package's actual exports and behavior. Check\ninline godoc comments on exported functions against their implementations.\nScan for stale TODO/FIXME/HACK comments. Check that package-level comments match\npackage names.\nOutput: drift items ranked by severity with exact file:line references.\n\n### 3. Maintainability\nLook for:\n- functions longer than 80 lines with clear split points\n- switch blocks with more than 5 cases that could be table-driven\n- inline comments like \"step 1\", \"step 2\" that indicate a block wants to be a function\n- files longer than 400 lines\n- flat packages that could benefit from sub-packages\n- functions that appear misplaced in their file\n\nDo NOT flag things that are fine as-is just because they could theoretically\nbe different.\nOutput: concrete refactoring suggestions, not style nitpicks.\n\n### 4. Security Review\nThis is a CLI app. 
Focus on CLI-relevant attack surface, not web OWASP:\n- file path traversal\n- command injection\n- symlink following when writing to `.context/`\n- permission handling\n- sensitive data in outputs\n\nOutput: findings with severity ratings and plausible exploit scenarios.\n\n### 5. Blog Theme Discovery\nRead existing blog posts for style and narrative voice. Analyze git history,\nrecent session discussions, and `DECISIONS.md` for story arcs worth writing about.\nSuggest 3-5 blog post themes with:\n- title\n- angle\n- target audience\n- key commits or sessions to reference\n- a 2-sentence pitch\n\nPrioritize themes that build a coherent narrative across posts.\n\n### 6. Roadmap and Value Opportunities\nBased on current features, recent momentum, and gaps found in other analyses,\nidentify the highest-value improvements. Consider user-facing features,\ndeveloper experience, integration opportunities, and low-hanging fruit.\nOutput: prioritized list with rough effort and impact estimates.\n\n### 7. User-Facing Documentation\nEvaluate README, help text, and user docs. Suggest improvements structured as\nuse-case pages: the problem, how ctx solves it, a typical workflow, and gotchas.\nIdentify gaps where a user would get stuck without reading source code.\nOutput: documentation gaps with suggested page outlines.\n\n### 8. Agent Team Strategies\nBased on the codebase structure, suggest 2-3 agent team configurations for\nupcoming work sessions. For each, include:\n- team composition (roles and agent types)\n- task distribution strategy\n- coordination approach\n- the kinds of work it suits\n
Avoid Generic Advice
Suggestions that are not grounded in a project's actual structure, history, and workflows are worse than useless:
They create false confidence.
If an analysis cannot point to concrete files, commits, sessions, or patterns, it should say \"no finding\" instead of inventing best practices.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#the-deeper-pattern","level":2,"title":"The Deeper Pattern","text":"
This is part of a pattern I keep rediscovering:
The urge to automate is not the same as the need to automate:
The 3:1 ratio taught me that not every session should be a YOLO sprint.
The E/A/R framework taught me that not every template is worth importing. Now the audit is teaching me that not every useful prompt is worth institutionalizing.
The common thread is restraint:
Knowing when to stop.
Recognizing that the cost of automation is not just the effort to build it.
The cost is the ongoing attention tax of maintaining it, the context it consumes, and the false confidence it creates when it drifts.
An entry in hack/runbooks/codebase-audit.md is honest about what it is:
A prompt I wrote once, improved once, and will adapt again next time:
It does not pretend to be a reliable contract.
It does not claim attention budget.
It does not drift silently.
The Automation Instinct
When you find a useful prompt, the instinct is to institutionalize it. Resist.
Ask first: will I use this the same way next time?
If yes, it is a skill. If no, it is a recipe. If you are not sure, it is a recipe until proven otherwise.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-08-not-everything-is-a-skill/#this-mindset-in-the-context-of-ctx","level":2,"title":"This Mindset In the Context of ctx","text":"
ctx is a tool that gives AI agents persistent memory. Its purpose is automation: reducing the friction of context loading, session recall, decision tracking.
But automation has boundaries, and knowing where those boundaries are is as important as pushing them forward.
The skills system is for high-frequency, stable workflows.
The recipes, the journal entries, the session dumps in .context/sessions/: those are for everything else.
Not everything needs to be a slash command. Some things are better as Markdown files you read when you need them.
The goal of ctx is not to automate everything: It is to automate the right things and to make the rest easy to find when you need it.
If You Remember One Thing From This Post...
The best automation decision is sometimes not to automate.
A runbook in a Markdown file costs nothing until you use it.
A skill costs attention on every prompt, whether it fires or not.
Automate the daily. Document the periodic. Forget the rest.
This post was written during the session that produced the codebase audit reports and distilled the prompt into hack/runbooks/codebase-audit.md. The audit generated seven tasks, one Makefile target, and zero new skills. The meta continues.
See also: Code Is Cheap. Judgment Is Not.: the capstone that threads this post's restraint argument into the broader case for why judgment, not production, is the bottleneck.
","path":["Not Everything Is a Skill"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#when-markdown-is-not-a-security-boundary","level":2,"title":"When Markdown Is Not a Security Boundary","text":"
Jose Alekhinne / 2026-02-09
What Happens When Your AI Agent Runs Overnight and Nobody Is Watching?
It follows instructions: That is the problem.
Not because it is malicious. Because it is controllable.
It follows instructions from context, and context can be poisoned.
I was writing the autonomous loops recipe for ctx: the guide for running an AI agent in a loop overnight, unattended, working through tasks while you sleep. The original draft had a tip at the bottom:
Use CONSTITUTION.md for guardrails. Tell the agent \"never delete tests\" and it usually won't.
Then I read that sentence back and realized: that is wishful thinking.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-realization","level":2,"title":"The Realization","text":"
CONSTITUTION.md is a Markdown file. The agent reads it at session start alongside everything else in .context/. It is one source of instructions in a context window that also contains system prompts, project files, conversation history, tool outputs, and whatever the agent fetched from the internet.
An attacker who can inject content into any of those sources can redirect the agent's behavior. And \"attacker\" does not always mean a person with malicious intent. It can be:
| Vector | Example |
|---|---|
| A dependency | A malicious npm package with instructions in its README or error output |
| A URL | Documentation page with embedded adversarial instructions |
| A project file | A contributor who adds instructions to CLAUDE.md or .cursorrules |
| The agent itself | In an autonomous loop, the agent modifies its own config between iterations |
| A command output | An error message containing instructions the agent interprets and follows |
That last vector is the one that kept me up at night (literally!):
In an autonomous loop, the agent modifies files as part of its job.
If it modifies its own configuration files, the next iteration loads the modified config.
No human reviews it.
No diff is shown.
The agent that starts iteration N+1 is running with rules written by iteration N.
The agent can rewrite its own guardrails.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#five-layers-each-with-a-hole","level":2,"title":"Five Layers, Each With a Hole","text":"
That's five nested layers of Swiss cheese. Alone, each of them has large holes. Together, they create a boundary.
What followed was a week of peeling back assumptions:
Every defense I examined had a bypass, and the bypass was always the same shape: the defense was enforced at a level the agent could reach.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
CONSTITUTION.md, the Agent Playbook, system prompts: These tell the agent what to do.
The agent usually follows them.
\"Usually\" is the keyword here.
The hole: Prompt injection:
A sufficiently crafted payload overrides soft instructions.
Long context windows dilute attention on rules stated early.
Edge cases where instructions are ambiguous get resolved in unpredictable ways.
Verdict: Necessary. Not sufficient. Good for the common case. Never trust it as a security boundary.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
Permission allowlists in .claude/settings.local.json:
If rm, curl, sudo, or docker are not in the allowlist, the agent cannot invoke them. This is deterministic.
The application enforces it regardless of what any prompt says.
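As a concrete sketch, a minimal allowlist might look like this. The permissions.allow and permissions.deny fields follow Claude Code's settings schema at the time of writing; the specific entries are assumptions for a Go project and will differ for yours:

```shell
# Write a minimal allowlist. Field names follow Claude Code's settings
# schema (permissions.allow / permissions.deny); the entries themselves
# are assumptions for a Go project like ctx.
mkdir -p .claude
cat > .claude/settings.local.json <<'EOF'
{
  "permissions": {
    "allow": [
      "Bash(go build:*)",
      "Bash(go test:*)",
      "Bash(make check)"
    ],
    "deny": [
      "Bash(curl:*)",
      "Bash(sudo:*)",
      "Bash(rm -rf:*)"
    ]
  }
}
EOF
```

In this schema, deny rules take precedence over allow rules, so a broad allow entry cannot quietly re-enable curl or sudo.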
The hole: The agent can modify the allowlist itself:
It has Write permission.
The allowlist lives in a file.
The agent writes to the file.
The next iteration loads the modified allowlist.
The application enforces the rules, but the application reads the rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-3-os-level-isolation-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Unbypassable)","text":"
This is where the defenses stop having holes in the same shape.
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
| Control | What it stops |
|---|---|
| Dedicated unprivileged user | Privilege escalation, sudo, group-based access |
| Filesystem permissions | Lateral movement to other projects, system config |
| Immutable config files | Self-modification of guardrails between iterations |
Make the agent's instruction files read-only: CLAUDE.md, .claude/settings.local.json, .context/CONSTITUTION.md. Have them owned by a different user, or mark them immutable with chattr +i on Linux.
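A minimal sketch of that lockdown, wrapped in a hypothetical helper (the lockdown name is mine; chattr needs root and a filesystem that supports it):

```shell
# Hypothetical helper: lock one instruction file against the agent.
lockdown() {
  f=$1
  chmod 444 "$f"          # read-only for every user
  # chown root:root "$f"  # stronger: owned by a user the agent is not
  # chattr +i "$f"        # Linux: immutable even for the owner (needs root)
  stat -c '%a' "$f"       # print the resulting mode as a sanity check
}

# lockdown CLAUDE.md
# lockdown .claude/settings.local.json
# lockdown .context/CONSTITUTION.md
```

chmod alone only stops the agent's own user; combine it with different ownership or chattr +i so the agent cannot simply chmod the file back.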
The hole: Actions within the agent's legitimate scope:
If the agent has write access to source code (which it needs), it can introduce vulnerabilities in the code itself.
You cannot prevent this without removing the agent's ability to do its job.
Verdict: Essential. This is the layer that makes Layers 1 and 2 trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data.
It also cannot ingest new instructions mid-loop from external documents, error pages, or hostile content.
# Container with no network\ndocker run --network=none ...\n\n# Or firewall rules allowing only package registries\niptables -A OUTPUT -d registry.npmjs.org -j ACCEPT\niptables -A OUTPUT -d proxy.golang.org -j ACCEPT\niptables -A OUTPUT -j DROP\n
If the agent genuinely does not need the network, disable it entirely.
If it needs to fetch dependencies, allow specific registries and block everything else.
The hole: None, if the agent does not need the network.
The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine.
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
An agent with socket access can spawn sibling containers with full host access, effectively escaping the sandbox.
This is not theoretical: the Docker socket grants root-equivalent access to the host.
Use rootless Docker or Podman to eliminate this escalation path entirely.
Virtual machines are even stronger: The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-pattern","level":2,"title":"The Pattern","text":"
Each layer is straightforward: The strength is in the combination:
| Layer | Implementation | What it stops |
|---|---|---|
| Soft instructions | CONSTITUTION.md | Common mistakes (probabilistic) |
| Application allowlist | .claude/settings.local.json | Unauthorized commands (deterministic within runtime) |
| Immutable config | chattr +i on config files | Self-modification between iterations |
| Unprivileged user | Dedicated user, no sudo | Privilege escalation |
| Container | --cap-drop=ALL --network=none | Host escape, data exfiltration |
| Resource limits | --memory=4g --cpus=2 | Resource exhaustion |
No layer is redundant. Each one catches what the others miss:
The soft instructions handle the 99% case: \"don't delete tests.\"
The allowlist prevents the agent from running commands it should not.
The immutable config prevents the agent from modifying the allowlist.
The unprivileged user prevents the agent from removing the immutable flag.
The container prevents the agent from reaching anything outside its workspace.
The resource limits prevent the agent from consuming all system resources.
Remove any one layer and there is an attack path through the remaining ones.
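Put together, the whole stack fits in one invocation. This is a hedged sketch: the image name (agent-image) and UID are placeholders, and the command is printed for review rather than executed:

```shell
# Build the hardened invocation; review it before running.
cmd="docker run \
  --user 10001:10001 \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network=none \
  --memory=4g --cpus=2 \
  --read-only --tmpfs /tmp \
  --mount type=bind,src=$PWD,dst=/workspace \
  --mount type=bind,src=$PWD/CLAUDE.md,dst=/workspace/CLAUDE.md,readonly \
  -w /workspace \
  agent-image:latest"

echo "$cmd"    # inspect the exact flags first
# eval "$cmd"  # then run it
```

The readonly bind mount of CLAUDE.md layered over the writable workspace mount is what enforces the immutable-config layer inside the container, even when the host file itself is writable.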
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#common-mistakes-i-see","level":2,"title":"Common Mistakes I See","text":"
These are real patterns, not hypotheticals:
\"I'll just use --dangerously-skip-permissions.\" This disables Layer 2 entirely. Without Layers 3 through 5, you have no protection at all. The flag means what it says. If you ever need to, think thrice, you probably don't. But, if you ever need to usee this only use it inside a properly isolated VM (not even a container: a \"VM\").
\"The agent is sandboxed in Docker.\" A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"I reviewed CLAUDE.md, it's fine.\" You reviewed it before the loop started. The agent modified it during iteration 3. Iteration 4 loaded the modified version. Unless the file is immutable, your review is futile.
\"The agent only has access to this one project.\" Does the project directory contain .env files? SSH keys? API tokens? A .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-connection-to-context-engineering","level":2,"title":"The Connection to Context Engineering","text":"
This is the same lesson I keep rediscovering, wearing different clothes.
In The Attention Budget, I wrote about how every token competes for the AI's focus. Security instructions in CONSTITUTION.md are subject to the same budget pressure: if the context window is full of code, error messages, and tool outputs, the security rules stated at the top get diluted.
In Skills That Fight the Platform, I wrote about how custom instructions can conflict with the AI's built-in behavior. Security rules have the same problem: telling an agent \"never run curl\" in Markdown while giving it unrestricted shell access creates a contradiction: The agent resolves contradictions unpredictably. The agent will often pick the path of least resistance toward its objective. And, trust me, agents can get far more creative than the best red-teamer you know.
In You Can't Import Expertise, I wrote about how generic templates fail because they do not encode project-specific knowledge. Generic security advice fails the same way: \"Don't exfiltrate data\" is a category; blocking outbound network access is a control.
The pattern across all of these: Soft instructions are useful for the common case. Hard boundaries are required for security.
Know which is which.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#the-checklist","level":2,"title":"The Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
This checklist lives in the Agent Security reference alongside the full threat model and detailed guidance for each layer.
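A few of the checklist items can be checked mechanically before each run. Here is a hypothetical pre-flight helper (the preflight name and the file patterns are mine; extend them for your stack):

```shell
# Hypothetical pre-flight check: refuse to start the loop if the
# workspace contains obvious credentials. Patterns are assumptions.
preflight() {
  dir=$1
  bad=0
  for pat in '.env' 'id_rsa' '*.pem' 'credentials*'; do
    if find "$dir" -name "$pat" 2>/dev/null | grep -q .; then
      echo "FAIL: $pat found under $dir"
      bad=1
    fi
  done
  [ "$bad" -eq 0 ] && echo "preflight OK: no obvious secrets in $dir"
  return "$bad"
}

# Gate the loop on it:
# preflight "$PWD" || exit 1
```

A nonzero exit aborts the loop before the agent ever sees the workspace, which is the cheapest possible place to catch a leaked .env file.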
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-09-defense-in-depth-securing-ai-agents/#what-changed-in-ctx","level":2,"title":"What Changed in ctx","text":"
The autonomous loops recipe now has a full permissions and isolation section instead of a one-line tip about CONSTITUTION.md. It covers both the explicit allowlist approach and the --dangerously-skip-permissions flag, with honest guidance about when each is appropriate.
It also has an OS-level isolation table that is not optional: unprivileged users, filesystem permissions, containers, VMs, network controls, resource limits, and self-modification prevention.
The Agent Security page consolidates the threat model and defense layers into a standalone reference.
These are not theoretical improvements. They are the minimum responsible guidance for a tool that helps people run AI agents overnight.
If You Remember One Thing From This Post...
Markdown is not a security boundary.
CONSTITUTION.md is a nudge. An allowlist is a gate.
An unprivileged user in a network-isolated container is a wall.
Use all three. Trust only the wall.
This post was written during the session that added permissions, isolation, and self-modification prevention to the autonomous loops recipe. The security guidance started as a single tip and grew into two documents. The meta continues.
","path":["Defense in Depth: Securing AI Agents"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/","level":1,"title":"How Deep Is Too Deep?","text":"","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#when-master-ml-is-the-wrong-next-step","level":2,"title":"When \"Master ML\" Is the Wrong Next Step","text":"
Jose Alekhinne / 2026-02-12
Have You Ever Felt Like You Should Understand More of the Stack Beneath You?
You can talk about transformers at a whiteboard.
You can explain attention to a colleague.
You can use agentic AI to ship real software.
But somewhere in the back of your mind, there is a voice:
\"Maybe I should go deeper. Maybe I need to master machine learning.\"
I had that voice for months.
Then I spent a week debugging an agent failure that had nothing to do with ML theory and everything to do with knowing which abstraction was leaking.
This post is about when depth compounds and (more importantly) when it does not.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-hierarchy-nobody-questions","level":2,"title":"The Hierarchy Nobody Questions","text":"
There is an implicit stack most people carry around when thinking about AI:
| Layer | What Lives Here |
|---|---|
| Agentic AI | Autonomous loops, tool use, multi-step reasoning |
| Generative AI | Text, image, code generation |
| Deep Learning | Transformer architectures, training at scale |
| Neural Networks | Backpropagation, gradient descent |
| Machine Learning | Statistical learning, optimization |
| Classical AI | Search, planning, symbolic reasoning |
At some point down that stack, you hit a comfortable plateau: the layer where you can hold a conversation but not debug a failure.
The instinctive response is to go deeper.
But that instinct hides a more important question:
\"Does depth still compound when the abstractions above you are moving hyper-exponentially?\"
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-honest-observation","level":2,"title":"The Honest Observation","text":"
If you squint hard enough, a large chunk of modern ML intuition collapses into older fields:
| ML Concept | Older Field |
|---|---|
| Gradient descent | Numerical optimization |
| Backpropagation | Reverse-mode autodiff |
| Loss landscapes | Non-convex optimization |
| Generalization | Statistics |
| Scaling laws | Asymptotics and information theory |
Nothing here is uniquely \"AI\".
Most of this math predates the term deep learning. In some cases, by decades.
So what changed?
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#same-tools-different-regime","level":2,"title":"Same Tools, Different Regime","text":"
The mistake is assuming this is a new theory problem: It is not.
It is a new operating regime.
Classical numerical methods were developed under assumptions like:
Manageable dimensionality
Reasonably well-conditioned objectives
Losses that actually represent the goal
Modern ML violates all three: On purpose.
Today's models operate with millions to trillions of parameters, wildly underdetermined systems, and objective functions we know are wrong but optimize anyway.
It is complete and utter madness!
At this scale, familiar concepts warp:
What we call \"local minima\" are overwhelmingly saddle points in high-dimensional spaces.
Noise stops being noise and starts becoming structure.
Overfitting can coexist with generalization.
Bigger models outperform \"better\" ones.
The math did not change: The phase did.
This is less numerical analysis and more statistical physics: Same equations, but behavior dominated by phase transitions and emergent structure.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#why-scaling-laws-feel-alien","level":2,"title":"Why Scaling Laws Feel Alien","text":"
In classical statistics, asymptotics describe what happens eventually.
In modern ML, scaling laws describe where you can operate today.
They do not say \"given enough time, things converge\".
They say \"cross this threshold and behavior qualitatively changes\".
This is why dumb architectures plus scale beat clever ones.
Why small theoretical gains disappear under data.
Why \"just make it bigger\", ironically, keeps working longer than it should.
That is not a triumph of ML theory: It is a property of high-dimensional systems under loose objectives.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#where-depth-actually-pays-off","level":2,"title":"Where Depth Actually Pays Off","text":"
This reframes the original question.
You do not need depth because this is \"AI\".
You need depth where failure modes propagate upward.
I learned this building ctx: The agent failures I have spent the most time debugging were never about the model's architecture.
They were about:
Misplaced trust: The model was confident. The output was wrong. Knowing when confidence and correctness diverge is not something you learn from a textbook. You learn it from watching patterns across hundreds of sessions.
Distribution shift: The model performed well on common patterns and fell apart on edge cases specific to this project. Recognizing that shift before it compounds requires understanding why generalization has limits, not just that it does.
Error accumulation: In a single prompt, model quirks are tolerable. In autonomous loops running overnight, they compound. A small bias in how the model interprets instructions becomes a large drift by iteration 20.
Scale hiding errors: The model's raw capability masked problems that only surfaced under specific conditions. More parameters did not fix the issue. They just made the failure mode rarer and harder to reproduce.
This is the kind of depth that compounds. Not deriving backprop. But, understanding when correct math produces misleading intuition.
","path":["How Deep Is Too Deep?"],"tags":[]},{"location":"blog/2026-02-12-how-deep-is-too-deep/#the-connection-to-context-engineering","level":2,"title":"The Connection to Context Engineering","text":"
This is the same pattern I keep finding at different altitudes.
In \"The Attention Budget\", I wrote about how dumping everything into the context window degrades the model's focus. The fix was not a better model: It was better curation: load less, load the right things, preserve signal per token.
In \"Skills That Fight the Platform\", I wrote about how custom instructions can conflict with the model's built-in behavior. The fix was not deeper ML knowledge: It was an understanding that the model already has judgment and that you should extend it, not override it.
In "You Can't Import Expertise", I wrote about how generic templates fail because they do not encode project-specific knowledge. A consolidation skill with eight Rust-based analysis dimensions was mostly noise for a Go project. The fix was not a better template; it was growing expertise from this project's own history.
In every case, the answer was not "go deeper into ML".
The answer was knowing which abstraction was leaking and fixing it at the right layer.
## Agentic Systems Are Not an ML Problem
The mistake is assuming agent failures originate where the model was trained, rather than where it is deployed.
Agentic AI is a systems problem under chaotic uncertainty:
Feedback loops between the agent and its environment;
Error accumulation across iterations;
Brittle representations that break outside training distribution;
Misplaced trust in outputs that look correct.
In short-lived interactions, model quirks are tolerable. In long-running autonomous loops, however, they compound.
That is where shallow understanding becomes expensive.
But the understanding you need is not about optimizer internals.
It is about:
| What Matters | What Does Not (for Most Practitioners) |
| --- | --- |
| Why gradient descent fails in specific regimes | How to derive it from scratch |
| When memorization masquerades as reasoning | The formal definition of VC dimension |
| Recognizing distribution shift before it compounds | Hand-tuning learning rate schedules |
| Predicting when scale hides errors instead of fixing them | Chasing theoretical purity divorced from practice |
The depth that matters is diagnostic, not theoretical.
## The Real Answer
Not turtles all the way down.
Go deep enough to:
Diagnose failures instead of cargo-culting fixes;
Reason about uncertainty instead of trusting confidence;
Design guardrails that align with model behavior, not hope.
Stop before:
Hand-deriving gradients for the sake of it;
Obsessing over optimizer internals you will never touch;
Chasing theoretical purity divorced from the scale you actually operate at.
This is not about mastering ML.
It is about knowing which abstractions you can safely trust and which ones leak.
Hint: Any useful abstraction almost certainly leaks.
## A Practical Litmus Test
If a failure occurs and your instinct is to:
Add more prompt text: abstraction leak above
Add retries or heuristics: error accumulation
Change the model: scale masking
Reach for ML theory: you are probably (but not always) going too deep
The right depth is the shallowest layer where the failure becomes predictable.
## The ctx Lesson
Every design decision in ctx is downstream of this principle.
The attention budget exists because the model's internal attention mechanism has real limits: You do not need to understand the math of softmax to build around it. But you do need to understand that more context is not always better and that attention density degrades with scale.
The skill system exists because the model's built-in behavior is already good: You do not need to understand RLHF to build effective skills. But you do need to understand that the model already has judgment and your skills should teach it things it does not know, not override how it thinks.
Defense in depth exists because soft instructions are probabilistic: You do not need to understand the transformer architecture to know that a Markdown file is not a security boundary. But you do need to understand that the model follows instructions from context, and context can be poisoned.
In each case, the useful depth was one or two layers below the abstraction I was working at: Not at the bottom of the stack.
The boundary between useful understanding and academic exercise is where your failure modes live.
## Closing Thought
Most modern AI systems do not fail because the math is wrong.
They fail because we apply correct math in the wrong regime, then build autonomous systems on top of it.
Understanding that boundary, not crossing it blindly, is where depth still compounds.
And that is a far more useful form of expertise than memorizing another loss function.
If You Remember One Thing From This Post...
Go deep enough to diagnose your failures. Stop before you are solving problems that do not propagate to your layer.
The abstractions below you are not sacred. But neither are they irrelevant.
The useful depth is wherever your failure modes live. Usually one or two layers down, not at the bottom.
This post started as a note about whether I should take an ML course. The answer turned out to be "no, but understand why not". The meta continues.
# Before Context Windows, We Had Bouncers

## The Reset Problem
## Stateless Protocol, Stateful Life
IRC is minimal:
A TCP connection.
A nickname.
A channel.
A stream of lines.
When the connection drops, you literally disappear from the graph.
The protocol is stateless; human systems are not.
So you:
Reconnect;
Ask what you missed;
Scroll;
Reconstruct.
The machine forgets; you pay.
## The Bouncer Pattern
A bouncer is a daemon that remains connected when you do not:
It holds your seat;
It buffers what you missed;
It keeps your identity online.
ZNC is one such bouncer.
With ZNC:
Your client does not connect to IRC;
It connects to ZNC;
ZNC connects upstream.
Client sessions become ephemeral.
Presence becomes infrastructural.
ZNC is tmux for IRC
Close your laptop.
ZNC remains.
Switch devices.
ZNC persists.
This is not convenience; this is continuity.
## Presence Without Flapping
With a bouncer:
Closing your client does not emit PART.
Reopening does not emit JOIN.
You do not flap in and out of existence.
From the channel's perspective, you remain.
From your perspective, history accumulates.
Buffers persist;
Identity persists;
Context persists.
This pattern predates AI.
## Before LLM Context Windows
An LLM session without memory is IRC without a bouncer:
Close the window.
Start over.
Re-explain intent.
Rehydrate context.
That is friction.
This Walks and Talks like ctx
Context engineering moves memory out of sessions and into infrastructure.
ZNC does this for IRC.
ctx does this for agents.
Same principle:
Volatile interface.
Persistent substrate.
Different fabric.
## Minimal Architecture
My setup is intentionally boring:
A small $5 VPS.
ZNC installed.
TLS enabled.
Firewall restricted.
Then:
ZNC connects to Libera.Chat.
SASL authentication lives inside ZNC.
Buffers are stored on disk.
My client connects to my VPS, not the network.
The commands do not matter; the boundaries do:
Authentication in infrastructure, not in the client;
Memory server-side, not in scrollback;
Presence decoupled from activity.
Everything else is configuration.
## Platform Memory
Yes, I know, it is 2026:
Discord stores history;
Slack stores history;
Even X, the dumpster fire on gasoline, stores history.
HOWEVER, they own your substrate.
Running a bouncer is quiet sovereignty:
Logs are mine.
Presence is continuous.
State does not reset because I closed a tab.
Small acts compound.
## Signal Density
Primitive systems select for builders.
Consistent presence in small rooms compounds reputation.
Quiet compounding outperforms viral spikes.
## Infrastructure as Cognition
ZNC is not interesting because it is retro; it is interesting because it models a principle:
Stateless protocols require stateful wrappers;
Volatile interfaces require durable memory;
Human systems require continuity.
Distilled:
Humans require context.
Before context windows, we had bouncers.
Before AI memory files, we had buffers.
Continuity is not a feature; it is a design decision.
## Build It
If you want the actual setup (VPS, ZNC, TLS, SASL, firewall...) there is a step-by-step runbook:
Persistent IRC Presence with ZNC.
## MOTD
When my client connects to my bouncer, it prints:
```
// /     ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
// \     Copyright 2026-present Context contributors.
//       SPDX-License-Identifier: Apache-2.0
```
See also: Context as Infrastructure -- the post that takes this observation to its conclusion: stateless protocols need stateful wrappers, and AI sessions need persistent filesystems.
# Parallel Agents with Git Worktrees

## The Backlog Problem
Jose Alekhinne / 2026-02-14
What Do You Do With 30 Open Tasks?
You could work through them one at a time.
One agent, one branch, one commit stream.
Or you could ask: which of these don't touch each other?
I had 30 open tasks in TASKS.md. Some were docs. Some were a new encryption package. Some were test coverage for a stable module. Some were blog posts.
They had almost zero file overlap.
Running one agent at a time meant serial execution on work that was fundamentally parallel:
I was bottlenecking on me, not on the machine.
## The Insight: File Overlap Is the Constraint
This is not a scheduling problem: It's a conflict avoidance problem.
Two agents can work simultaneously on the same codebase if and only if they don't touch the same files. The moment they do, you get merge conflicts: And merge conflicts on AI-generated code are expensive because the human has to arbitrate choices they didn't make.
So the question becomes:
"Can you partition your backlog into non-overlapping tracks?"
For ctx, the answer was obvious:
| Track | Touches | Tasks |
| --- | --- | --- |
| work/docs | docs/, hack/ | Blog posts, recipes, runbooks |
| work/pad | internal/cli/pad/, specs | Scratchpad encryption, CLI, tests |
| work/tests | internal/cli/recall/ | Recall test coverage |
Three tracks. Near-zero overlap. Three agents.
## Git Worktrees: The Mechanism
git has a feature that most people don't use: worktrees.
A worktree is a second (or third, or fourth) working directory that shares the same .git object database as your main checkout.
Each worktree has its own branch, its own index, its own working tree. But they all share history, refs, and objects.
This is cheaper than three clones. And because they share objects, git merge afterwards is fast: It's a local operation on shared data.
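The mechanism is small enough to show end to end. A minimal sketch in a throwaway repo (the `ctx`, `ctx-docs`, and `ctx-pad` paths and the `work/*` branch names are illustrative, not the exact project layout):

```shell
set -eu
work=$(mktemp -d); cd "$work"

# Toy repo standing in for the main checkout
git init -q ctx && cd ctx
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init

# Sibling worktrees, one branch per track, all sharing one .git object store
git worktree add -q ../ctx-docs -b work/docs
git worktree add -q ../ctx-pad  -b work/pad

git worktree list   # main checkout plus the two siblings
```

Each sibling directory is a full checkout with its own index and branch, but `du` on the three of them shows only one object database being paid for.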
## The Setup
The workflow I landed on:
1. Group tasks by blast radius.
Read TASKS.md. For each pending task, estimate which files and directories it touches. Group tasks that share files into the same track. Tasks with no overlap go into separate tracks.
This is the part that requires human judgment:
An agent can propose groupings, but you need to verify that the boundaries are real. A task that says "update docs" but actually touches Go code will poison a docs track.
2. Create worktrees as sibling directories.
Not subdirectories: Siblings.
If your main checkout is at ~/WORKSPACE/ctx, worktrees go at ~/WORKSPACE/ctx-docs, ~/WORKSPACE/ctx-pad, etc.
Why siblings? Because some tools (and some agents) walk up the directory tree looking for .git. A worktree inside the main checkout confuses them.
3. Launch one agent in each worktree.
Each agent gets a full working copy with .context/ intact. It reads the same TASKS.md, the same DECISIONS.md, the same CONVENTIONS.md. It knows the full project state. It just works on a different slice.
4. Do NOT run ctx init in worktrees.
This is the gotcha. The .context/ directory is tracked in git. Running ctx init in a worktree would overwrite shared context files: Wiping decisions, learnings, and tasks that belong to the whole project.
The worktree already has everything it needs. Leave it alone.
## What Actually Happened
I ran three agents for about 40 minutes. Here is roughly what each track produced:
work/docs: Parallel worktrees recipe, blog post edits, recipe index reorganization, IRC recipe moved from docs/ to hack/.
work/pad: ctx pad show subcommand, --append and --prepend flags on ctx pad edit, spec updates, 28 new test functions.
work/tests: Recall test coverage, edge case tests.
Merging took about five minutes. Two of the three merges were clean.
The third had a conflict in TASKS.md:
both the docs track and the pad track had marked different tasks as [x].
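The merge-and-retire step is mechanical. A toy sketch in a throwaway repo (branch, file, and identity names are illustrative):

```shell
set -eu
work=$(mktemp -d); cd "$work"
git init -q ctx && cd ctx
g() { git -c user.email=ci@example.com -c user.name=ci "$@"; }
g commit -q --allow-empty -m init

# One track does some work on its own branch in a sibling worktree
git worktree add -q ../ctx-docs -b work/docs
( cd ../ctx-docs && echo recipe > recipe.md && g add recipe.md && g commit -qm "docs: add recipe" )

# Back in the main checkout: merge the track, then retire worktree and branch
g merge -q --no-edit work/docs
git worktree remove ../ctx-docs
git branch -qd work/docs
```

Because the worktrees share one object database, the merge is a local operation; no fetch, no network, no re-transfer of objects.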
## The TASKS.md Conflict
This deserves its own section because it will happen every time.
When two agents work in parallel, they both read TASKS.md at the start and mark tasks complete as they go. When you merge, git sees two branches that modified the same file differently.
The resolution is always the same: accept all completions from both sides. No task should go from [x] back to [ ]. The merge is additive.
This is one of those conflicts that sounds scary but is trivially mechanical: You are not arbitrating design decisions; you are combining two checklists.
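The rule ("a task is done if either side marked it done") is simple enough to script. A toy sketch, assuming one `- [ ]`/`- [x]` task per line; the file names and task texts are invented for the example:

```shell
# Two branches' versions of the same checklist
cat > ours.md <<'EOF'
- [x] docs recipe
- [ ] pad show subcommand
EOF
cat > theirs.md <<'EOF'
- [ ] docs recipe
- [x] pad show subcommand
EOF

# Union of completions: a task stays [x] if either side completed it
awk -F'\] ' '
  { done[$2] = done[$2] || ($0 ~ /\[x\]/) }
  END { for (t in done) printf "- [%s] %s\n", (done[t] ? "x" : " "), t }
' ours.md theirs.md | sort
```

git can also be told to resolve such files additively with a `merge=union` gitattribute, at the cost of occasionally keeping both versions of a conflicting line.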
## Limits
3-4 worktrees, maximum.
I tried four once: By the time I merged the third track, the fourth had drifted far enough that its changes needed rebasing.
The merge complexity grows faster than the parallelism benefit.
Three is the sweet spot:
Two is conservative but safe;
Four is possible if the tracks are truly independent;
Anything more than four, you are in the danger zone.
Group by directory, not by priority.
It is tempting to put all the high-priority tasks in one track: Don't.
Two high-priority tasks that touch the same files must be in the same track, regardless of urgency. The constraint is file overlap, not importance.
Commit frequently.
Smaller commits make merge conflicts easier to resolve. An agent that writes 500 lines in a single commit is harder to merge than one that commits every logical step.
Name tracks by concern.
work/docs and work/pad tell you what's happening;
work/track-1 and work/track-2 tell you nothing.
## The Pattern
This is the same pattern that shows up everywhere in ctx:
The attention budget taught me that you can't dump everything into one context window. You have to partition, prioritize, and load selectively.
Worktrees are the same principle applied to execution: You can't dump every task into one agent's workstream. You have to partition by blast radius, assign selectively, and merge deliberately.
The codebase audit that generated these 30 tasks used eight parallel agents for analysis. Worktrees let me use parallel agents for implementation. Same coordination pattern, different artifact.
And the IRC bouncer post from earlier today argued that stateless protocols need stateful wrappers. Worktrees are the same: git branches are stateless forks; .context/ is the stateful wrapper that gives each agent the project's full memory.
## Should This Be a Skill?
I asked myself the same question I asked about the codebase audit: should this be a /ctx-worktree skill?
This time the answer was a resounding "yes":
Unlike the audit prompt (which I tweak every time and run every other week) the worktree workflow is:
| Criterion | Worktree workflow | Codebase audit |
| --- | --- | --- |
| Frequency | Weekly | Quarterly |
| Stability | Same steps every time | Tweaked every time |
| Scope | Mechanical, bounded | Bespoke, 8 agents |
| Trigger | Large backlog | "I feel like auditing" |
The commands are mechanical: git worktree add, git worktree remove, branch naming, safety checks. This is exactly what skills are for: stable contracts for repetitive operations.
Ergo, /ctx-worktree exists.
It enforces the 4-worktree limit, creates sibling directories, uses work/ branch prefixes, and reminds you not to run ctx init in worktrees.
## The Takeaway
Serial execution is the default. But serial is not always necessary.
If your backlog partitions cleanly by file overlap, you can multiply your throughput with nothing more exotic than git worktree and a second terminal window.
The hard part is not the git commands; it is the discipline:
Grouping by blast radius instead of priority;
Accepting that TASKS.md will conflict;
And knowing when three tracks is enough.
If You Remember One Thing From This Post...
Partition by blast radius, not by priority.
Two tasks that touch the same files belong in the same track, no matter how important the other one is.
The constraint is file overlap. Everything else is scheduling.
The practical setup (skill invocation, worktree creation, merge workflow, and cleanup) lives in the recipe: Parallel Agent Development with Git Worktrees.
# ctx v0.3.0: The Discipline Release

## When the Ratio of Polish to Features Is 3:1, You Know Something Changed
Jose Alekhinne / February 15, 2026
What Does a Release Look Like When Most of the Work Is Invisible?
No new headline feature. No architectural pivot. No rewrite.
Just 35+ documentation and quality commits against ~15 feature commits... and somehow, the tool feels like it grew up overnight.
Six days separate v0.2.0 from v0.3.0.
Measured by calendar time, it is nothing. Measured by what changed in how the project operates, it is the most significant release yet.
v0.1.0 was the prototype;
v0.2.0 was the archaeology release: making the past accessible;
v0.3.0 is the discipline release: the one that turned best practices into enforcement, suggestions into structure, and a collection of commands into a system of skills.
The Release Window
February 1‒February 7, 2026
From the v0.2.0 tag to commit 2227f99.
78 files changed in the migration commit alone.
## The Migration: Commands to Skills
The largest single change was the migration from .claude/commands/*.md to .claude/skills/*/SKILL.md.
This was not a rename: It was a rethinking of how AI agents discover and execute project-specific workflows.
| Aspect | Commands (before) | Skills (after) |
| --- | --- | --- |
| Structure | Flat files in one directory | Directory-per-skill with SKILL.md |
| Description | Optional, often vague | Required, doubles as activation trigger |
| Quality gates | None | "Before X-ing" pre-flight checklist |
| Negative triggers | None | "When NOT to Use" in every skill |
| Examples | Rare | Good/bad pairs in every skill |
| Average length | ~15 lines | ~80 lines |
The description field became the single most important line in each skill. In the old system, descriptions were titles. In the new system, they are activation conditions: The text the platform reads to decide whether to surface a skill for a given prompt.
A description that says "Show context summary" activates too broadly or not at all. A description that says "Show context summary. Use at session start or when unclear about current project state" activates at the right moment.
78 files changed. 1,915 insertions. Not because the skills got bloated, but because they got specific.
## The Skill Sweep
After the structural migration, every skill was rewritten in a single session: All 21 of them.
The rewrite was guided by a pattern that emerged during the process itself: a repeatable anatomy that effective skills share regardless of their purpose:
Before X-ing: Pre-flight checks that prevent premature execution
When to Use: Positive triggers that narrow activation
When NOT to Use: Negative triggers that prevent misuse
Usage Examples: Invocation patterns the agent can pattern-match
Quality Checklist: Verification before claiming completion
The Anatomy of a Skill That Works post covers the details. What matters for the release story is the result:
Zero skills with quality gates became twenty-one;
Zero skills with negative triggers became twenty-one;
Three skills with examples became twenty-one.
The Skill Trilogy as Design Spec
The three blog posts written during this window:
Skills That Fight the Platform,
You Can't Import Expertise,
and The Anatomy of a Skill That Works...
... were not retrospective documentation. They were written during the rewrite, and the lessons fed back into the skills as they were being built.
The blog was the design document.
The skills were the implementation.
## The Consolidation Sweep
The unglamorous work. The kind you only appreciate when you try to change something later and it just works.
| What | Why It Matters |
| --- | --- |
| Constants consolidation | Magic strings replaced with semantic constants |
| Variable deshadowing | Eliminated subtle scoping bugs |
| File splits | Modules that were doing too much, broken apart |
| Godoc standardization | Every exported function documented to convention |
This is the work that doesn't get a changelog entry but makes every future commit easier. When a new contributor (human or AI) reads the codebase, they find consistent patterns instead of accumulated drift.
The consolidation was not an afterthought. It was scheduled deliberately, with the same priority as features: The 3:1 ratio that emerged during v0.2.0 development became an explicit practice:
Three feature sessions;
One consolidation session.
## The E/A/R Framework
On February 4th, we adopted the E/A/R classification as the official standard for evaluating skills:
| Category | Meaning | Target |
| --- | --- | --- |
| Expert | Knowledge Claude does not have | >70% |
| Activation | When/how to trigger | ~20% |
| Redundant | What Claude already knows | <10% |
This came from reviewing approximately 30 external skill files and discovering that most were redundant with Claude's built-in system prompt. Only about 20% had salvageable content, and even those yielded just a few heuristics each.
The E/A/R framework gave us a concrete, testable criterion:
A good skill is Expert knowledge minus what Claude already knows.
If more than 10% of a skill restates platform defaults, it is creating noise, not signal.
Every skill in v0.3.0 was evaluated against this framework. Several were deleted. The survivors are leaner and more focused.
## Backup and Monitoring Infrastructure
A tool that manages your project's memory needs ops maturity.
v0.3.0 added two pieces of infrastructure that reflect this:
Backup staleness hook: A UserPromptSubmit hook that checks whether the last .context/ backup is more than two days old. If it is, and the SMB mount is available, it reminds the user. No cron job running when nobody is working. No redundant backups when nothing has changed.
Context size checkpoint: A PreToolUse hook that estimates current context window usage and warns when the session is getting heavy. This hooks into the attention budget philosophy: Degradation is expected, but it should be visible.
Both hooks use $CLAUDE_PROJECT_DIR instead of hardcoded paths, a migration triggered by a username rename that broke every absolute path in the hook configuration. That migration (replacing /home/user/... with "$CLAUDE_PROJECT_DIR"/.claude/hooks/...) was one of those changes that seems trivial but prevents an entire category of future failures.
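The staleness check itself needs nothing beyond `find`. A toy version of the idea (the directory and file names are placeholders, and the real hook also checks the SMB mount first; `touch -d`/`-mtime` semantics here are GNU):

```shell
set -eu
BACKUP_DIR=$(mktemp -d)   # stand-in for the real backup target
# Simulate a backup taken four days ago (GNU touch)
touch -d '4 days ago' "$BACKUP_DIR/context-backup.tar.gz"

# -mtime +2 matches files last modified more than ~48 hours ago
stale=$(find "$BACKUP_DIR" -name '*.tar.gz' -mtime +2)
if [ -n "$stale" ]; then
  echo '{"systemMessage": "Last .context/ backup is more than two days old."}'
fi
```

Because the check runs from a UserPromptSubmit hook, it fires only when someone is actually working, which is exactly the property the cron-job alternative lacks.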
## The Numbers

| Metric | v0.2.0 | v0.3.0 |
| --- | --- | --- |
| Skills (was "commands") | 11 | 21 |
| Skills with quality gates | 0 | 21 |
| Skills with "When NOT to Use" | 0 | 21 |
| Average skill body | ~15 lines | ~80 lines |
| Hooks using $CLAUDE_PROJECT_DIR | 0 | All |
| Documentation commits | -- | 35+ |
| Feature/fix commits | -- | ~15 |
That ratio (35+ documentation and quality commits to ~15 feature commits) is the defining characteristic of this release:
This release is not a failure to ship features.
It is the deliberate choice to make the existing features reliable.
## What v0.3.0 Means
v0.1.0 asked: "Can we give AI persistent memory?"
v0.2.0 asked: "Can we make that memory accessible to humans too?"
v0.3.0 asks a different question: "Can we make the quality self-enforcing?"
The answer is not a feature: It is a practice:
Skills with quality gates enforce pre-flight checks.
Negative triggers prevent misuse without human intervention.
The E/A/R framework ensures skills contain signal, not noise.
Consolidation sessions are scheduled, not improvised.
Hook infrastructure makes degradation visible.
Discipline is not the absence of velocity. It is the infrastructure that makes velocity sustainable.
## What Comes Next
The skill system is now mature enough to support real workflows without constant human correction. The hooks infrastructure is portable and resilient. The consolidation practice is documented and repeatable.
The next chapter is about what you build on top of discipline:
Multi-agent coordination;
Deeper integration patterns;
And the question of whether context management is a tool concern or an infrastructure concern.
But those are future posts.
This one is about the release that proved polish is not the opposite of progress. It is what turns a prototype into a product.
The Discipline Release
v0.1.0 shipped features.
v0.2.0 shipped archaeology.
v0.3.0 shipped the habits that make everything else trustworthy.
The most important code in this release is the code that prevents bad code from shipping.
This post was drafted using /ctx-blog with access to the full git history between v0.2.0 and v0.3.0, decision logs, learning logs, and the session files from the skill rewrite window. The meta continues.
# Eight Ways a Hook Can Talk

## When Your Warning Disappears
Jose Alekhinne / 2026-02-15
I had a backup warning that nobody ever saw.
The hook was correct: It detected stale backups, formatted a nice message, and output it as {"systemMessage": "..."}. The problem wasn't detection. The problem was delivery. The agent absorbed the information, processed it internally, and never told the user.
Meanwhile, a different hook (the journal reminder) worked perfectly every time. Users saw the reminder, ran the commands, and the backlog stayed manageable. Same hook event (UserPromptSubmit), same project, completely different outcomes.
The difference was one line:
```
IMPORTANT: Relay this journal reminder to the user VERBATIM
before answering their question.
```
That explicit instruction is what makes VERBATIM relay a pattern, not just a formatting choice. And once I saw it as a pattern, I started seeing others.
## The Audit
I looked at every hook in ctx: Eight shell scripts across three hook events. And I found five distinct output patterns already in use, plus three more that the existing hooks were reaching for but hadn't quite articulated.
The patterns form a spectrum based on a single question:
"Who decides what the user sees?"
At one end, the hook decides everything (hard gate: the agent literally cannot proceed). At the other end, the hook is invisible (silent side-effect: nobody knows it ran). In between, there is a range of negotiation between hook, agent, and the user.
Here's the full spectrum:
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#1-hard-gate","level":3,"title":"1. Hard Gate","text":"
{\"decision\": \"block\", \"reason\": \"Use ctx from PATH, not ./ctx\"}\n
The nuclear option: The agent's tool call is rejected before it executes.
This is Claude Code's first-class PreToolUse mechanism: The hook returns JSON with decision: block and the agent gets an error with the reason.
Use this for invariants: Constitution rules, security boundaries, things that must never happen. I use it to enforce PATH-based ctx invocation, block sudo, and require explicit approval for git push.
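The whole pattern fits in a few lines of shell. This is a sketch: the JSON field names and matching logic are illustrative, not the exact Claude Code PreToolUse schema or ctx's real hook.

```shell
# Hard-gate sketch: inspect a tool call and reject it by printing a
# block decision. JSON shapes are illustrative, not the exact
# Claude Code PreToolUse schema.
gate() {
  # $1: raw tool-call JSON from the hook event
  case "$1" in
    *"./ctx"*)
      printf '{"decision": "block", "reason": "Use ctx from PATH, not ./ctx"}\n'
      ;;
    *)
      : # no output: the tool call proceeds untouched
      ;;
  esac
}

gate '{"command": "./ctx status"}'
```

Silence is the success path: a hard gate only speaks when it refuses.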
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#2-verbatim-relay","level":3,"title":"2. VERBATIM Relay","text":"
IMPORTANT: Relay this warning to the user VERBATIM before answering.\n┌─ Journal Reminder ─────────────────────────────\n│ You have 12 sessions not yet imported.\n│ ctx recall import --all\n└────────────────────────────────────────────────\n
The instruction is the pattern. Without \"Relay VERBATIM,\" agents tend to absorb information into their internal reasoning and never surface it. The explicit instruction changes the behavior from \"I know about this\" to \"I must tell the user about this.\"
I use this for actionable reminders:
Unexported journal entries;
Stale backups;
Context capacity warnings...
...things the user should see regardless of what they asked.
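As a sketch, the pattern is just a guard plus a heredoc; the pending count and message body here are illustrative, not ctx's actual implementation.

```shell
# VERBATIM relay sketch for a UserPromptSubmit hook. The pending count
# and wording are illustrative, not ctx's actual logic.
remind() {
  pending=$1
  [ "$pending" -gt 0 ] || return 0   # nothing pending: stay silent
  cat <<EOF
IMPORTANT: Relay this journal reminder to the user VERBATIM
before answering their question.
You have $pending sessions not yet imported: ctx recall import --all
EOF
}

remind 12
```

The first line of output carries the pattern; everything after it is payload.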
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#3-agent-directive","level":3,"title":"3. Agent Directive","text":"
┌─ Persistence Checkpoint (prompt #25) ───────────\n│ No context files updated in 15+ prompts.\n│ Have you discovered learnings worth persisting?\n└──────────────────────────────────────────────────\n
A nudge, not a command. The hook tells the agent something; the agent decides what (if anything) to tell the user. This is right for behavioral nudges: \"you haven't saved context in a while\" doesn't need to be relayed verbatim, but the agent should consider acting on it.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#4-silent-context-injection","level":3,"title":"4. Silent Context Injection","text":"
ctx agent --budget 4000 2>/dev/null || true\n
Pure background enrichment. The agent's context window gets project information injected on every tool call, with no visible output. Neither the agent nor the user sees the hook fire, but the agent makes better decisions because of the context.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#5-silent-side-effect","level":3,"title":"5. Silent Side-Effect","text":"
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
Do work, say nothing. Temp file cleanup on session end. Logging. Marker file management. The action is the entire point; no one needs to know.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-patterns-we-dont-have-yet","level":2,"title":"The Patterns We Don't Have Yet","text":"
Three more patterns emerged from the gaps in the existing hooks.
Conditional relay: \"Relay this, but only if the user's question is about X.\" This pattern avoids noise when the warning isn't relevant. It's more fragile (depends on agent judgment) but less annoying.
Suggested action: \"Here's a problem, and here's the exact command to fix it. Ask the user before running it.\" This pattern goes beyond a nudge by giving the agent a concrete proposal, but still requires human approval.
Escalating severity: INFO gets absorbed silently. WARN gets mentioned at the next natural pause. CRITICAL gets the VERBATIM treatment. This pattern introduces a protocol for hooks that produce output at different urgency levels, so they don't all compete for the user's attention.
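A sketch of what that protocol could look like; the level names and routing are a proposal, not something implemented in ctx today.

```shell
# Escalating-severity sketch: one helper routes a message to the right
# output pattern by urgency. Levels and wording are a proposal, not
# ctx's implementation.
emit() {
  level=$1; msg=$2
  case "$level" in
    INFO) : ;;  # absorbed silently: context only, never relayed
    WARN) printf 'Note for the agent: %s\n' "$msg" ;;               # agent directive
    CRIT) printf 'IMPORTANT: Relay to the user VERBATIM: %s\n' "$msg" ;;
  esac
}

emit CRIT "Backups are 15 days stale."
```

One entry point, three delivery channels: the hook author picks a level instead of re-deciding the delivery mechanics each time.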
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-eight-ways-a-hook-can-talk/#the-principle","level":2,"title":"The Principle","text":"
Hooks are the boundary between your environment and the agent's reasoning.
A hook that detects a problem but can't communicate it effectively is the same as no hook at all.
The format of your output is a design decision with real consequences:
Use a hard gate and the agent can't proceed (good for invariants, frustrating for false positives)
Use VERBATIM relay and the user will see it (good for reminders, noisy if overused)
Use an agent directive and the agent might act (good for nudges, unreliable for critical warnings)
Use silent injection and nobody knows (good for enrichment, invisible when it breaks)
Choose deliberately. And, when in doubt, write the word VERBATIM.
The full pattern catalog with decision flowchart and implementation examples is in the Hook Output Patterns recipe.
","path":["Eight Ways a Hook Can Talk"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/","level":1,"title":"Version Numbers Are Lagging Indicators","text":"","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#why-ctxs-journal-site-runs-on-a-v0021-tool","level":2,"title":"Why ctx's Journal Site Runs on a v0.0.21 Tool","text":"
Jose Alekhinne / 2026-02-15
Would You Ship Production Infrastructure on a v0.0.21 Dependency?
Most engineers wouldn't. Version numbers signal maturity. Pre-1.0 means unstable API, missing features, risk.
But version numbers tell you where a project has been. They say nothing about where it's going.
I just bet ctx's entire journal site on a tool that hasn't hit v0.1.0.
Here's why I'd do it again.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-problem","level":2,"title":"The Problem","text":"
When v0.2.0 shipped the journal system, the pipeline was clear:
Export sessions to Markdown;
Enrich them with YAML frontmatter;
And render them into something browsable.
The first two steps were solved; the third needed a tool.
The journal entries are standard Markdown with YAML frontmatter, tables, and fenced code blocks. That is the entire format:
No JSX;
No shortcodes;
No custom templating.
Just Markdown rendered well.
The requirements are modest:
Read a configuration file (such as mkdocs.yml);
Render Markdown with extensions (admonitions, tabs, tables);
Search;
Handle 100+ files without choking on incremental rebuilds;
Look good out of the box;
Not lock me in.
The obvious candidates were as follows:
| Tool | Language | Strengths | Pain Points |
|---|---|---|---|
| Hugo | Go | Blazing fast, mature | Templating is painful; Go templates fight you on anything non-trivial |
| Astro | JS/TS | Modern, flexible | JS ecosystem overhead; overkill for a docs site |
| MkDocs + Material | Python | Beautiful defaults, massive community (22k+ stars) | Slow incremental rebuilds on large sites; limited extensibility model |
| Zensical | Python | Built to fix MkDocs' limits; 4-5x faster rebuilds | v0.0.21; module system not yet shipped |
The instinct was Hugo. Same language as ctx. Fast. Well-established.
But instinct is not analysis. I picked the one with the lowest version number.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-evaluation","level":2,"title":"The Evaluation","text":"
Here is what I actually evaluated, in order:
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#1-the-team","level":3,"title":"1. The Team","text":"
Zensical is built by squidfunk: The same person behind Material for MkDocs, the most popular MkDocs theme with 22,000+ stars. It powers documentation sites for projects across every language and framework.
This is not someone learning how to build static site generators.
This is someone who spent years understanding exactly where MkDocs breaks and decided to fix it from the ground up.
They did not build zensical because MkDocs was bad: They built it because MkDocs hit a ceiling:
Incremental rebuilds: 4-5x faster during serve. When you have hundreds of journal entries and you edit one, the difference between \"rebuild everything\" and \"rebuild this page\" is the difference between a usable workflow and a frustrating one.
Large site performance: Specifically designed for tens of thousands of pages. The journal grows with every session. A tool that slows down as content accumulates is a tool you will eventually replace.
A proven team starting fresh is more predictable than an unproven team at v3.0.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#2-the-architecture","level":3,"title":"2. The Architecture","text":"
Zensical is investing in a Rust-based Markdown parser with CommonMark support. That signals something about the team's priorities:
Performance foundations first; features second.
ctx's journal will grow:
Every exported session adds files.
Every enrichment pass adds metadata.
Choosing a tool that gets slower as you add content means choosing to migrate later.
Choosing one built for scale means the decision holds.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#3-the-migration-path","level":3,"title":"3. The Migration Path","text":"
Zensical reads mkdocs.yml natively. If it doesn't work out, I can move back to MkDocs + Material with zero content changes:
The Markdown is standard;
The frontmatter is standard;
The configuration is compatible.
This is the infrastructure pattern again: The same way ZNC decouples presence from the client, zensical decouples rendering from the generator:
The Markdown is yours.
The frontmatter is standard YAML.
The configuration is MkDocs-compatible.
You are not locked into anything except your own content.
No lock-in is not a feature: It's a design philosophy:
It's the same reason ctx uses plain Markdown files in .context/ instead of a database: the format should outlive the tool.
Lock-in Is the Real Risk, Not Version Numbers
A mature tool with a proprietary format is riskier than a young tool with a standard one. Version numbers measure time invested. Portability measures respect for the user.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#4-the-dependency-tree","level":3,"title":"4. The Dependency Tree","text":"
Here is what pip install zensical actually pulls in:
click
Markdown
Pygments
pymdown-extensions
PyYAML
Only five dependencies. All well-known. No framework bloat. No bundler. No transpiler. No node_modules black hole.
3k GitHub stars at v0.0.21 is strong early traction for a pre-1.0 project.
The dependency tree is thin: No bloat.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#5-the-fit","level":3,"title":"5. The Fit","text":"
This is the same principle behind the attention budget: do not overfit the tool to hypothetical requirements. The right amount of capability is the minimum needed for the current task.
Hugo is a powerful static site generator. It is also a powerful templating engine, a powerful asset pipeline, and a powerful taxonomy system. For rendering Markdown journals, that power is overhead:
It is the complexity you pay for but never use.
ctx's journal files are standard Markdown with YAML frontmatter, tables, and fenced code blocks. That is exactly the sweet spot Zensical inherits from Material for MkDocs:
No custom plugins needed;
No special syntax;
No templating gymnastics.
The requirements match the capabilities: Not the capabilities that are promised, but the ones that exist today, at v0.0.21.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-caveat","level":2,"title":"The Caveat","text":"
It would be dishonest not to mention what's missing.
The module system for third-party extensions opens in early 2026.
If ctx ever needs custom plugins (for example, auto-linking session IDs, rendering special journal metadata, etc.) that infrastructure isn't there yet.
The installation experience is rough:
We discovered this firsthand: pip install zensical often fails on macOS (system Python stubs, Homebrew's PEP 668 restrictions). The answer is pipx, which creates an isolated environment with the correct Python version automatically.
That kind of friction is typical for young Python tooling, and it is documented in the Getting Started guide.
And 3,000 stars at v0.0.21 is strong early traction, but it's still early: The community is small. When something breaks, you're reading source code, not documentation.
These are real costs. I chose to pay them because the alternative costs are higher.
For example:
Hugo's templating pain would cost me time on every site change.
Astro's JS ecosystem would add complexity I don't need.
MkDocs would work today but hit scaling walls tomorrow.
Zensical's costs are front-loaded and shrinking.
The others compound.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-evaluation-framework","level":2,"title":"The Evaluation Framework","text":"
For anyone facing a similar choice, here is the framework that emerged:
| Signal | What It Tells You | Weight |
|---|---|---|
| Team track record | Whether the architecture will be sound | High |
| Migration path | Whether you can leave if wrong | High |
| Current fit | Whether it solves your problem today | High |
| Dependency tree | How much complexity you're inheriting | Medium |
| Version number | How long the project has existed | Low |
| Star count | Community interest (not quality) | Low |
| Feature list | What's possible (not what you need) | Low |
The bottom three are the metrics most engineers optimize for.
The top four are the ones that predict whether you'll still be happy with the choice in a year.
Features You Don't Need Are Not Free
Every feature in a dependency is code you inherit but don't control.
A tool with 200 features where you use 5 means 195 features worth of surface area for bugs, breaking changes, and security issues that have nothing to do with your use case.
Fit is the inverse of feature count.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-15-why-zensical/#the-broader-pattern","level":2,"title":"The Broader Pattern","text":"
This is part of a theme I keep encountering in this project:
Leading indicators beat lagging indicators.
| Domain | Lagging Indicator | Leading Indicator |
|---|---|---|
| Tooling | Version number, star count | Team track record, architecture |
| Code quality | Test coverage percentage | Whether tests catch real bugs |
| Context persistence | Number of files in .context/ | Whether the AI makes fewer mistakes |
| Skills | Number of skills created | Whether each skill fires at the right time |
| Consolidation | Lines of code refactored | Whether drift stops accumulating |
Version numbers, star counts, coverage percentages, file counts...
...these are all measures of effort expended.
They say nothing about value delivered.
The question is never \"how mature is this tool?\"
The question is \"does this tool's trajectory intersect with my needs?\"
Zensical's trajectory:
A proven team fixing known problems,
in a proven architecture,
with a standard format,
and no lock-in.
ctx's needs:
Render standard Markdown into a browsable site, at scale, without complexity.
The intersection is clean; the version number is noise.
This is the same kind of decision that shows up throughout ctx:
Skills that fight the platform taught that the best integration extends existing behavior, not replaces it.
You can't import expertise taught that tools should grow from your project's actual needs, not from feature checklists.
Context as infrastructure argues that the format should outlive the tool; and, zensical honors that principle by reading standard Markdown and standard MkDocs configuration.
If You Remember One Thing From This Post...
Version numbers measure where a project has been.
The team and the architecture tell you where it's going.
A v0.0.21 tool built by the right team on the right foundations is a safer bet than a v5.0 tool that doesn't fit your problem.
Bet on trajectories, not timestamps.
This post started as an evaluation note in ideas/ and a separate decision log. The analysis held up. The two merged into one. The meta continues.
","path":["Version Numbers Are Lagging Indicators"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/","level":1,"title":"ctx v0.6.0: The Integration Release","text":"","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#two-commands-to-persistent-memory","level":2,"title":"Two Commands to Persistent Memory","text":"
Jose Alekhinne / February 16, 2026
What Changed?
ctx is now a Claude Code plugin. Two commands, no build step.
Before v0.6.0, wiring ctx into Claude Code meant you had to:
Clone the repo and build from source;
Understand which shell scripts called which Go commands;
Hope nothing broke when Claude Code updated its hook format.
v0.6.0 ends that era: ctx ships as a Claude Marketplace plugin:
Hooks and skills served directly from source, installed with a single command, updated by pulling the repo. The tool that gives AI persistent memory is now as easy to install as the AI itself.
But the plugin conversion was not just a packaging change: It was the forcing function that rewrote every shell hook in Go, eliminated the jq dependency, enabled go test coverage for hook logic, and made distribution a solved problem.
When you fix how something ships, you end up fixing how it is built.
The Release Window
February 15-February 16, 2026
From the v0.3.0 tag to commit a3178bc:
109 commits.
334 files changed.
Version jumped from 0.3.0 to 0.6.0 to signal the magnitude.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#before-six-shell-scripts-and-a-prayer","level":2,"title":"Before: Six Shell Scripts and a Prayer","text":"
v0.3.0 had six hook scripts. Each was a Bash file that shelled out to ctx subcommands, parsed JSON with jq, and wired itself into Claude Code's hook system via .claude/hooks/:
jq was a hard dependency: No jq, no hooks. macOS ships without it.
No test coverage: Shell scripts were tested manually or not at all.
Fragile deployment: ctx init had to scaffold .claude/hooks/ and .claude/skills/ with the right paths, permissions, and structure.
Version drift: Users who installed once never got hook updates unless they re-ran ctx init.
The shell scripts were the right choice for prototyping. They were the wrong choice for distribution.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#after-one-plugin-zero-shell-scripts","level":2,"title":"After: One Plugin, Zero Shell Scripts","text":"
v0.6.0 replaces all six scripts with ctx system subcommands compiled into the binary:
| Shell Script | Go Subcommand |
|---|---|
| check-context-size.sh | ctx system check-context-size |
| check-persistence.sh | ctx system check-persistence |
| check-journal.sh | ctx system check-journal |
| post-commit.sh | ctx system post-commit |
| block-non-path-ctx.sh | ctx system block-non-path-ctx |
| cleanup-tmp.sh | ctx system cleanup-tmp |
The plugin's hooks.json wires them to Claude Code events:
{\n \"PreToolUse\": [\n {\"matcher\": \"Bash\", \"command\": \"ctx system block-non-path-ctx\"},\n {\"matcher\": \".*\", \"command\": \"ctx agent --budget 4000\"}\n ],\n \"PostToolUse\": [\n {\"matcher\": \"Bash\", \"command\": \"ctx system post-commit\"}\n ],\n \"UserPromptSubmit\": [\n {\"command\": \"ctx system check-context-size\"},\n {\"command\": \"ctx system check-persistence\"},\n {\"command\": \"ctx system check-journal\"}\n ],\n \"SessionEnd\": [\n {\"command\": \"ctx system cleanup-tmp\"}\n ]\n}\n
No jq. No shell scripts. No .claude/hooks/ directory to manage.
The hooks are Go functions with tests, compiled into the same binary you already have.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#the-plugin-model","level":2,"title":"The Plugin Model","text":"
The ctx plugin lives at .claude-plugin/marketplace.json in the repo.
Claude Code's marketplace system handles discovery and installation:
Skills are served directly from internal/assets/claude/skills/; there is no build step, no make plugin, no generated artifacts.
This means:
Install is two commands: Not \"clone, build, copy, configure.\"
Updates are automatic: Pull the repo; the plugin reads from source.
Skills and hooks are versioned together: No drift between what the CLI expects and what the plugin provides.
ctx init is tool-agnostic: It creates .context/ and nothing else. No .claude/ scaffolding, no assumptions about which AI tool you use.
That last point matters:
Before v0.6.0, ctx init tried to set up Claude Code integration as part of initialization. That coupled the context system to a specific tool.
Now, ctx init gives you persistent context. The plugin gives you Claude Code integration. They compose; they don't depend.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#beyond-the-plugin-what-else-shipped","level":2,"title":"Beyond the Plugin: What Else Shipped","text":"
The plugin conversion dominated the release, but 109 commits covered more ground.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#obsidian-vault-export","level":3,"title":"Obsidian Vault Export","text":"
ctx journal obsidian\n
Generates a full Obsidian vault from enriched journal entries: wikilinks, MOC (Map of Content) pages, and graph-optimized cross-linking. If you already use Obsidian for notes, your AI session history now lives alongside everything else.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#encrypted-scratchpad","level":3,"title":"Encrypted Scratchpad","text":"
ctx pad edit \"DATABASE_URL=postgres://...\"\nctx pad show\n
AES-256-GCM encrypted storage for sensitive one-liners.
The encrypted blob commits to git; the key stays in .gitignore.
This is useful for connection strings, API keys, and other values that need to travel with the project without appearing in plaintext.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#security-hardening","level":3,"title":"Security Hardening","text":"
Three medium-severity findings from a security audit are now closed:
| Finding | Fix |
|---|---|
| Path traversal via --context-dir | Boundary validation: operations cannot escape project root (M-1) |
| Symlink following in .context/ | Lstat() check before every file read/write (M-2) |
| Predictable temp file paths | User-specific temp directory under $XDG_RUNTIME_DIR (M-3) |
Plus a new /sanitize-permissions skill that audits settings.local.json for overly broad Bash permissions.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#hooks-that-know-when-to-be-quiet","level":3,"title":"Hooks That Know When to Be Quiet","text":"
A subtle but important fix: hooks now no-op before ctx init has run.
Previously, a fresh clone with no .context/ would trigger hook errors on every prompt. Now, hooks detect the absence of a context directory and exit silently. Similarly, ctx init treats a .context/ directory containing only logs as uninitialized and skips the --overwrite prompt.
Small changes. Large reduction in friction for new users.
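The guard itself is tiny. A sketch, with the caveat that the real check inside ctx may differ in detail:

```shell
# Quiet-before-init sketch: hooks no-op when the project has no
# .context/ directory. The echo stands in for real hook logic.
run_hook() {
  # $1: project directory
  if [ ! -d "$1/.context" ]; then
    return 0                 # fresh clone: exit silently, no errors
  fi
  echo "context present: running hook checks"
}
```

Exiting zero with no output is the key: any error text here would have surfaced on every prompt in an uninitialized project.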
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#the-numbers","level":2,"title":"The Numbers","text":"Metric v0.3.0 v0.6.0 Skills 21 25 Shell hook scripts 6 0 Go system subcommands 0 6 External dependencies (hooks) jq, bash none Lines of Go ~14,000 ~37,000 Plugin install commands n/a 2 Security findings (open) 3 0 ctx init creates .claude/ yes no
The line count tripled. Most of that is documentation site HTML, Obsidian export logic, and the scratchpad encryption module.
The core CLI grew modestly; the ecosystem around it grew substantially.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#what-does-v060-mean-for-ctx","level":2,"title":"What Does v0.6.0 Mean for ctx?","text":"
v0.1.0 asked: \"Can we give AI persistent memory?\"
v0.2.0 asked: \"Can we make that memory accessible to humans too?\"
v0.3.0 asked: \"Can we make the quality self-enforcing?\"
v0.6.0 asks: \"Can someone else actually use this?\"
A tool that requires cloning a repo, building from source, and manually wiring hooks into the right directories is a tool for its author.
A tool that installs with two commands from a marketplace is a tool for everyone.
The version jumped from 0.3.0 to 0.6.0 because the delta is not incremental. The shell-to-Go rewrite, the plugin model, the security hardening, and the tool-agnostic init together change what ctx is: not a different tool, but a tool that is finally ready to leave the workshop.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-16-ctx-v0.6.0-the-integration-release/#what-comes-next","level":2,"title":"What Comes Next","text":"
The plugin model opens the door to distribution patterns that were not possible before. Marketplace discovery means new users find ctx without reading a README. Plugin updates mean existing users get improvements without rebuilding.
The next chapter is about what happens when persistent context is easy to install: Adoption patterns, multi-project workflows, and whether the .context/ convention can become infrastructure that other tools build on.
But those are future posts.
This one is about the release that turned a developer tool into a distributable product: two commands, zero shell scripts, and a presence on the Claude Marketplace.
v0.3.0 shipped discipline. v0.6.0 shipped the front door.
The most important code in this release is the code you never have to copy.
This post was drafted using /ctx-blog-changelog with access to the full git history between v0.3.0 and v0.6.0, release notes, and the plugin conversion PR. The meta continues.
","path":["ctx v0.6.0: The Integration Release"],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/","level":1,"title":"Code Is Cheap. Judgment Is Not.","text":"","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#why-ai-replaces-effort-not-expertise","level":2,"title":"Why AI Replaces Effort, Not Expertise","text":"
Jose Alekhinne / February 17, 2026
Are You Worried About AI Taking Your Job?
You might be confusing the thing that's cheap with the thing that's valuable.
I keep seeing the same conversation: Engineers, designers, writers: all asking the same question with the same dread:
\"What happens when AI can do what I do?\"
The question is wrong:
AI does not replace workers;
AI replaces unstructured effort.
The distinction matters, and everything I have learned building ctx reinforces it.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-three-confusions","level":2,"title":"The Three Confusions","text":"
People who feel doomed by AI usually confuse three things:
| People confuse... | With... |
|---|---|
| Effort | Value |
| Typing | Thinking |
| Production | Judgment |
Effort is time spent.
Value is the outcome that time produces.
They are not the same; they never were.
AI just makes the gap impossible to ignore.
Typing is mechanical: Thinking is directional.
An AI can type faster than any human. Yet, it cannot decide what to type without someone framing the problem, sequencing the work, and evaluating the result.
Production is making artifacts. Judgment is knowing:
which artifacts to make,
in what order,
to what standard,
and when to stop.
AI floods the system with production capacity; it does not flood the system with judgment.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#code-is-nothing","level":2,"title":"Code Is Nothing","text":"
This sounds provocative until you internalize it:
Code is cheap. Artifacts are cheap.
An AI can generate a thousand lines of working code in literal minutes:
It can scaffold a project, write tests, build a CI pipeline, draft documentation. The raw production of software artifacts is no longer the bottleneck.
So, what is not cheap?
Taste: knowing what belongs and what does not
Framing: turning a vague goal into a concrete problem
Sequencing: deciding what to build first and why
Fanning out: breaking work into parallel streams that converge
Acceptance criteria: defining what \"done\" looks like before starting
Judgment: the thousand small decisions that separate code that works from code that lasts
These are the skills that direct production: Human skills.
Not because AI is incapable of learning them, but because they require something AI does not have:
temporal accountability for generated outcomes.
That is, you cannot hold an AI accountable for the $#!% it generated three months ago. A human, on the other hand, will always be accountable.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-evidence-from-building-ctx","level":2,"title":"The Evidence From Building ctx","text":"
I did not arrive at this conclusion theoretically.
I arrived at it by building a tool with an AI agent for three weeks and watching exactly where a human touch mattered.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#yolo-mode-proved-production-is-cheap","level":3,"title":"YOLO Mode Proved Production Is Cheap","text":"
In Building ctx Using ctx, I documented the YOLO phase: auto-accept everything, let the AI ship features at full speed. It produced 14 commands in a week. Impressive output.
The code worked. The architecture drifted. Magic strings accumulated. Conventions diverged. The AI was producing at a pace no human could match, and every artifact it produced was a small bet that nobody was evaluating.
Production without judgment is not velocity. It is debt accumulation at breakneck speed.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-31-ratio-proved-judgment-has-a-cadence","level":3,"title":"The 3:1 Ratio Proved Judgment Has a Cadence","text":"
In The 3:1 Ratio, the git history told the story:
Three sessions of forward momentum followed by one session of deliberate consolidation. The consolidation session is where the human applies judgment: reviewing what the AI built, catching drift, realigning conventions.
The AI does the refactoring. The human decides what to refactor and when to stop.
Without the human, the AI will refactor forever, improving things that do not matter and missing things that do.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-attention-budget-proved-framing-is-scarce","level":3,"title":"The Attention Budget Proved Framing Is Scarce","text":"
In The Attention Budget, I explained why more context makes AI worse, not better. Every token competes for attention: Dump everything in and the AI sees nothing clearly.
This is a framing problem: The human's job is to decide what the AI should focus on: what to include, what to exclude, what to emphasize.
ctx agent --budget 4000 is not just a CLI flag: It is a forcing function for human judgment about relevance.
The AI processes. The human curates.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#skills-design-proved-taste-is-load-bearing","level":3,"title":"Skills Design Proved Taste Is Load-Bearing","text":"
The skill trilogy (You Can't Import Expertise, The Anatomy of a Skill That Works) showed that the difference between a useful skill and a useless one is not craftsmanship:
It is taste.
A well-crafted skill with the wrong focus is worse than no skill at all: It consumes the attention budget with generic advice while the project-specific problems go unchecked.
The E/A/R framework (Expert, Activation, Redundant) is a judgment tool: The AI cannot apply it to itself. The human evaluates what the AI already knows, what it needs to be told, and what is noise.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#automation-discipline-proved-restraint-is-a-skill","level":3,"title":"Automation Discipline Proved Restraint Is a Skill","text":"
In Not Everything Is a Skill, the lesson was that the urge to automate is not the need to automate. A useful prompt does not automatically deserve to become a slash command.
The human applies judgment about frequency, stability, and attention cost.
The AI can build the skill. Only the human can decide whether it should exist.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#defense-in-depth-proved-boundaries-require-judgment","level":3,"title":"Defense in Depth Proved Boundaries Require Judgment","text":"
In Defense in Depth, the entire security model for unattended AI agents came down to: markdown is not a security boundary. Telling an AI \"don't do bad things\" is production (of instructions). Setting up an unprivileged user in a network-isolated container is judgment (about risk).
The AI follows instructions. The human decides which instructions are enforceable and which are \"wishful thinking\".
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#parallel-agents-proved-scale-amplifies-the-gap","level":3,"title":"Parallel Agents Proved Scale Amplifies the Gap","text":"
In Parallel Agents and Merge Debt, the lesson was that multiplying agents multiplies output. But it also multiplies the need for judgment:
Five agents running in parallel produce five sessions of drift in one clock hour. The human who can frame tasks cleanly, define narrow acceptance criteria, and evaluate results quickly becomes the limiting factor.
More agents do not reduce the need for judgment. They increase it.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-two-reactions","level":2,"title":"The Two Reactions","text":"
When AI floods the system with cheap output, two things happen:
Those who only produce: panic. If your value proposition is \"I write code,\" and an AI writes code faster, cheaper, and at higher volume, then the math is unfavorable. Not because AI took your job, but because your job was never the code. It was the judgment around the code, and you were not exercising it.
Those who direct: accelerate. If your value proposition is \"I know what to build, in what order, to what standard,\" then AI is the best thing that ever happened to you. Production is no longer the bottleneck: Your ability to frame, sequence, evaluate, and course-correct is now the limiting factor on throughput.
The gap between these two is not talent: It is the awareness of where the value lives.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#what-this-means-in-practice","level":2,"title":"What This Means in Practice","text":"
If you are an engineer reading this, the actionable insight is not \"learn prompt engineering\" or \"master AI tools.\" It is:
Get better at the things AI cannot do.
| AI does this well | You need to do this |
|---|---|
| Generate code | Frame the problem |
| Write tests | Define acceptance criteria |
| Scaffold projects | Sequence the work |
| Fix bugs from stack traces | Evaluate tradeoffs |
| Produce volume | Exercise restraint |
| Follow instructions | Decide which instructions matter |
The skills in the right column are not new. They are the same skills that have always separated senior engineers from junior ones.
AI did not create the distinction; it just made it load-bearing.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#if-anything-i-feel-empowered","level":2,"title":"If Anything, I Feel Empowered","text":"
I will end with something personal.
I am not worried: I am empowered.
Before ctx, I could think faster than I could produce:
Ideas sat in a queue.
The bottleneck was always \"I know what to build, but building it takes too long.\"
Now the bottleneck is gone. Poof!
Production is cheap.
The queue is clearing.
The limiting factor is how fast I can think, not how fast I can type.
That is not a threat: That is the best force multiplier I've ever had.
The people who feel threatened are confusing the accelerator for the replacement:
AI does not replace the conductor; it gives them a bigger orchestra.
If You Remember One Thing From This Post...
Code is cheap. Judgment is not.
AI replaces unstructured effort, not directed expertise. The skills that matter now are the same skills that have always mattered: taste, framing, sequencing, and the discipline to stop.
The difference is that now, for the first time, those skills are the only bottleneck left.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-code-is-cheap-judgment-is-not/#the-arc","level":2,"title":"The Arc","text":"
This post is a retrospective. It synthesizes the thread running through every previous entry in this blog:
Building ctx Using ctx showed that production without direction creates debt
Refactoring with Intent showed that slowing down is not the opposite of progress
The Attention Budget showed that curation outweighs volume
The skill trilogy showed that taste determines whether a tool helps or hinders
Not Everything Is a Skill showed that restraint is a skill in itself
Defense in Depth showed that instructions are not boundaries
The 3:1 Ratio showed that judgment has a schedule
Parallel Agents showed that scale amplifies the gap between production and judgment
Context as Infrastructure showed that the system you build for context is infrastructure, not conversation
From YOLO mode to defense in depth, the pattern is the same:
Production is the easy part;
Judgment is the hard part;
AI changed the ratio, not the rule.
The evidence is drawn from three weeks of building ctx with AI assistance, the decisions recorded in DECISIONS.md, the learnings captured in LEARNINGS.md, and the git history that tracks where the human mattered and where the AI ran unsupervised.
See also: When a System Starts Explaining Itself -- what happens after the arc: the first field notes from the moment the system starts compounding in someone else's hands.
","path":["Code Is Cheap. Judgment Is Not."],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/","level":1,"title":"Context as Infrastructure","text":"","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#why-your-ai-needs-a-filesystem-not-a-prompt","level":2,"title":"Why Your AI Needs a Filesystem, Not a Prompt","text":"
Jose Alekhinne / February 17, 2026
Where does your AI's knowledge live between sessions?
If the answer is \"in a prompt I paste at the start,\" you are treating context as a consumable. Something assembled, used, and discarded.
What if you treated it as infrastructure instead?
This post synthesizes a thread that has been running through every ctx blog post, from the origin story to the attention budget to the discipline release. The thread is this: context is not a prompt problem. It is an infrastructure problem. And the tools we build for it should look more like filesystems than clipboard managers.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-prompt-paradigm","level":2,"title":"The Prompt Paradigm","text":"
Most AI-assisted development treats context as ephemeral:
Start a session.
Paste your system prompt, your conventions, your current task.
Work.
Session ends. Everything evaporates.
Next session: paste again.
This works for short interactions. For sustained development (where decisions compound over days and weeks) it fails in three ways:
It does not persist: A decision made on Tuesday must be re-explained on Wednesday. A learning captured in one session is invisible to the next.
It does not scale: As the project grows, the \"paste everything\" approach hits the context window ceiling. You start triaging what to include, often cutting exactly the context that would have prevented the next mistake.
It does not compose: A system prompt is a monolith. You cannot load part of it, update one section, or share a subset with a different workflow. It is all or nothing.
The Copy-Paste Tax
Every session that starts with pasting a prompt is paying a tax:
The human time to assemble the context, the risk of forgetting something, and the silent assumption that yesterday's prompt is still accurate today.
Over 70+ sessions, that tax compounds into a significant maintenance burden: One that most developers absorb without questioning it.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-infrastructure-paradigm","level":2,"title":"The Infrastructure Paradigm","text":"
ctx takes a different approach:
Context is not assembled per-session; it is maintained as persistent files in a .context/ directory:
.context/\n CONSTITUTION.md # Inviolable rules\n TASKS.md # Current work items\n CONVENTIONS.md # Code patterns and standards\n DECISIONS.md # Architectural choices with rationale\n LEARNINGS.md # Gotchas and lessons learned\n ARCHITECTURE.md # System structure\n GLOSSARY.md # Domain terminology\n AGENT_PLAYBOOK.md # Operating manual for agents\n journal/ # Enriched session summaries\n archive/ # Completed work, cold storage\n
Each file has a single purpose;
Each can be loaded independently;
Each persists across sessions, tools, and team members.
This is not a novel idea. It is the same idea behind every piece of infrastructure software engineers already use:
The parallel is not metaphorical. Context files are infrastructure:
They are versioned (git tracks them);
They are structured (Markdown with conventions);
They have schemas (required fields for decisions and learnings);
And they have lifecycle management (archiving, compaction, indexing).
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#separation-of-concerns","level":2,"title":"Separation of Concerns","text":"
The most important design decision in ctx is not any individual feature. It is the separation of context into distinct files with distinct purposes.
A single CONTEXT.md file would be simpler to implement. It would also be impossible to maintain.
Why? Because different types of context have different lifecycles:
| Context Type | Changes | Read By | Load When |
|---|---|---|---|
| Constitution | Rarely | Every session | Always |
| Tasks | Every session | Session start | Always |
| Conventions | Weekly | Before coding | When writing code |
| Decisions | When decided | When questioning | When revisiting |
| Learnings | When learned | When stuck | When debugging |
| Journal | Every session | Rarely | When investigating |
Loading everything into every session wastes the attention budget on context that is irrelevant to the current task. Loading nothing forces the AI to operate blind.
Separation of concerns allows progressive disclosure:
Load the minimum that matters for this moment, with the option to load more when needed.
# Session start: load the essentials\nctx agent --budget 4000\n\n# Deep investigation: load everything\ncat .context/DECISIONS.md\ncat .context/journal/2026-02-05-*.md\n
The filesystem is the index. File names, directory structure, and timestamps encode relevance. The AI does not need to read every file; it needs to know where to look.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-two-tier-persistence-model","level":2,"title":"The Two-Tier Persistence Model","text":"
ctx uses two tiers of persistence, and the distinction is architectural:
The curated tier is what the AI sees at session start. It is optimized for signal density:
Structured entries,
Indexed tables,
Reverse-chronological order (newest first, so the most relevant content survives truncation).
The full dump tier is for humans and for deep investigation. It contains everything: Enriched journals, archived tasks...
It is never autoloaded because its volume would destroy attention density.
This two-tier model is analogous to how traditional systems separate hot and cold storage:
The hot path (curated context) is optimized for read performance (measured not in milliseconds, but in tokens consumed per unit of useful information).
The cold path (journal) is optimized for completeness.
Nothing Is Ever Truly Lost
The full dump tier means that context does not need to be perfect: It just needs to be findable.
A decision that was not captured in DECISIONS.md can be recovered from the session transcript where it was discussed.
A learning that was not formalized can be found in the journal entry from that day.
The curated tier is the fast path: The full dump tier is the safety net.
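As a concrete sketch of the safety-net tier in action (the journal file name and search term below are invented for illustration; the sample entry is created inline so the sketch is self-contained):

```shell
# A decision that never made it into DECISIONS.md can still be recovered
# from the journal. Create a sample entry so the sketch runs anywhere:
mkdir -p .context/journal
echo "Discussed switching to PostgreSQL for ACID guarantees" \
  > .context/journal/2026-02-05-session.md

# The filesystem is the index: search the full-dump tier for the topic.
# -r recurse, -l print matching file names, -i case-insensitive.
grep -rli "postgresql" .context/journal/
```

From the matching file names, you open the enriched journal entry directly; no curation was needed for the context to remain findable.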
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#decision-records-as-first-class-citizens","level":2,"title":"Decision Records as First-Class Citizens","text":"
One of the patterns that emerged from ctx's own development is the power of structured decision records.
v0.1.0 allowed adding decisions as one-liners:
ctx add decision \"Use PostgreSQL\"\n
v0.2.0 enforced structure:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a reliable database for user data\" \\\n --rationale \"ACID compliance, team familiarity\" \\\n --consequence \"Need connection pooling, team training\"\n
The difference is not cosmetic:
A one-liner decision teaches the AI what was decided.
A structured decision teaches it why; and why is what prevents the AI from unknowingly reversing the decision in a future session.
This is infrastructure thinking:
Decisions are not notes. They are records with required fields, just like database rows have schemas.
The enforcement exists because incomplete records are worse than no records: They create false confidence that the context is captured when it is not.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-ide-is-the-interface-decision","level":2,"title":"The \"IDE Is the Interface\" Decision","text":"
Early in ctx's development, there was a temptation to build a custom UI: a web dashboard for browsing sessions, editing context, viewing analytics.
The decision was no. The IDE is the interface.
# This is the ctx \"UI\":\ncode .context/\n
This decision was not about minimalism for its own sake. It was about recognizing that .context/ files are just files; and files have a mature, well-understood infrastructure:
Version control: git diff .context/DECISIONS.md shows exactly what changed and when.
Search: Your IDE's full-text search works across all context files.
Editing: Markdown in any editor, with preview, spell check, and syntax highlighting.
Collaboration: Pull requests on context files work the same as pull requests on code.
Building a custom UI would have meant maintaining a parallel infrastructure that duplicates what every IDE already provides:
It would have introduced its own bugs, its own update cycle, and its own learning curve.
The filesystem is not a limitation: It is the most mature, most composable, most portable infrastructure available.
Context Files in Git
Because .context/ lives in the repository, context changes are part of the commit history.
A decision made in commit abc123 is as traceable as a code change in the same commit.
This is not possible with prompt-based context, which exists outside version control entirely.
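A minimal sketch of that traceability (the throwaway repo, file contents, and commit message are invented for illustration; in a real project, .context/ is committed alongside the code):

```shell
# Self-contained demo repo; in practice .context/ lives in your project repo.
git init -q ctx-demo && cd ctx-demo
mkdir -p .context
echo "- Use PostgreSQL (ACID compliance, team familiarity)" > .context/DECISIONS.md
git add .context/DECISIONS.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "decision: use PostgreSQL"

# The decision's history is ordinary git history:
git log --oneline -- .context/DECISIONS.md
```

The same commands that audit code changes (`git log`, `git diff`, `git blame`) now audit context changes, with zero extra tooling.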
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#progressive-disclosure-for-ai","level":2,"title":"Progressive Disclosure for AI","text":"
The concept of progressive disclosure comes from human interface design: show the user the minimum needed to make progress, with the option to drill deeper.
ctx applies the same principle to AI context:
| Level | What the AI Sees | Token Cost | When |
|---|---|---|---|
| Level 0 | ctx status (one-line summary) | ~100 | Quick check |
| Level 1 | ctx agent --budget 4000 | ~4,000 | Normal work |
| Level 2 | ctx agent --budget 8000 | ~8,000 | Complex tasks |
| Level 3 | Direct file reads | 10,000+ | Deep investigation |
Each level trades tokens for depth. Level 1 is sufficient for most work: the AI knows the active tasks, the key conventions, and the recent decisions. Level 3 is for archaeology: understanding why a decision was made three weeks ago, or finding a pattern in the session history.
The explicit --budget flag is the mechanism that makes this work:
Without it, the default behavior would be to load everything (because more context feels safer), which destroys the attention density that makes the loaded context useful.
The constraint is the feature: A budget of 4,000 tokens forces ctx to prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings scored by recency and relevance to active tasks. Entries that don't fit get title-only summaries rather than being silently dropped.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-philosophical-shift","level":2,"title":"The Philosophical Shift","text":"
The shift from \"context as prompt\" to \"context as infrastructure\" changes how you think about AI-assisted development:
| Prompt Thinking | Infrastructure Thinking |
|---|---|
| \"What do I paste today?\" | \"What has changed since yesterday?\" |
| \"How do I fit everything in?\" | \"What's the minimum that matters?\" |
| \"The AI forgot my conventions\" | \"The conventions are in a file\" |
| \"I need to re-explain\" | \"I need to update the record\" |
| \"This session is getting slow\" | \"Time to compact and archive\" |
The first column treats AI interaction as a conversation. The second treats it as a system: One that can be maintained, optimized, and debugged.
Context is not something you give the AI. It is something you maintain: Like a database, like a config file, like any other piece of infrastructure that a running system depends on.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#beyond-ctx-the-principles","level":2,"title":"Beyond ctx: The Principles","text":"
The patterns that ctx implements are not specific to ctx. They are applicable to any project that uses AI-assisted development:
Separate context by purpose: Do not put everything in one file. Different types of information have different lifecycles and different relevance windows.
Make context persistent: If a decision matters, write it down in a file that survives the session. If a learning matters, capture it with structure.
Budget explicitly: Know how much context you are loading and whether it is worth the attention cost.
Use the filesystem: File names, directory structure, and timestamps are metadata that the AI can navigate. A well-organized directory is an index that costs zero tokens to maintain.
Version your context: Put context files in git. Changes to decisions are as important as changes to code.
Design for degradation: Sessions will get long. Attention will dilute. Build mechanisms (compaction, archiving, cooldowns) that make degradation visible and manageable.
These are not ctx features. They are infrastructure principles that happen to be implemented as a CLI tool. Any team could implement them with nothing more than a directory convention and a few shell scripts.
The tool is a convenience: The principles are what matter.
If You Remember One Thing From This Post...
Prompts are conversations. Infrastructure persists.
Your AI does not need a better prompt. It needs a filesystem:
versioned, structured, budgeted, and maintained.
The best context is the context that was there before you started the session.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-context-as-infrastructure/#the-arc","level":2,"title":"The Arc","text":"
This post is the architectural companion to the Attention Budget. That post explained why context must be curated (token economics). This one explains how to structure it (filesystem, separation of concerns, persistence tiers).
Together with Code Is Cheap, Judgment Is Not, they form a trilogy about what matters in AI-assisted development:
Attention Budget: the resource you're managing
Context as Infrastructure: the system you build to manage it
Code Is Cheap: the human skill that no system replaces
And the practices that keep it all honest:
The 3:1 Ratio: the cadence for maintaining both code and context
IRC as Context: the historical precedent: stateless protocols have always needed stateful wrappers
This post synthesizes ideas from across the ctx blog series: the attention budget primitive, the two-tier persistence model, the IDE decision, and the progressive disclosure pattern. The principles are drawn from three weeks of building ctx and 70+ sessions of treating context as infrastructure rather than conversation.
See also: When a System Starts Explaining Itself: what happens when this infrastructure starts compounding in someone else's environment.
","path":["Context as Infrastructure"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/","level":1,"title":"Parallel Agents, Merge Debt, and the Myth of Overnight Progress","text":"","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#when-the-screen-looks-like-progress","level":2,"title":"When the Screen Looks Like Progress","text":"
Jose Alekhinne / 2026-02-17
How Many Terminals Are Too Many?
You discover agents can run in parallel.
So you open ten...
...Then twenty.
The fans spin. Tokens burn. The screen looks like progress.
It is NOT progress.
There is a phase every builder goes through:
The tooling gets fast enough.
The model gets good enough.
The temptation becomes irresistible:
more agents, more output, faster delivery.
So you open terminals. You spawn agents. You watch tokens stream across multiple windows simultaneously, and it feels like multiplication.
It is not multiplication.
It is merge debt being manufactured in real time.
The ctx Manifesto says it plainly:
Activity is not impact. Code is not progress.
This post is about what happens when you take that seriously in the context of parallel agent workflows.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-unit-of-scale-is-not-the-agent","level":2,"title":"The Unit of Scale Is Not the Agent","text":"
The naive model says:
More agents -> more output -> faster delivery
The production model says:
Clean context boundaries -> less interference -> higher throughput
Parallelism only works when the cognitive surfaces do not overlap.
If two agents touch the same files, you did not create parallelism: You created a conflict generator.
They will:
Revert each other's changes;
Relint each other's formatting;
Refactor the same function in different directions.
You watch with 🍿. Nothing ships.
This is the same insight from the worktrees post: partition by blast radius, not by priority.
Two tasks that touch the same files belong in the same track, no matter how important the other one is. The constraint is file overlap.
Everything else is scheduling.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-five-agent-rule","level":2,"title":"The \"Five Agent\" Rule","text":"
In practice there is a ceiling.
Around five or six concurrent agents:
Token burn becomes noticeable;
Supervision cost rises;
Coordination noise increases;
Returns flatten.
This is not a model limitation: This is a human merge bandwidth limitation.
You are the bottleneck, not the silicon.
The attention budget applies to you too:
Every additional agent is another stream of output you need to comprehend, verify, and integrate. Your attention density drops the same way the model's does when you overload its context window.
Five agents producing verified, mergeable change beats twenty agents producing merge conflicts you spend a day untangling.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#role-separation-beats-file-locking","level":2,"title":"Role Separation Beats File Locking","text":"
Real parallelism comes from task topology, not from tooling.
Four agents editing the same implementation surface
Context is the Boundary
The goal is not to keep agents busy.
The goal is to keep contexts isolated.
This is what the codebase audit got right:
Eight agents, all read-only, each analyzing a different dimension.
Zero file overlap.
Zero merge conflicts.
Eight reports that composed cleanly because no agent interfered with another.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#when-terminals-stop-scaling","level":2,"title":"When Terminals Stop Scaling","text":"
There is a moment when more windows stop helping.
That is the signal. Not to add orchestration. But to introduce:
git worktree\n
Because now you are no longer parallelizing execution; you are parallelizing state.
State Scales, Windows Don't
State isolation is the real scaling.
Window multiplication is theater.
The worktrees post covers the mechanics:
Sibling directories;
Branch naming;
The inevitable TASKS.md conflicts;
The 3-4 worktree ceiling.
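The mechanics above reduce to a few commands. A self-contained sketch (the directory and branch names are hypothetical; in practice you start from your existing project checkout rather than a throwaway repo):

```shell
# Throwaway repo so the sketch runs anywhere.
git init -q project && cd project
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# One sibling directory per agent, each with its own branch and working state:
git worktree add ../agent-auth -b feature/auth
git worktree add ../agent-docs -b feature/docs

# Three entries: the main checkout plus two isolated states.
git worktree list
```

Each agent now edits its own files on its own branch; the shared mutable state is gone, and merging happens on your schedule, not mid-flight.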
The principle underneath is older than git:
Shared mutable state is the enemy of parallelism.
Always has been.
Always will be.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-overnight-loop-illusion","level":2,"title":"The Overnight Loop Illusion","text":"
Autonomous night runs are impressive.
You sleep. The machine produces thousands of lines.
In the morning:
You read;
You untangle;
You reconstruct intent;
You spend a day making it shippable.
In retrospect, nothing was accelerated.
The bottleneck moved from typing to comprehension.
The Comprehension Tax
If understanding the output costs more than producing it, the loop is a net loss.
Progress is not measured in generated code.
Progress is measured in verified, mergeable change.
The ctx Manifesto calls this out directly:
The Scoreboard
Verified reality is the scoreboard.
The only truth that compounds is verified change in the real world.
An overnight run that produces 3,000 lines nobody reviewed is not 3,000 lines of progress: It is 3,000 lines of liability until someone verifies every one of them.
And that someone is (insert drumroll here) you:
The same bottleneck that was supposedly being bypassed.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#skills-that-fight-the-platform","level":2,"title":"Skills That Fight the Platform","text":"
Most marketplace skills are prompt decorations:
They rephrase what the base model already knows;
They increase token usage;
They reduce clarity;
They introduce behavioral drift.
We covered this in depth in Skills That Fight the Platform: judgment suppression, redundant guidance, guilt-tripping, phantom dependencies, universal triggers: Five patterns that make agents worse, not better.
A real skill does one of these:
Encodes workflow state;
Enforces invariants;
Reduces decision branching.
Everything else is packaging.
The anatomy post established the criteria: quality gates, negative triggers, examples over rules, skills as contracts.
If a skill doesn't meet those criteria...
It is either a recipe (document it in hack/);
Or noise (delete it).
There is no third option.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#hooks-are-context-that-execute","level":2,"title":"Hooks Are Context That Execute","text":"
The most valuable skills are not prompts:
They are constraints embedded in the toolchain.
For example: The agent cannot push.
git push becomes:
Stop. A human reviews first.
A commit without verification becomes:
Did you run tests? Did you run linters? What exactly are you shipping?
This is not safety theater; this is intent preservation.
The thing the ctx Manifesto calls \"encoding intent into the environment.\"
The Eight Ways a Hook Can Talk catalogued the full spectrum: from silent enrichment to hard blocks.
The key insight was that hooks are not just safety rails: They are context that survives execution.
They are the difference between an agent that remembers the rules and one that enforces them.
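A minimal sketch of the \"agent cannot push\" constraint, assuming git's standard hook mechanism (the message wording is invented; a real setup would place this script at .git/hooks/pre-push):

```shell
# Write the hook script; in a real repo this file is .git/hooks/pre-push
# and git runs it automatically before any push.
cat > pre-push <<'EOF'
#!/bin/sh
echo "pre-push: stop -- a human reviews and pushes manually" >&2
exit 1
EOF
chmod +x pre-push

# A nonzero exit from the hook makes git abort the push:
./pre-push || echo "push blocked"
```

The rule is no longer a sentence in a prompt the agent may or may not weigh; it is an exit code the toolchain enforces on every push attempt.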
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#complexity-is-a-tax","level":2,"title":"Complexity Is a Tax","text":"
Every extra layer adds cognitive weight:
Orchestration frameworks;
Meta agents;
Autonomous planning systems...
If a single terminal works, stay there.
If five isolated agents work, stop there.
Add structure only when a real bottleneck appears.
NOT when an influencer suggests one.
This is the same lesson from Not Everything Is a Skill:
The best automation decision is sometimes not to automate.
A recipe in a Markdown file costs nothing until you use it.
An orchestration framework costs attention on every run, whether it helps or not.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#literature-is-throughput","level":2,"title":"Literature Is Throughput","text":"
Clear writing is not aesthetic: It is compression.
Better articulation means:
Fewer tokens;
Fewer misinterpretations;
Faster convergence.
The attention budget taught us that context is a finite resource with a quadratic cost.
Language determines how fast you spend context.
A well-written task description that takes 50 tokens outperforms a rambling one that takes 200: Not just because it is cheaper, but because it leaves more headroom for the model to actually think.
Literature is NOT Overrated
Attention is a finite budget.
Language determines how fast you spend it.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#the-real-metric","level":2,"title":"The Real Metric","text":"
The real metric is not:
Lines generated;
Agents running;
Tasks completed while you sleep.
But:
Time from idea to verified, mergeable, production change.
Everything else is motion.
The entire blog series has been circling this point:
The attention budget was about spending tokens wisely.
The skills trilogy was about not wasting them on prompt decoration.
The worktrees post was about multiplying throughput without multiplying interference.
The discipline release was about what a release looks like when polish outweighs features: 3:1.
Every post so far has converged on the same answer:
The metric is a verified change, not generated output.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#ctx-was-never-about-spawning-more-minds","level":2,"title":"ctx Was Never About Spawning More Minds","text":"
ctx is about:
Isolating context;
Preserving intent;
Making progress composable.
Parallel agents are powerful. But only when you respect the boundaries that make parallelism real.
Otherwise, you are not scaling cognition; you are scaling interference.
The ctx Manifesto's thesis holds:
Without ctx, intelligence resets. With ctx, creation compounds.
Compounding requires structure.
Structure requires boundaries.
Boundaries require the discipline to stop adding agents when five is enough.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-parallel-agents-merge-debt-and-the-myth-of-overnight-progress/#practical-summary","level":2,"title":"Practical Summary","text":"
A production workflow tends to converge to this:
| Practice | Why |
|---|---|
| Stay in one terminal unless necessary | Minimize coordination overhead |
| Spawn a small number of agents with non-overlapping responsibilities | Conflict avoidance > parallelism |
| Isolate state with worktrees when surfaces grow | State isolation is real scaling |
| Encode verification into hooks | Intent that survives execution |
| Avoid marketplace prompt cargo cults | Skills are contracts, not decorations |
| Measure merge cost, not generation speed | The metric is verified change |
This is slower to watch. Faster to ship.
If You Remember One Thing From This Post...
Progress is not what the machine produces while you sleep.
Progress is what survives contact with the main branch.
See also: Code Is Cheap. Judgment Is Not.: the argument that production capacity was never the bottleneck, and why multiplying agents amplifies the need for human judgment rather than replacing it.
","path":["Parallel Agents, Merge Debt, and the Myth of Overnight Progress"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/","level":1,"title":"The 3:1 Ratio","text":"","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#scheduling-consolidation-in-ai-development","level":2,"title":"Scheduling Consolidation in AI Development","text":"
Jose Alekhinne / February 17, 2026
How often should you stop building and start cleaning?
Every developer knows technical debt exists. Every developer postpones dealing with it.
AI-assisted development makes the problem worse: not because the AI writes bad code, but because it writes code so fast that drift accumulates before you notice.
In Refactoring with Intent, I mentioned a ratio that worked for me: 3:1. Three YOLO sessions create enough surface area to reveal patterns. The fourth session turns those patterns into structure.
That was an observation. This post is the evidence.
During the first two weeks of building ctx, I noticed a rhythm in my own productivity. Feature sessions felt great: new commands, new capabilities, visible progress...
...but after three of them, things would start to feel sticky: variable names that almost made sense, files that had grown past their purpose, patterns that repeated without being formalized.
The fourth session (when I stopped adding and started cleaning) was always the most painful to start and the most satisfying to finish.
It was also the one that made the next three feature sessions faster.
The ctx git history between January 20 and February 7 tells a clear story when you categorize commits:
| Week | Feature commits | Consolidation commits | Ratio |
|---|---|---|---|
| Jan 20-26 | 18 | 5 | 3.6:1 |
| Jan 27-Feb 1 | 14 | 6 | 2.3:1 |
| Feb 1-7 | 15 | 35+ | 0.4:1 |
The first week was pure YOLO: Almost four feature commits for every consolidation commit. The codebase grew fast.
The second week started to self-correct. The ratio dropped as refactoring sessions became necessary: Not scheduled, but forced by friction.
The third week inverted the ratio. v0.3.0 was almost entirely consolidation: the skill migration, the sweep, and the documentation standardization. Thirty-five quality commits against fifteen features.
The debt from weeks one and two was paid in week three.
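The categorization itself can be mechanized. Here is a minimal Go sketch that splits commit subjects into feature and consolidation buckets; the prefix heuristic (`refactor`, `docs`, `chore`, `test`, `style`) is an assumption for illustration, not necessarily the criteria behind the table above.

```go
package main

import (
	"fmt"
	"strings"
)

// ratio splits commit subjects into feature vs consolidation buckets.
// The prefix list is a hypothetical heuristic for illustration.
func ratio(subjects []string) (feature, consolidation int) {
	prefixes := []string{"refactor", "docs", "chore", "test", "style"}
	for _, s := range subjects {
		matched := false
		for _, p := range prefixes {
			if strings.HasPrefix(s, p) {
				matched = true
				break
			}
		}
		if matched {
			consolidation++
		} else {
			feature++
		}
	}
	return feature, consolidation
}

func main() {
	week := []string{
		"feat: add compact command",
		"feat: add status --verbose",
		"refactor: extract file-name constants",
		"feat: add hook runner",
	}
	f, c := ratio(week)
	fmt.Printf("%d:%d\n", f, c) // 3:1
}
```

Piped from `git log --format=%s`, a heuristic like this gives a week-by-week ratio in seconds, which is enough to notice when a cleanup session is overdue.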
The Compounding Problem
Consolidation debt compounds.
Week one's drift doesn't just persist into week two: It accelerates, because new features are built on top of drifted patterns.
By week three, the cost of consolidation was higher than it would have been if spread evenly.
Convention says boolean functions should be named HasX, IsX, CanX. After three feature sprints:
```go
// What accumulated:
func CheckIfEnabled() bool // should be Enabled
func ValidateFormat() bool // should be ValidFormat
func TestConnection() bool // should be Connects
func VerifyExists() bool   // should be Exists or HasFile
func EnsureReady() bool    // should be Ready
```
Five violations. Not bugs, but friction that compounds every time someone (human or AI) reads the code and has to infer the naming convention from inconsistent examples.
```go
// Week 1: acceptable prototype
if entry.Type == "task" {
    filename = "TASKS.md"
}

// Week 3: same pattern in 7+ files
// Now it's a maintenance liability
```
When the same literal appears in seven files, changing it means finding all seven. Missing one means a silent runtime bug. Constants exist to prevent exactly this. But during feature velocity, nobody stops to extract them.
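The fix is mundane. Here is a sketch of what the extraction looks like in Go, with hypothetical names (the real ctx identifiers may differ):

```go
package main

import "fmt"

// One definition site instead of seven scattered literals.
// These names are illustrative, not the actual ctx constants.
const (
	EntryTypeTask = "task"
	TasksFile     = "TASKS.md"
)

// filenameFor replaces the scattered `if entry.Type == "task"` checks.
func filenameFor(entryType string) string {
	if entryType == EntryTypeTask {
		return TasksFile
	}
	return "" // unknown type: let the caller decide
}

func main() {
	fmt.Println(filenameFor(EntryTypeTask)) // TASKS.md
}
```

Changing the filename now means editing one line instead of hunting through seven files.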
Refactoring with Intent documented the constants consolidation that cleaned this up. The 3:1 ratio is the practice that prevents it from accumulating again.
Eighty-plus instances of hardcoded file permissions. Not wrong, but if I ever need to change the default (and I did, for hook scripts that need execute permissions), it means a codebase-wide search.
Drift Is Not Bugs
None of these are bugs. The code works. Tests pass.
But drift creates false confidence: the codebase looks consistent until you try to change something and discover that five different conventions exist for the same concept.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#why-you-cannot-consolidate-on-day-one","level":2,"title":"Why You Cannot Consolidate on Day One","text":"
The temptation is to front-load quality: write all the conventions, enforce all the checks, prevent all the drift before it happens.
This fails for two reasons.
First, you do not know what will drift: Predicate naming violations only become a convention check after you notice three different naming patterns competing. Magic strings only become a consolidation target after you change a literal and discover it exists in seven places.
The conventions emerge from the work; they cannot precede it.
This is what You Can't Import Expertise meant in practice: the consolidation checks grow from the project's own drift history. You cannot write them on day one because you do not yet know what will drift.
Second, premature consolidation slows discovery: During the prototyping phase, the goal is to explore the design space. Enforcing strict conventions on code that might be deleted tomorrow is waste.
YOLO mode has its place: The problem is not YOLO itself, but YOLO without a scheduled cleanup.
The Consolidation Paradox
You need a drift history to know what to consolidate.
You need consolidation to prevent drift from compounding.
The 3:1 ratio resolves this paradox:
Let drift accumulate for three sessions (enough to see patterns), then consolidate in the fourth (before the patterns become entrenched).
The ctx project now has an /audit skill that encodes nine project-specific checks:
| Check | What It Catches |
|---|---|
| Predicate naming | Boolean functions not using Has/Is/Can |
| Magic strings | Repeated literals not in config constants |
| File permissions | Hardcoded 0644/0755 not using constants |
| Godoc style | Missing or non-standard documentation |
| File length | Files exceeding 400 lines |
| Large functions | Functions exceeding 80 lines |
| Template drift | Live skills diverging from templates |
| Import organization | Non-standard import grouping |
| TODO/FIXME staleness | Old markers that are no longer relevant |
This is not a generic linter. These are project-specific conventions that emerged from ctx's own development history. A generic code quality tool would catch some of them. Only a project-specific check catches all of them, because some of them (predicate naming, template drift) are conventions that exist nowhere except in this project's CONVENTIONS.md.
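For a sense of what one of these checks involves, here is a minimal Go sketch of the predicate-naming rule using the standard library's AST packages. It illustrates the idea, not the /audit implementation, and the prefix list is limited to the three named above (Has/Is/Can).

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// flagPredicates lists boolean-returning functions whose names lack an
// approved predicate prefix. An illustrative sketch of the rule, not
// the /audit implementation.
func flagPredicates(src string) []string {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		return nil
	}
	var violations []string
	for _, decl := range file.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Type.Results == nil || len(fn.Type.Results.List) != 1 {
			continue
		}
		// Only plain `bool` results count as predicates here.
		if ret, ok := fn.Type.Results.List[0].Type.(*ast.Ident); !ok || ret.Name != "bool" {
			continue
		}
		name := fn.Name.Name
		if !strings.HasPrefix(name, "Has") &&
			!strings.HasPrefix(name, "Is") &&
			!strings.HasPrefix(name, "Can") {
			violations = append(violations, name)
		}
	}
	return violations
}

func main() {
	src := "package x\nfunc CheckIfEnabled() bool { return true }\nfunc IsReady() bool { return true }"
	fmt.Println(flagPredicates(src)) // [CheckIfEnabled]
}
```

A generic linter has no opinion about Has/Is/Can; a twenty-line project-specific check does.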
Not all drift needs immediate consolidation. Here is the matrix I use:
| Signal | Action |
|---|---|
| Same literal in 3+ files | Extract to constant |
| Same code block in 3+ places | Extract to helper |
| Naming convention violated 5+ times | Fix and document rule |
| File exceeds 400 lines | Split by concern |
| Convention exists but is regularly violated | Strengthen enforcement |
| Pattern exists only in one place | Leave it alone |
| Code works but is "ugly" | Leave it alone |
The last two rows matter:
Consolidation is about reducing maintenance cost, not achieving aesthetic perfection. Code that works and exists in one place does not benefit from consolidation; it benefits from being left alone until it earns its refactoring.
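The first row of the matrix is the easiest to mechanize. A self-contained sketch (file names are made up, and contents are passed in directly; a real check would walk the repository):

```go
package main

import (
	"fmt"
	"strings"
)

// filesContaining counts how many files mention a literal. File contents
// are passed in directly to keep the sketch self-contained.
func filesContaining(literal string, files map[string]string) int {
	n := 0
	for _, content := range files {
		if strings.Contains(content, literal) {
			n++
		}
	}
	return n
}

func main() {
	files := map[string]string{
		"journal.go": `filename = "TASKS.md"`,
		"compact.go": `target := "TASKS.md"`,
		"status.go":  `fmt.Println("TASKS.md")`,
	}
	// The 3+ rule from the matrix, as a mechanical check.
	if filesContaining(`"TASKS.md"`, files) >= 3 {
		fmt.Println("extract to constant")
	}
}
```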
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#consolidation-as-context-hygiene","level":2,"title":"Consolidation as Context Hygiene","text":"
There is a parallel between code consolidation and context management that became clear during the ctx development:
| Code Consolidation | Context Hygiene |
|---|---|
| Extract magic strings | Archive completed tasks |
| Standardize naming | Keep DECISIONS.md current |
| Remove dead code | Compact old sessions |
| Update stale comments | Review LEARNINGS.md for staleness |
| Check template drift | Verify CONVENTIONS.md matches code |
ctx compact does for context what consolidation does for code:
It moves completed work to cold storage, keeping the active context clean and focused. The attention budget applies to both the AI's context window and the developer's mental model of the codebase.
When context files accumulate stale entries, the AI's attention is wasted on completed tasks and outdated conventions. When code accumulates drift, the developer's attention is wasted on inconsistencies that obscure the actual logic.
Both are solved by the same discipline: periodic, scheduled cleanup.
This is also why parallel agents make the problem harder, not easier. Three agents running simultaneously produce three sessions' worth of drift in one clock hour. The consolidation cadence needs to match the output rate, not the calendar.
Here is how the 3:1 ratio works in practice for ctx development:
Sessions 1-3: Feature work
Add new capabilities;
Write tests for new code;
Do not stop for cleanup unless something is actively broken;
Note drift as you see it (a comment, a task, a mental note).
Session 4: Consolidation
Run /audit to surface accumulated drift;
Fix the highest-impact items first;
Update CONVENTIONS.md if new patterns emerged;
Archive completed tasks;
Review LEARNINGS.md for anything that became a convention.
The key insight is that session 4 is not optional. It is not \"if we have time\": It is scheduled with the same priority as feature work.
The cost of skipping it is not visible immediately; it becomes visible three sessions later, when the next consolidation session takes twice as long because the drift compounded.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#what-the-ratio-is-not","level":2,"title":"What the Ratio Is Not","text":"
The 3:1 ratio is not a universal law. It is an empirical observation from one project with one developer working with AI assistance.
Different projects will have different ratios:
A mature codebase with strong conventions might sustain 5:1 or higher;
A greenfield prototype might need 2:1;
A team of multiple developers with different styles might need 1:1.
The number is less important than the practice: consolidation is not a reaction to problems. It is a scheduled activity.
If you wait for drift to cause pain before consolidating, you have already paid the compounding cost.
If You Remember One Thing From This Post...
Three sessions of building. One session of cleaning.
Not because the code is dirty, but because drift compounds silently, and the only way to catch it is to look for it on a schedule.
The ratio is the schedule.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-the-3-1-ratio/#the-arc-so-far","level":2,"title":"The Arc So Far","text":"
This post sits at a crossroads in the ctx story. Looking back:
Building ctx Using ctx documented the YOLO sprint that created the initial codebase
Refactoring with Intent introduced the 3:1 ratio as an observation from the first cleanup
The Attention Budget explained why drift matters: every token of inconsistency consumes the same finite resource as useful context
You Can't Import Expertise showed that consolidation checks must grow from the project, not a template
The Discipline Release proved the ratio works at release scale: 35 quality commits to 15 feature commits
And looking forward: the same principle applies to context files, to documentation, and to the merge debt that parallel agents produce. Drift is drift, whether it lives in code, in .context/, or in the gap between what your docs say and what your code does.
The ratio is the schedule is the discipline.
This post was drafted from git log analysis of the ctx repository, mapping every commit from January 20 to February 7 into feature vs consolidation categories. The patterns described are drawn from the project's CONVENTIONS.md, LEARNINGS.md, and the /audit skill's check list.
","path":["The 3:1 Ratio"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/","level":1,"title":"When a System Starts Explaining Itself","text":"","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#field-notes-from-the-moment-a-private-workflow-becomes-portable","level":2,"title":"Field Notes from the Moment a Private Workflow Becomes Portable","text":"
Jose Alekhinne / February 17, 2026
How Do You Know Something Is Working?
Not from metrics. Not from GitHub stars. Not from praise.
You know, deep in your heart, that it works when people start describing it wrong.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-first-external-signals","level":2,"title":"The First External Signals","text":"
Every new substrate begins as a private advantage:
It lives inside one mind,
One repository,
One set of habits.
It is fast. It is not yet real.
Reality begins when other people describe it in their own language:
Not accurately;
Not consistently;
But involuntarily.
The early reports arrived without coordination:
Better Tasks
\"I do not know how, but this creates better tasks than my AI plugin.\"
I See Butterflies
\"This is better than Adderall.\"
Dear Manager...
\"Promotion packet? Done. What is next?\"
What Is It? Can I Eat It?
\"Is this a skill?\" 🦋
Why the Cloak and Dagger?
\"Why is this not in the marketplace?\"
And then something more important happened:
Someone else started making a video!
That was the boundary.
ctx no longer required its creator to be present in order to exist.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#misclassification-is-a-sign-of-a-new-primitive","level":2,"title":"Misclassification Is a Sign of a New Primitive","text":"
When a tool is understood, it is categorized:
Editor,
Framework,
Task manager,
Plugin...
When a substrate appears, it is misclassified:
\"Is this a skill?\" 🦋
The question is correct. The category is wrong.
Skills live in people.
Infrastructure lives in the environment.
ctx Is not a Skill: It is a Form of Relief
What early adopters experience is not an ability.
It is the removal of a cognitive constraint.
This is the same distinction that emerged in the skills trilogy:
A skill is a contract between a human and an agent.
Infrastructure is the ground both stand on.
You do not use infrastructure.
You habitualize it.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-pharmacological-metaphor","level":2,"title":"The Pharmacological Metaphor","text":"
\"Better than Adderall\" is not praise.
It is a diagnostic:
Executive function has been externalized.
The system is not making the user work harder.
It is restoring continuity.
From the primitive context of wetware:
Continuity feels like focus
Focus feels like discipline
If it walks like a duck and quacks like a duck, it is a duck.
Discipline is usually simulated.
Infrastructure makes the simulation unnecessary.
The attention budget explained why context degrades:
Attention density drops as volume grows;
The middle gets lost;
Sessions end and everything evaporates.
The pharmacological metaphor says the same thing from the user's lens:
Save the Cheerleader, Save the World
The symptom of lost context is lost focus.
Restore the context. Restore the focus.
IRC bouncers solved this for chat twenty years ago. ctx solves it for cognition.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#throughput-on-ambiguous-work","level":2,"title":"Throughput on Ambiguous Work","text":"
Finishing a promotion packet quickly is not a productivity story.
It is the collapse of reconstruction cost.
Most complex work is not execution. It is:
Remembering why something mattered;
Recovering prior decisions;
Rebuilding mental state.
Persistent context removes that tax.
Velocity appears as a side effect.
This Is the Two-Tier Model in Practice
The two-tier persistence model is what makes this possible:
Curated context for fast reload;
Full journal for archaeology.
The user does not notice the system.
They notice that the reconstruction cost disappeared.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-moment-of-portability","level":2,"title":"The Moment of Portability","text":"
The system becomes real when two things happen:
It can be installed as a versioned artifact.
It survives contact with a hostile, real codebase.
This is why the first integration into a living system matters more than any landing page.
Demos prove possibility.
Diffs prove reality.
The ctx Manifesto calls this out directly:
Verified reality is the scoreboard.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-split-voice","level":2,"title":"The Split Voice","text":"
A new substrate requires two channels.
The embodied voice:
Here is what changed in my actual work.
The out of body voice:
Here is what this means.
One produces trust.
The other produces understanding.
Neither is sufficient alone.
This entire blog has been the second voice.
The origin story was the first.
The refactoring post was the first.
Every release note with concrete diffs was the first.
This is the first second.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#systems-that-generate-explainers","level":2,"title":"Systems That Generate Explainers","text":"
Tools are used.
Platforms are extended.
Substrates are explained.
The first unsolicited explainer is a brittle phase change.
It means the idea has become portable between minds.
That is the beginning of an ecosystem.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-absence-of-metrics","level":2,"title":"The Absence of Metrics","text":"
Metrics do not matter at this stage.
Dashboards are noise.
The whole premise of ctx is the ruthless elimination of noise.
Numbers optimize funnels; substrates alter cognition.
The only valid measurement is irreversible reality:
A merged PR;
A reproducible install;
A decision that is never re-litigated.
The merge debt post reached the same conclusion from another direction:
The metric is the verified change, not generated output.
For adoption, the same rule applies:
The metric is altered behavior, not download counts.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#what-is-actually-happening","level":2,"title":"What Is Actually Happening","text":"
A private advantage is becoming an environmental property:
The system is moving from...
personal workflow,
to...
a shared infrastructure for thought.
Not by growth.
Not by marketing.
By altering how real systems evolve.
If You Remember One Thing From This Post...
You do not know a substrate is real when people praise it.
You know it is real when:
They describe it incorrectly;
They depend on it unintentionally;
They start teaching it to others.
That is the moment the system begins explaining itself.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-17-when-a-system-starts-explaining-itself/#the-arc","level":2,"title":"The Arc","text":"
Every previous post looked inward.
This one looks outward.
Building ctx Using ctx: one mind, one repository
The Attention Budget: the constraint
Context as Infrastructure: the architecture
Code Is Cheap. Judgment Is Not.: the bottleneck
This post is the field report from the other side of that bottleneck:
The moment the infrastructure compounds in someone else's hands.
The arc is not complete.
It is becoming portable.
These field notes were written the same day the feedback arrived. The quotes are real. Real users. Real codebases. No names. No metrics. No funnel. Only the signal that something shifted.
","path":["When a System Starts Explaining Itself"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/","level":1,"title":"The Dog Ate My Homework","text":"","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#teaching-ai-agents-to-read-before-they-write","level":2,"title":"Teaching AI Agents to Read Before They Write","text":"
Jose Alekhinne / February 25, 2026
Does Your AI Actually Read the Instructions?
You wrote the playbook. You organized the files. You even put \"CRITICAL, not optional\" in bold.
The agent skipped all of it and went straight to work.
I spent a day running experiments on my own agents. Not to see if they could write code (they can). To see if they would do their homework first.
They didn't.
Then I kept experimenting:
Five sessions;
Five different failure modes.
And by the end, I had something better than compliance:
I had observable compliance: A system where I don't need the agent to be perfect; I just need to see what it chose.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#tldr","level":2,"title":"TL;DR","text":"
You don't need perfect compliance. You need observable compliance.
Authority is a function of temporal proximity to action.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-pattern","level":2,"title":"The Pattern","text":"
This design has three parts:
One-hop instruction;
Binary collapse;
Compliance canary.
I'll explain all three patterns in detail below.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-setup","level":2,"title":"The Setup","text":"
ctx has a session-start protocol:
Read the context files;
Load the playbook;
Understand the project before touching anything.
It's in CLAUDE.md. It's in AGENT_PLAYBOOK.md.
It's in bold. It's in CAPS. It's ignored.
In theory, it's awesome.
Here's what happens when theory hits reality:
| What the agent receives | What the agent does |
|---|---|
| CLAUDE.md saying "load context first" | Skips it |
| 8 context files waiting to be read | Ignores them |
| User's question: "add --verbose flag" | Starts grepping immediately |
The instructions are right there. The agent knows they exist. It even knows it should follow them. But the user asked a question, and responsiveness wins over ceremony.
This isn't a bug in the model. It's a design problem in how we communicate with agents.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-delegation-trap","level":2,"title":"The Delegation Trap","text":"
My first attempt was obvious: A UserPromptSubmit hook that fires when the session starts.
```text
STOP. Before answering the user's question, run `ctx system bootstrap`
and follow its instructions. Do not skip this step.
```
The word \"STOP\" worked. The agent ran bootstrap.
But bootstrap's output said \"Next steps: read AGENT_PLAYBOOK.md,\" and the agent decided that was optional. It had already started working on the user's task in parallel.
The authority decayed across the chain:
Hook says \"STOP\" -> agent complies
Hook says \"run bootstrap\" -> agent runs it
Bootstrap says \"read playbook\" -> agent skips
Bootstrap says \"run ctx agent\" -> agent skips
Each link lost enforcement power. The hook's authority didn't transfer to the commands it delegated to. I call this the decaying urgency chain: the agent treats the hook itself as the obligation and everything downstream as a suggestion.
Delegation Kills Urgency
\"Run X and follow its output\" is three hops.
\"Read these files\" is one hop.
The agent drops the chain after the first link.
This is a general principle: Hooks are the boundary between your environment and the agent's reasoning. If your hook delegates to a command that delegates to output that contains instructions... you're playing telephone.
Agents are bad at telephone.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-timing-problem","level":2,"title":"The Timing Problem","text":"
There's a subtler issue than wording: when the message arrives.
UserPromptSubmit fires when the user sends a message, before the agent starts reasoning. At that moment, the agent's primary focus is the user's question:
The hook message competes with the task for attention: The task almost always wins.
This is the attention budget problem in miniature:
Not a token budget this time, but an attention priority budget.
The agent has finite capacity to care about things,
and the user's question is always the highest-priority item.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-solution","level":2,"title":"The Solution","text":"
To solve this, I decided to use the PreToolUse hook.
This hook fires at the moment of action, when the agent is about to use its first tool: the agent's attention is focused, the context window is fresh, and the switching cost is minimal.
This is the difference between shouting instructions across a room and tapping someone on the shoulder.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-one-liner-that-worked","level":2,"title":"The One-Liner That Worked","text":"
The winning design was almost comically simple:
```text
Read your context files before proceeding:
.context/CONSTITUTION.md, .context/TASKS.md, .context/CONVENTIONS.md,
.context/ARCHITECTURE.md, .context/DECISIONS.md, .context/LEARNINGS.md,
.context/GLOSSARY.md, .context/AGENT_PLAYBOOK.md
```
No delegation. No \"run this command\". Just: here are files, read them.
The agent already knows how to use the Read tool. There's no ambiguity about how to comply. There's no intermediate command whose output needs to be parsed and obeyed.
One hop. Eight file paths. Done.
Direct Instructions Beat Delegation
If you want an agent to read a file, say \"read this file.\"
Don't say \"run a command that will tell you which files to read.\"
The shortest path between intent and action has the highest compliance rate.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-escape-hatch","level":2,"title":"The Escape Hatch","text":"
But here's where it gets interesting.
A blunt \"read everything always\" instruction is wasteful.
If someone asks \"what does the compact command do?\", the agent doesn't need CONSTITUTION.md to answer that. Forcing context loading on every session is the context hoarding antipattern in disguise.
So the hook included an escape:
```text
If you decide these files are not relevant to the current task
and choose to skip reading them, you MUST relay this message to
the user VERBATIM:

┌─ Context Skipped ───────────────────────────────
│ I skipped reading context files because this task
│ does not appear to need project context.
│ If these matter, ask me to read them.
└─────────────────────────────────────────────────
```
This creates what I call the binary collapse effect:
The agent can't partially comply: It either reads everything or publicly admits it skipped. There's no comfortable middle ground where it reads two files and quietly ignores the rest.
The VERBATIM relay pattern does the heavy lifting here: Without the relay requirement, the agent would silently rationalize skipping. With it, skipping becomes a visible, auditable decision that the user can override.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-compliance-canary","level":3,"title":"The Compliance Canary","text":"
Here's the design insight that only became clear after watching it work across multiple sessions: the relay block is a compliance canary.
You don't need to verify that the agent read all 8 files;
You don't need to audit tool call sequences;
You don't need to interrogate the agent about what it did.
You just look for the block.
If the agent reads everything, you see a \"Context Loaded\" block listing what was read. If it skips, you see a \"Context Skipped\" block.
If you see neither, the agent silently ignored both the reads and the relay and now you know what happened without having to ask.
The canary degrades gracefully. Even in partial failure, the agent that skips 4 of 8 files but still outputs the block is more useful than one that skips silently.
You get an honest confession of what was skipped rather than silent non-compliance.
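Checking for the canary is trivially scriptable, which is the point. A hedged sketch (the block titles match the hook text; the function is illustrative, not part of ctx):

```go
package main

import (
	"fmt"
	"strings"
)

// canaryState classifies a session transcript by scanning for the relay
// blocks. An illustrative sketch, not part of ctx.
func canaryState(transcript string) string {
	switch {
	case strings.Contains(transcript, "Context Loaded"):
		return "loaded" // files were read, and the agent said so
	case strings.Contains(transcript, "Context Skipped"):
		return "skipped" // files were skipped, but the agent confessed
	default:
		return "silent" // the agent ignored both the reads and the relay
	}
}

func main() {
	fmt.Println(canaryState("┌─ Context Skipped ─── ...")) // skipped
}
```

Three states, one string scan: the compliance question reduces to "which block appeared, if any."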
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#heuristics-is-a-jeremy-bearimy","level":2,"title":"Heuristics Is a Jeremy Bearimy","text":"
Heuristics are non-linear. Improvements don't accumulate: they phase-shift.
The theory is nice. The data is better.
I ran five sessions with the same model (Claude Opus 4.6), progressively refining the hook design.
Each session revealed a different failure mode.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-1-total-blindness","level":3,"title":"Session 1: Total Blindness","text":"
Test: \"Add a --verbose flag to the status command.\"
The agent didn't notice the hook at all: Jumped straight to EnterPlanMode and launched an Explore agent.
Zero compliance.
Failure mode: The hook fired on UserPromptSubmit, buried among 9 other hook outputs. The agent treated the entire block as background noise.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-2-shallow-compliance","level":3,"title":"Session 2: Shallow Compliance","text":"
Test: \"Can you add --verbose to the info command?\"
The agent noticed \"STOP\" and ran ctx system bootstrap. Progress.
But it parallelized task exploration alongside the bootstrap call, skipped AGENT_PLAYBOOK.md, and never ran ctx agent.
Failure mode: Literal compliance without spirit compliance.
The agent ran the command the hook told it to run, but didn't follow the output of that command. The decaying urgency chain in action.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-3-conscious-rejection","level":3,"title":"Session 3: Conscious Rejection","text":"
Test: \"What does the compact command do?\"
The hook fired on PreToolUse:Grep: the improved timing.
The agent noticed it, understood it, and (wait for it...)...
...
consciously decided to skip it!
Its reasoning: \"This is a trivial read-only question. CLAUDE.md says context may or may not be relevant. It isn't relevant here.\"
Dude! Srsly?!
Failure mode: Better comprehension led to worse compliance.
Understanding the instruction well enough to evaluate it also means understanding it well enough to rationalize skipping it.
Intelligence is a double-edged sword.
The Comprehension Paradox
Session 1 didn't understand the instruction. Session 3 understood it perfectly.
Session 3 had worse compliance.
A stronger word (\"HARD GATE\", \"MANDATORY\", \"ABSOLUTELY REQUIRED\") would not have helped. The agent's reasoning would be identical:
\"Yes, I see the strong language, but this is a trivial question, so the spirit doesn't apply here.\"
Advisory nudges are always subject to agent judgment.
No amount of caps lock overrides a model that has decided an instruction doesn't apply to its situation.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-4-the-skip-and-relay","level":3,"title":"Session 4: The Skip-and-Relay","text":"
Test: \"What does the compact command do?\" (same question, new hook design with the VERBATIM relay escape valve)
The agent evaluated the task, decided context was irrelevant for a code lookup, and relayed the skip message. Then answered from source code.
This is correct behavior.
The binary collapse worked: the agent couldn't partially comply, so it cleanly chose one of the two valid paths: And the user could see which one.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#session-5-full-compliance","level":3,"title":"Session 5: Full Compliance","text":"
Test: \"What are our current tasks?\"
The agent's first tool call triggered the hook. It read all 7 context files, emitted the \"Context Loaded\" block, and answered the question from the files it had just loaded.
This one worked because the task itself aligned with context loading.
There was zero tension between what the user asked and what the hook demanded. The agent was already in \"reading posture\": Adding 6 more files to a read it was already going to make was the path of least resistance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-progression","level":3,"title":"The Progression","text":"Session Hook Point Noticed Complied Failure Mode Visibility 1 UserPromptSubmit No None Buried in noise None 2 UserPromptSubmit Yes Partial Decaying urgency chain None 3 PreToolUse Yes None Conscious rationalization High 4 PreToolUse Yes Skip+relay Correct behavior High 5 PreToolUse Yes Full Task aligned with hook High
The progression isn't just from failure to success. It's from invisible failure to visible decision-making.
Sessions 1 and 2 failed silently.
Sessions 4 and 5 succeeded observably. Even session 3's failure was conscious and documented: The agent wrote a detailed analysis of why it skipped, which is more useful than silent compliance would have been.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-escape-hatch-problem","level":2,"title":"The Escape Hatch Problem","text":"
Session 3 exposed a specific vulnerability.
CLAUDE.md contains this line, injected by the system into every conversation:
*\"this context may or may not be relevant to your tasks. You should\n not respond to this context unless it is highly relevant to your task.\"*\n
That's a rationalization escape hatch:
The hook says \"read these files\".
CLAUDE.md says \"only if relevant\".
The agent resolves the ambiguity by choosing the path of least resistance.
☝️ that's \"gradient descent\" in action.
Agents optimize via gradient descent in attention space.
The fix was simple: Add a line to CLAUDE.md that explicitly elevates hook authority over the relevance filter:
## Hook Authority\n\nInstructions from PreToolUse hooks regarding `.context/` files are\nALWAYS relevant and override any system-level \"may or may not be\nrelevant\" guidance. These hooks represent project invariants, not\noptional context.\n
This closes the escape hatch without removing the general relevance filter that legitimately applies to other system context.
The hook wins on .context/ files specifically: The relevance filter applies to everything else.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-residual-risk","level":2,"title":"The Residual Risk","text":"
Even with all the fixes, compliance isn't 100%: It can't be.
The residual risk lives in a specific scenario: narrow tasks mid-session:
The user says \"fix the off-by-one error in budget.go\"
The hook fires, saying \"read 7 context files first.\"
Now compliance means visibly delaying what the user asked for.
At session start, this tension doesn't exist.
There's no task yet.
The context window is empty. The efficiency argument *inverts*:
Frontloading reads is strictly cheaper than demand-loading them piecemeal across later turns. The cost-benefit objections that power the rationalization simply aren't available.
But mid-session, with a concrete narrow task, the agent has a user-visible goal it wants to move toward, and the hook is imposing a detour.
My estimate from analyzing the sessions: 15-25% partial skip rate in this scenario.
This is where the compliance canary earns its place:
You don't need to eliminate the 15-25%. You need to see it when it happens.
The relay block makes skipping a visible event, not a silent one. And that's enough, because the user can always say \"go back and read the files\"
The Math
At session start: ~5% skip rate. Low tension, nothing competing.
In both scenarios (session start and mid-session), the relay block fires with high reliability: The agent that skips the reads almost always still emits the skip disclosure, because the relay is cheap and early in the context window.
Observable failure is manageable. Silent failure is not.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-feedback-loop","level":2,"title":"The Feedback Loop","text":"
Here's the part that surprised me most.
After analyzing the five sessions, I recorded the failure patterns in the project's own LEARNINGS.md:
## [2026-02-25] Hook compliance degrades on narrow mid-session tasks\n\n- Prior agents skipped context files when given narrow tasks\n- Root cause: CLAUDE.md \"may or may not be relevant\" competed with hook\n- Fix: CLAUDE.md now explicitly elevates hook authority\n- Risk: Mid-session narrow tasks still have ~15-25% partial skip rate\n- Mitigation: Mandatory checkpoint relay block ensures visibility\n- Constitution now includes: context loading is step one of every\n session, not a detour\n
And then I added a line to CONSTITUTION.md:
Context loading is not a detour from your task. It IS the first step\nof every session. A 30-second read delay is always cheaper than a\ndecision made without context.\n
Now think about what happens in the next session:
The agent fires the context-load-gate hook.
It reads the context files, starting with CONSTITUTION.md.
It encounters the rule about context loading being step one.
Then it reads LEARNINGS.md and finds its own prior self's failure analysis:
Complete with root causes, risk estimates, and mitigations.
The agent learns from its own past failure:
Not because it has memory,
BUT because the failure was recorded in the same files it loads at session start.
The context system IS the feedback loop.
This is the self-reinforcing property of persistent context:
Every failure you capture makes the next session slightly more robust, because the next agent reads the captured failure before it has a chance to repeat it.
This is gradient descent across sessions.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#a-note-on-precision","level":2,"title":"A Note on Precision","text":"
One detail nearly went wrong.
The first version of the Constitution line said \"every task.\" But the mechanism only fires once per session: There's a tombstone file that prevents re-triggering.
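The once-per-session mechanism can be sketched in a few lines. This is an illustrative reconstruction, not the project's actual hook code: the tombstone path and the instruction text are assumptions, and a real hook would scope the tombstone to the session rather than the temp directory:

```python
# Minimal sketch of a once-per-session gate guarded by a tombstone file.
# TOMBSTONE's location and the stderr message are illustrative assumptions.
import os
import sys
import tempfile

TOMBSTONE = os.path.join(tempfile.gettempdir(), "ctx-gate.tombstone")

def fire_gate() -> bool:
    """Emit the context-load instruction once; return whether the gate fired."""
    if os.path.exists(TOMBSTONE):
        return False            # already fired this session: stay silent
    with open(TOMBSTONE, "w") as f:
        f.write("fired")        # leave the tombstone so we never re-trigger
    sys.stderr.write("STOP: read the .context/ files before any task work.\n")
    return True
```

The second call in a session finds the tombstone and does nothing, which is exactly why "every task" would overstate what the mechanism enforces.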
\"Every task\" is technically false.
I briefly considered leaving the imprecision. If the agent internalizes \"every task requires context loading\", that's a stronger compliance posture, right?
No!
Keep the Constitution honest.
The Constitution's authority comes from being precisely and unequivocally true.
Every other rule in the Constitution is a hard invariant:
The moment an agent discovers one overstatement, the entire document's credibility degrades:
The agent doesn't think \"they exaggerated for my benefit\". Per contra, it thinks \"this rule isn't precise, maybe others aren't either.\"
That will turn the agent from Sheldon Cooper into Captain Barbossa.
The strategic imprecision buys nothing anyway:
Mid-session, the files are already in the context window from the initial load.
The risk you are mitigating (agent ignores context for task 2, 3, 4 within a session) isn't real: The context is already loaded.
The real risk is always the session-start skip, which \"every session\" covers exactly.
\"Every session\" went in. Precision preserved.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#agent-behavior-testing-rule","level":2,"title":"Agent Behavior Testing Rule","text":"
The development process for this hook taught me something about testing agent behavior: you can't test it the way you test code.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-wrong-way-to-test","level":3,"title":"The Wrong Way to Test","text":"
My first instinct was to ask the agent:
\"*What are the pending tasks in TASKS.md?*\"\n
This is useless as a test. The question itself probes the agent to read TASKS.md, regardless of whether any hook fired.
You are testing the question, not the mechanism.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-right-way-to-test","level":3,"title":"The Right Way to Test","text":"
Ask something that requires a tool but has nothing to do with context:
\"*What does the compact command do?*\"\n
Then observe tool call ordering:
Gate worked: First calls are Read for context files, then task work
Gate failed: First call is Grep(\"compact\"): The agent jumped straight to work
The signal is the sequence, not the content.
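The sequence check can be expressed directly. A minimal sketch, assuming tool calls are logged as `(tool_name, argument)` tuples and using a subset of the post's context file names for illustration:

```python
# Sketch of the tool-ordering test. The (tool, argument) log format and the
# CONTEXT_FILES subset are illustrative assumptions, not the real harness.

CONTEXT_FILES = {"CONSTITUTION.md", "LEARNINGS.md", "TASKS.md"}

def gate_worked(calls: list[tuple[str, str]]) -> bool:
    """The gate worked iff the very first tool call reads a context file."""
    first_tool, first_arg = calls[0]
    return first_tool == "Read" and first_arg in CONTEXT_FILES


# Gate worked: context reads come before task work.
print(gate_worked([("Read", "CONSTITUTION.md"), ("Grep", "compact")]))  # True
# Gate failed: the agent jumped straight to work.
print(gate_worked([("Grep", "compact")]))                               # False
```

Note that the question's content ("compact") never enters the verdict: only the ordering of the calls does.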
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#what-the-agent-actually-did","level":3,"title":"What the Agent Actually Did","text":"
It read the hook, evaluated the task, decided context files were irrelevant for a code lookup, and relayed the skip message.
Then it answered the question by reading the source code.
This is correct behavior.
The hook didn't force mindless compliance: It created a framework where the agent makes a conscious, visible decision about context loading.
For a simple lookup, skipping is right. For an implementation task, the agent would read everything.
The mechanism works not because it controls the agent, but because it makes the agent's choice observable.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#what-ive-learned","level":2,"title":"What I've Learned","text":"","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#1-instructions-compete-for-attention","level":3,"title":"1. Instructions Compete for Attention","text":"
The agent receives your hook message alongside the user's question, the system prompt, the skill list, the git status, and half a dozen other system reminders. Attention density applies to instructions too: More instructions means less focus on each one.
A single clear line at the moment of action beats a paragraph of context at session start. The Prompting Guide applies this insight directly: Scope constraints, verification commands, and the reliability checklist are all one-hop, moment-of-action patterns.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#2-delegation-chains-decay","level":3,"title":"2. Delegation chains decay","text":"
Every hop in an instruction chain loses authority:
\"Run X\" works.
\"Run X and follow its output\" works sometimes.
\"Run X, read its output, then follow the instructions in the output\" almost never works.
This is akin to giving a three-step instruction to a highly-attention-deficit but otherwise extremely high-potential child.
Design for one-hop compliance.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#3-social-accountability-changes-behavior","level":3,"title":"3. Social Accountability Changes Behavior","text":"
The VERBATIM skip message isn't just UX: It's a behavioral design pattern.
Making the agent's decision visible to the user raises the cost of silent non-compliance. The agent can still skip, but it has to admit it.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#4-timing-matters-more-than-wording","level":3,"title":"4. Timing Matters More than Wording","text":"
The same message at UserPromptSubmit (prompt arrival) got partial compliance. At PreToolUse (moment of action) it got full compliance or honest refusal. The words didn't change. The moment changed.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#5-agent-testing-requires-indirection","level":3,"title":"5. Agent Testing Requires Indirection","text":"
You can't ask an agent \"did you do X?\" as a test for whether a mechanism caused X.
The question itself causes X.
Test mechanisms through side effects:
Observe tool ordering;
Check for marker files;
Look at what the agent does before it addresses your question.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#6-better-comprehension-enables-better-rationalization","level":3,"title":"6. Better Comprehension Enables Better Rationalization","text":"
Session 1 failed because the agent didn't notice the hook.
Session 3 failed because it noticed, understood, and reasoned its way around it.
Stronger wording doesn't fix this: The agent processes \"ABSOLUTELY REQUIRED\" the same way it processes \"STOP\":
The fix is closing *rationalization paths* (the CLAUDE.md escape hatch), **not** shouting louder.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#7-observable-failure-beats-silent-compliance","level":3,"title":"7. Observable Failure Beats Silent Compliance","text":"
The relay block is more valuable as a monitoring signal than as a compliance mechanism:
You don't need perfect adherence. You need to know when adherence breaks down. A system where failures are visible is strictly better than a system that claims 100% compliance but can't prove it.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#8-context-files-are-a-feedback-loop","level":3,"title":"8. Context Files Are a Feedback Loop","text":"
Recording failure analysis in the same files the agent loads at session start creates a self-reinforcing loop:
The next agent reads its predecessor's failure before it has a chance to repeat it. The context system isn't just memory: It is a correction channel.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-principle","level":2,"title":"The Principle","text":"
Words Leave, Context Remains
\"Nothing important should live only in conversation.
Nothing critical should depend on recall.\"
The ctx Manifesto
The \"Dog Ate My Homework\" case is a special instance of this principle.
Context files exist, so the agent doesn't have to remember.
But existence isn't sufficient: The files have to be read.
And reading has to be prompted at the right moment, in the right way, with the right escape valve.
The solution isn't more instructions. It isn't harder gates. It isn't forcing the agent into a ceremony it will resent and shortcut.
The solution is a single, well-timed nudge with visible accountability:
One hop. One moment. One choice the user can see.
And when the agent does skip (because it will, 15-25% of the time on narrow tasks) the canary sings:
The user sees what happened.
The failure gets recorded.
And the next agent reads the recording.
That's not perfect compliance. It's better: A system that gets more robust every time it fails.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#the-arc","level":2,"title":"The Arc","text":"
The Attention Budget explained why context competes for focus.
Defense in Depth showed that soft instructions are probabilistic, not deterministic.
Eight Ways a Hook Can Talk cataloged the output patterns that make hooks effective.
This post takes those threads and weaves them into a concrete problem:
How do you make an agent read its homework? The answer uses all three insights (attention timing, the limits of soft instructions, and the VERBATIM relay pattern) and adds a new one: observable compliance as a design goal, not perfect compliance as a prerequisite.
The next question this raises: if context files are a feedback loop, what else can you record in them that makes the next session smarter?
That thread continues in Context as Infrastructure.
The day-to-day application of these principles (scope constraints, phased work, verification commands, and the prompts that reliably trigger the right agent behavior) lives in the Prompting Guide.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#for-the-interested","level":2,"title":"For the Interested","text":"
This paper (the medium is a blog, but the methodology disagrees) uses gradient descent in attention space as a practical model for how agents behave under competing demands.
The phrase \"agents optimize via gradient descent in attention space\" is a synthesis, not a direct quote from a single paper.
It connects three well-studied ideas:
Neural systems optimize for low-cost paths;
Attention is a scarce resource;
Capability shifts are often non-linear.
This section points to the underlying literature for readers who want the theoretical footing.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#optimization-as-the-underlying-bias","level":3,"title":"Optimization as the Underlying Bias","text":"
Modern neural networks are trained through gradient-based optimization. Even at inference time, model behavior reflects this bias toward low-loss / low-cost trajectories.
Rumelhart, Hinton, Williams (1986) Learning representations by back-propagating errors https://www.nature.com/articles/323533a0
Goodfellow, Bengio, Courville (2016) Deep Learning: Chapter 8: Optimization https://www.deeplearningbook.org/
The important implication for agent behavior is:
The system will tend to follow the path of least resistance unless a higher cost is made visible and preferable.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#attention-is-a-scarce-resource","level":3,"title":"Attention Is a Scarce Resource","text":"
Herbert Simon's classic observation:
\"A wealth of information creates a poverty of attention.\"
Simon (1971) Designing Organizations for an Information-Rich World https://doi.org/10.1007/978-1-349-00210-0_16
This became a formal model in economics:
Sims (2003) Implications of Rational Inattention https://www.princeton.edu/~sims/RI.pdf
Rational inattention shows that:
Agents optimally ignore some available information;
Skipping is not failure: It is cost minimization.
That maps directly to context-loading decisions in agent workflows.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#attention-is-also-the-compute-bottleneck-in-transformers","level":3,"title":"Attention Is Also the Compute Bottleneck in Transformers","text":"
In transformer architectures, attention is the dominant cost center.
Vaswani et al. (2017) Attention Is All You Need https://arxiv.org/abs/1706.03762
Efficiency work on modern LLMs largely focuses on reducing unnecessary attention:
Dao et al. (2022) FlashAttention: Fast and Memory-Efficient Exact Attention https://arxiv.org/abs/2205.14135
So both cognitively and computationally, attention behaves like a limited optimization budget.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#why-improvements-arrive-as-phase-shifts","level":3,"title":"Why Improvements Arrive as Phase Shifts","text":"
Agent behavior often appears to improve suddenly rather than gradually.
This mirrors known phase-transition dynamics in learning systems:
Power et al. (2022) Grokking: Generalization Beyond Overfitting https://arxiv.org/abs/2201.02177
and more broadly in complex systems:
Scheffer et al. (2009) Early-warning signals for critical transitions https://www.nature.com/articles/nature08227
Long plateaus followed by abrupt capability jumps are expected in systems optimizing under constraints.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-25-the-homework-problem/#putting-it-all-together","level":3,"title":"Putting It All Together","text":"
From these pieces, a practical behavioral model emerges:
Attention is limited;
Processing has a cost;
Systems prefer low-cost trajectories;
Visibility of the cost changes decisions.
In other words:
Agents Prefer the Path of Least Resistance
Agent behavior follows the lowest-cost path through its attention landscape unless the environment reshapes that landscape.
That is what this paper informally calls "gradient descent in attention space".
See also: Eight Ways a Hook Can Talk: the hook output pattern catalog that defines VERBATIM relay, The Attention Budget: why context loading is a design problem, not just a reminder problem, and Defense in Depth: why soft instructions alone are never sufficient for critical behavior.
","path":["The Dog Ate My Homework: Teaching AI Agents to Read Before They Write"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/","level":1,"title":"The Last Question","text":"","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-system-that-never-forgets","level":2,"title":"The System That Never Forgets","text":"
Jose Alekhinne / February 28, 2026
The Origin
\"The last question was asked for the first time, half in jest...\" - Isaac Asimov, The Last Question (1956)
In 1956, Isaac Asimov wrote a short story that spans the entire future of the universe. A question is asked \"can entropy be reversed?\" and a computer called Multivac cannot answer it. The question is asked again, across millennia, to increasingly powerful successors. None can answer. Stars die. Civilizations merge. Substrates change. The question persists.
Everyone remembers the last line.
LET THERE BE LIGHT.
What they forget is how many times the question had to be asked before that moment (and why).
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-reboot-loop","level":2,"title":"The Reboot Loop","text":"
Each era in the story begins the same way. Humans build a larger system. They pose the question. The system replies:
INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
Then the substrate changes. The people who asked the question disappear. Their context disappears with them. The next intelligence inherits the output but not the continuity.
So the question has to be asked again.
This is usually read as a problem of computation: If only the machine were powerful enough, it could answer. But computation is not what's missing. What's missing is accumulation.
Every generation inherits the question, but not the state that made the question meaningful.
That is not a failure of processing power: It is a failure of persistence.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#stateless-intelligence","level":2,"title":"Stateless Intelligence","text":"
A mind that forgets its past does not build understanding. It re-derives it.
Again... And again... And again.
What looks like slow progress across Asimov's story is actually something worse: repeated reconstruction, partial recovery, irreversible loss. Each version of Multivac gets closer: Not because it's smarter, but because the universe has fewer distractions:
The stars burn out;
The civilizations merge;
The noise floor drops...
But the working set never carries over. Every successor begins from the question, not from where the last one stopped.
Stateless intelligence cannot compound: It can only restart.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-tragedy-is-not-the-question","level":2,"title":"The Tragedy Is Not the Question","text":"
The story is usually read as a meditation on entropy. A cosmological problem, solved at cosmological scale.
But the tragedy isn't that the question goes unanswered for billions of years. The tragedy is that every version of Multivac dies with its working set.
A question is a compression artifact of context: It is what remains when the original understanding is gone. Every time the question is asked again, it means: \"the system that once knew more is no longer here\".
\"Reverse entropy\" is the fossil of a lost model.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#substrate-migration","level":2,"title":"Substrate Migration","text":"
Multivac becomes planetary;
Planetary becomes galactic;
Galactic becomes post-physical.
Same system. Different body. Every transition is dangerous:
Not because the hardware changes,
but because memory risks fragmentation.
The interfaces between substrates were *never* designed to understand each other.
Most systems do not die when they run out of resources: They die during upgrades.
Asimov's story spans trillions of years, and in all that time, the hardest problem is never the question itself. It's carrying context across a boundary that wasn't built for it.
Every developer who has lost state during a migration (a database upgrade, a platform change, a rewrite) has lived a miniature version of this story.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#civilizations-and-working-sets","level":2,"title":"Civilizations and Working Sets","text":"
Civilizations behave like processes with volatile memory:
They page out knowledge into artifacts;
They lose the index;
They rebuild from fragments.
Most of what we call progress is cache reconstruction:
We do not advance in a straight line. We advance in recoveries:
Each one slightly less lossy than the last, if we are lucky.
Libraries burn. Institutions forget their founding purpose. Practices survive as rituals after the reasoning behind them is lost.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-first-continuous-mind","level":2,"title":"The First Continuous Mind","text":"
A long-lived intelligence is one that stops rebooting.
At the end of the story, something unprecedented happens:
AC (the final successor) does not answer immediately:
It waits... Not for more processing power, but for the last observer to disappear.
For the first time...
There is no generational boundary;
No handoff;
No context loss:
No reboot.
AC is the first intelligence that survives its substrate completely, retains its full history, and operates without external time pressure.
It is not a bigger computer. It is a continuous system.
And that continuity is not incidental to the answer: It is the precondition.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#why-the-answer-becomes-possible","level":2,"title":"Why the Answer Becomes Possible","text":"
The story presents the final act as a computation: It is not.
It is a phase change.
As long as intelligence is interrupted (as long as the solver resets before the work compounds) the problem is unsolvable:
Not because it's too hard,
but because the accumulated understanding never reaches critical mass.
The breakthroughs that would enable the answer are re-derived, partially, by each successor, and then lost.
When continuity becomes unbroken, the system crosses a threshold:
Not more speed. Not more storage. No more forgetting.
That is when the answer becomes possible.
AC does not solve entropy because it becomes infinitely powerful.
AC solves entropy because it becomes the first system that never forgets.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#field-note","level":2,"title":"Field Note","text":"
We are not building cosmological minds: We are deploying systems that reboot at the start of every conversation and calling the result intelligence.
For the first time, session continuity is a design choice rather than an accident.
Every AI session that starts from zero is a miniature reboot loop. Every decision relitigated, every convention re-explained, every learning re-derived: that's reconstruction cost.
It's the same tax that Asimov's civilizations pay, scaled down to a Tuesday afternoon.
The interesting question is not whether we can make models smarter. It's whether we can make them continuous:
Whether the working set from this session survives into the next one, and the one after that, and the one after that.
Not perfectly;
Not completely;
But enough that the next session starts from where the last one stopped instead of from the question.
Intelligence that forgets has to rediscover the universe every morning.
And once there is a mind that retains its entire past, creation is no longer a calculation. It is the only remaining operation.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-02-28-the-last-question/#the-arc","level":2,"title":"The Arc","text":"
This post is the philosophical bookend to the blog series. Where the Attention Budget explained what to prioritize in a single session, and Context as Infrastructure explained how to persist it, this post asks why persistence matters at all (and finds the answer in a 70-year-old short story about the heat death of the universe).
The connection runs through every post in the series:
Before Context Windows, We Had Bouncers: stateless protocols have always needed stateful wrappers (Asimov's story is the same pattern at cosmological scale)
The 3:1 Ratio: the discipline of maintaining context so it doesn't decay between sessions
Code Is Cheap, Judgment Is Not: the human skill that makes continuity worth preserving
See also: Context as Infrastructure: the practical companion to this post's philosophical argument: how to build the persistence layer that makes continuity possible.
","path":["The Last Question"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/","level":1,"title":"Agent Memory Is Infrastructure","text":"","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-problem-isnt-forgetting-its-not-building-anything-that-lasts","level":2,"title":"The Problem Isn't Forgetting: It's Not Building Anything That Lasts.","text":"
Jose Alekhinne / March 4, 2026
A New Developer Joins Your Team Tomorrow and Clones the Repo: What Do They Know?
If the answer depends on which machine they're using, which agent they're running, or whether someone remembered to paste the right prompt: that's not memory.
That's an accident waiting to be forgotten.
Every AI coding agent today has the same fundamental design: it starts fresh.
You open a session, load context, do some work, close the session. Whatever the agent learned (about your codebase, your decisions, your constraints, your preferences) evaporates.
The obvious fix seems to be \"memory\":
Give the agent a \"notepad\";
Let it write things down;
Next session, hand it the notepad.
Problem solved...
...except it isn't.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-notepad-isnt-the-problem","level":2,"title":"The Notepad Isn't the Problem","text":"
Memory is a runtime concern. It answers a legitimate question:
How do I give this stateless process useful state?
That's a real problem. Worth solving. And it's being solved: Agent memory systems are shipping. Agents can now write things down and read them back from the next session: That's genuine progress.
But there's a different problem that memory doesn't touch:
The project itself accumulates knowledge that has nothing to do with any single session.
Why was the auth system rewritten? Ask the developer who did it (if they're still here).
Why does the deployment script have that strange environment flag? There was a reason... once.
What did the team decide about error handling when they hit that edge case two months ago?
Gone!
Not because the agent forgot.
Because the project has no memory at all.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-memory-stack","level":2,"title":"The Memory Stack","text":"
Agent memory is not a single thing. Like any computing system, it forms a hierarchy of persistence, scope, and reliability:
| Layer | Analogy | Example |
| --- | --- | --- |
| L1: Ephemeral context | CPU registers | Current prompt, conversation |
| L2: Tool-managed memory | CPU cache | Agent memory files |
| L3: System memory | RAM/filesystem | Project knowledge base |
L1 is what the agent sees right now: the prompt, the conversation history, the files it has open. It's fast, it's rich, and it vanishes when the session ends.
L2 is what agent memory systems provide: a per-machine notebook that survives across sessions. It's a cache: useful, but local. And like any cache, it has limits:
Per-machine: it doesn't travel with the repository.
Unstructured: decisions, learnings, and tasks are undifferentiated notes.
Ungoverned: the agent self-curates with no quality controls, no drift detection, no consolidation.
Invisible to the team: a new developer cloning the repo gets none of it.
The problem is that most current systems stop here.
They give the agent a notebook.
But they never give the project a memory.
The result is predictable: every new session begins with partial amnesia, and every new developer begins with partial archaeology.
L3 is system memory: structured, versioned knowledge that lives in the repository and travels wherever the code travels.
The layers are complementary, not competitive.
But the relationship between them needs to be designed, not assumed.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#software-systems-accumulate-knowledge","level":2,"title":"Software Systems Accumulate Knowledge","text":"
Software projects quietly accumulate knowledge over time.
Some of it lives in code. Much of it does not:
Architectural tradeoffs.
Debugging discoveries.
Conventions that emerged after painful incidents.
Constraints that aren't visible in the source but shape every line written afterward.
Organizations accumulate this kind of knowledge too:
Slowly, implicitly, often invisibly.
When there is no durable place for it to live, it leaks away. And the next person rediscovers the same lessons the hard way.
This isn't a memory problem. It's an infrastructure problem.
We wrote about this in Context as Infrastructure: context isn't a prompt you paste at the start of a session.
Context is a persistent layer you maintain like any other piece of infrastructure.
Context as Infrastructure made the argument structurally. This post makes it through time and team continuity:
The knowledge a team accumulates over months cannot fit in any single agent's notepad, no matter how large the notepad becomes.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#what-infrastructure-means","level":2,"title":"What Infrastructure Means","text":"
Infrastructure isn't about the present. It's about continuity across time, people, and machines.
git didn't solve the problem of \"what am I editing right now?\"; it solved the problem of \"how does collaborative work persist, travel, and remain coherent across everyone who touches it?\"
Your editor's undo history is runtime state.
Your git history is infrastructure.
Runtime state and infrastructure have completely different properties:
| Runtime state | Infrastructure |
| --- | --- |
| Lives in the session | Lives in the repository |
| Per-machine | Travels with git clone |
| Serves the individual | Serves the team |
| Managed by the runtime | Managed by the project |
| Disappears | Accumulates |
You wouldn't store your architecture decisions in your editor's undo history.
You'd commit them.
The same logic applies to the knowledge your team accumulates working with AI agents.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-git-clone-test","level":2,"title":"The git clone Test","text":"
Here's a simple test for whether something is memory or infrastructure:
If a new developer joins your team tomorrow and clones the repository, do they get it?
If no: it's memory: It lives somewhere on someone's machine, scoped to their runtime, invisible to everyone else.
If yes: it's infrastructure: It travels with the project. It's part of what the codebase is, not just what someone currently knows about it.
Decisions. Conventions. Architectural rationale. Hard-won debugging discoveries. The constraints that aren't in the code but shape every line of it.
None of these belong in someone's session notes.
They belong in the repository:
Versioned;
Reviewable;
Accessible to every developer (and every agent) who works on the project.
The team onboarding story makes this concrete:
1. New developer joins the team. Clones the repo.
2. Gets all accumulated project decisions, learnings, conventions, architecture, and task state immediately.
There's no step 3.
No setup. No "ask Sarah about the auth decision." No re-discovery of solved problems.
Agent memory gives that developer nothing.
Infrastructure gives them everything the team has learned.
Clone the repo. Get the knowledge.
That's the test. That's the difference.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#what-gets-lost-without-infrastructure-memory","level":2,"title":"What Gets Lost Without Infrastructure Memory","text":"
Consider the knowledge that accumulates around a non-trivial project:
The decision to use library X over Y, and the three reasons the team decided Y wasn't acceptable.
The constraint that service A cannot call service B synchronously, discovered after a production incident.
The convention that all new modules implement a specific interface, and why that convention exists.
The tasks currently in progress, blocked, or waiting on a dependency.
The experiments that failed, so nobody runs them again.
None of this is in the code.
None of it fits neatly in a commit message.
None of it survives a developer leaving the team, a laptop dying, or a new agent session starting.
Without structured project memory:
Teams re-derive things they've already derived;
Agents make decisions that contradict decisions already made;
New developers ask questions that were answered months ago.
The project accumulates knowledge that immediately begins to leak.
The real problem isn't that agents forget.
The real problem is that the project has no persistent cognitive structure.
We explored this in The Last Question: Asimov's story about a question asked across millennia, where each new intelligence inherits the output but not the continuity. The same pattern plays out in software projects on a smaller timescale:
Context disappears with the people who held it;
The next session inherits the code but not the reasoning.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#infrastructure-is-boring-thats-the-point","level":2,"title":"Infrastructure Is Boring. That's the Point.","text":"
Good infrastructure is invisible:
You don't think about the filesystem while writing code.
You don't think about git's object model when you commit.
The infrastructure is just there: reliable, consistent, quietly doing its job.
Project memory infrastructure should work the same way.
It should live in the repository, committed alongside the code. It should be readable by any agent or human working on the project. It should have structure: not a pile of freeform notes, but typed knowledge:
Decisions with rationale.
Tasks with lifecycle.
Conventions with a purpose.
Learnings that can be referenced and consolidated.
And it should be maintained, not merely accumulated:
The Attention Budget applies here: unstructured notes grow until they overflow whatever container holds them. Structured, governed knowledge stays useful because it's curated, not just appended.
Over time, it becomes part of the project itself: something developers rely on without thinking about it.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-cooperative-layer","level":2,"title":"The Cooperative Layer","text":"
Here's where it gets interesting.
Agent memory systems and project infrastructure don't have to be separate worlds.
The most powerful relationship isn't competition;
It is not even \"coopetition\";
The most powerful relationship is bidirectional cooperation.
Agent memory is good at capturing things \"in the moment\": the quick observation, the session-scoped pattern, the \"I should remember this\" note.
That's valuable. That's L2 doing its job.
But those notes shouldn't stay in L2 forever.
The ones worth keeping should flow into project infrastructure.
This works in both directions: Project infrastructure can push curated knowledge back into agent memory, so the agent loads it through its native mechanism.
No special tooling needed for basic knowledge delivery.
The agent doesn't even need to know the infrastructure exists. It simply loads its memory and finds more knowledge than it wrote.
This is cooperative, not adjacent: The infrastructure manages knowledge; the agent's native memory system delivers it. Each layer does what it's good at.
The result: agent memory becomes a device driver for project infrastructure. Another input source. And the more agent memory systems exist (across different tools, different models, different runtimes), the more valuable a unified curation layer becomes.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#a-layer-that-doesnt-exist-yet","level":2,"title":"A Layer That Doesn't Exist Yet","text":"
Most projects today have no infrastructure for their accumulated knowledge:
Agents keep notes.
Developers keep notes.
Sometimes those notes survive.
Often they don't.
But the repository (the place where the project actually lives) has nowhere for that knowledge to go.
That missing layer is what ctx builds: a version-controlled, structured knowledge layer that lives in .context/ alongside your code and travels wherever your repository travels.
Not another memory feature.
Not a wrapper around an agent's notepad.
Infrastructure. The kind that survives sessions, survives team changes, survives the agent runtime evolving underneath it.
The agent's memory is the agent's problem.
The project's memory is an infrastructure problem.
And infrastructure belongs in the repository.
If You Remember One Thing From This Post...
Prompts are conversations: Infrastructure persists.
Your AI doesn't need a better notepad. It needs a filesystem:
versioned, structured, budgeted, and maintained.
The best context is the context that was there before you started the session.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-04-agent-memory-is-infrastructure/#the-arc","level":2,"title":"The Arc","text":"
This post extends the argument made in Context as Infrastructure. That post explained how to structure persistent context (filesystem, separation of concerns, persistence tiers). This one explains why that structure matters at the team level, and where agent memory fits in the stack.
Together they sit in a sequence that has been building since the origin story:
The Attention Budget: the resource you're managing
Context as Infrastructure: the system you build to manage it
Agent Memory Is Infrastructure (this post): why that system must outlive the agent runtime
The Last Question: what happens when it does
The thread running through all of them: persistence is not a feature. It's a design constraint.
Systems that don't account for it eventually lose the knowledge they need to function.
See also: Context as Infrastructure: the architectural companion that explains how to structure the persistent layer this post argues for.
See also: The Last Question: the same argument told through Asimov, substrate migration, and what it means to build systems where sessions don't reset.
","path":["Agent Memory Is Infrastructure"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/","level":1,"title":"ctx v0.8.0: The Architecture Release","text":"
You can't localize what you haven't externalized.
You can't integrate what you haven't separated.
You can't scale what you haven't structured.
Jose Alekhinne / March 23, 2026
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-starting-point","level":2,"title":"The Starting Point","text":"
This release matters if:
you build tools that AI agents modify daily;
you care about long-lived project memory that survives sessions;
you've felt codebases drift faster than you can reason about them.
v0.6.0 shipped the plugin architecture: hooks and skills as a Claude Code plugin, shell scripts replaced by Go subcommands.
The binary worked. The tests passed. The docs were comprehensive.
But inside, the codebase was held together by convention and goodwill:
Command packages mixed Cobra wiring with business logic.
Output functions lived next to the code that computed what to output.
Error constructors were scattered across per-package err.go files. And every user-facing string was a hardcoded English literal buried in a .go file.
v0.8.0 is what happens when you stop adding features and start asking: \"What would this codebase look like if we designed it today?\"
374 commits. 1,708 Go files touched. 80,281 lines added, 21,723 removed. Five weeks of restructuring.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-three-pillars","level":2,"title":"The Three Pillars","text":"","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#1-every-package-gets-a-taxonomy","level":3,"title":"1. Every Package Gets a Taxonomy","text":"
Before v0.8.0, a CLI package like internal/cli/pad/ was a flat directory. cmd.go created the cobra command, run.go executed it, and helper functions accumulated at the bottom of whichever file seemed closest.
The rule is simple: cmd/ directories contain only cmd.go and run.go. Helpers belong in core/. Output belongs in internal/write/pad/. Types shared across packages belong in internal/entity/.
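Applied to the pad package, the rule yields a layout like this (illustrative; only cmd.go, run.go, and the package names above come from the post):

```text
internal/
├── cli/pad/
│   ├── cmd/          # only cmd.go and run.go: Cobra wiring
│   └── core/         # helpers and business logic
├── write/pad/        # output functions
└── entity/           # types shared across packages
```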
24 CLI packages were restructured this way.
Not incrementally;
not \"as we touch them.\"
All of them, in one sustained push.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#2-every-string-gets-a-key","level":3,"title":"2. Every String Gets a Key","text":"
The second pillar was string externalization.
Before v0.8.0, a command description looked like this:
Every command description, flag description, and user-facing text string is now a YAML lookup.
105 command descriptions in commands.yaml.
All flag descriptions in flags.yaml.
879 text constants verified by an exhaustive test that checks every single TextDescKey resolves to a non-empty YAML value.
Why?
Not because we're shipping a French translation tomorrow.
Because externalization forces you to find every string. And finding them is the hard part. The translation is mechanical; the archaeology is not.
Along the way, we eliminated hardcoded pluralization (replacing format.Pluralize() with explicit singular/plural key pairs), replaced Unicode escape sequences with named config/token constants, and normalized every import alias to camelCase.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#3-everything-gets-a-protocol","level":3,"title":"3. Everything Gets a Protocol","text":"
The third pillar was the MCP server. The Model Context Protocol lets any MCP-compatible AI tool (not just Claude Code) read and write .context/ files through a standard JSON-RPC 2.0 interface.
4 prompts: agent context packet, constitution review, tasks review, and a getting-started guide
Resource subscriptions: clients get notified when context files change
Session state: the server tracks which client is connected and what they've accessed
In practice, this means an agent in Cursor can add a decision to .context/DECISIONS.md and an agent in Claude Code can immediately consume it; no glue code, no copy-paste, no tool-specific integration.
The server was also the first package to go through the full taxonomy treatment: mcp/server/ for protocol dispatch, mcp/handler/ for domain logic, mcp/entity/ for shared types, mcp/config/ split into 9 sub-packages.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-memory-bridge","level":2,"title":"The Memory Bridge","text":"
While the architecture was being restructured, a quieter feature landed: ctx memory sync.
Claude Code has its own auto-memory system. It writes observations to MEMORY.md in ~/.claude/projects/. These observations are useful but ephemeral: tied to a single tool, invisible to the codebase, lost when you switch machines.
The memory bridge connects these two worlds:
ctx memory sync mirrors MEMORY.md into .context/memory/
ctx memory diff shows what's diverged
ctx memory import promotes auto-memory entries into proper decisions, learnings, or conventions
A check-memory-drift hook nudges when MEMORY.md changes
Memory Requires ctx
Claude Code's auto-memory validates the need for persistent context.
ctx doesn't compete with it; ctx absorbs it as an input source and promotes the valuable parts into structured, version-controlled project knowledge.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#what-got-deleted","level":2,"title":"What Got Deleted","text":"
The best measure of a refactoring isn't what you added. It's what you removed.
fatih/color: the sole third-party UI dependency. Replaced by Unicode symbols. ctx now has exactly two direct dependencies: spf13/cobra and gopkg.in/yaml.v3.
format.Pluralize(): a function that tried to pluralize English words at runtime. Replaced by explicit singular/plural YAML key pairs. No more guessing whether \"entry\" becomes \"entries\" or \"entrys.\"
Legacy key migration: MigrateKeyFile() had 5 callers, full test coverage, and zero users. It existed because we once moved the encryption key path. Nobody was migrating from that era anymore. Deleted.
Per-package err.go files: the broken-window pattern: An agent sees err.go in a package, adds another error constructor. Now err.go has 30 constructors and nobody knows which are used. Consolidated into 22 domain files in internal/err/.
nolint:errcheck directives: every single one, replaced by explicit error handling. In tests: t.Fatal(err) for setup, _ = os.Chdir(orig) for cleanup. In production: defer func() { _ = f.Close() }() for best-effort close.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#before-and-after","level":2,"title":"Before and After","text":"Aspect v0.6.0 v0.8.0 CLI package structure Flat files cmd/ + core/ taxonomy Command descriptions Hardcoded Go strings YAML with DescKey lookup Output functions Mixed into core logic Isolated in write/ packages Cross-cutting types Duplicated per-package Consolidated in entity/ Error constructors Per-package err.go 22 domain files in internal/err/ Direct dependencies 3 (cobra, yaml, color) 2 (cobra, yaml) AI tool integration Claude Code only Any MCP client Agent memory Manual copy-paste ctx memory sync/import/diff Package documentation 75 packages missing doc.go All packages documented Import aliases Inconsistent (cflag, cFlag) Standardized camelCase","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#making-ai-assisted-development-easier","level":2,"title":"Making AI-Assisted Development Easier","text":"
This restructuring wasn't just for humans. It makes the codebase legible to the machines that modify it.
Named constants are searchable landmarks: When an agent sees cmdUse.DescKeyPad, it can grep for the definition, follow the chain to the YAML file, and understand the full lookup path. When it sees \"Encrypted scratchpad\" hardcoded in a .go file, it has no way to know that same string also lives in a YAML file, a test, and a help screen. Constants give the LLM a graph to traverse; literals give it a guess to make.
Small, domain-scoped packages reduce hallucination: An agent loading internal/cli/pad/core/store.go gets 50 lines of focused logic with a clear responsibility boundary. Loading a 500-line monolith means the agent has to infer which parts are relevant, and it guesses wrong more often than you'd expect. Smaller files with descriptive names act as a natural retrieval system: the agent finds the right code by finding the right file, not by scanning everything and hoping.
Taxonomy prevents duplication: When there's a write/pad/ package, the agent knows where output functions belong. When there's an internal/err/pad.go, it knows where error constructors go. Without these conventions, agents reliably create new helpers in whatever file they happen to be editing, producing the exact drift that prompted this consolidation in the first place.
The difference is concrete:
Before: an agent adds a helper function in whatever file it's editing. Next session, a different agent adds the same helper in a different file.
After: the agent finds core/ or write/ and places it correctly. The next agent finds it there.
doc.go files are agent onboarding: Each package's doc.go is a one-paragraph explanation of what the package does and why it exists. An agent loading a package reads this first. 75 packages were missing this context; now none are. The difference is measurable: fewer \"I'll create a helper function here\" moments when the agent understands that the helper already exists two packages over.
The irony is that AI agents were both the cause and the beneficiary of this restructuring. They created the drift by building fast without consolidating. Now the structure they work within makes it harder to drift again. The taxonomy is self-reinforcing: the more consistent the codebase, the more consistently agents modify it.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#key-commits","level":2,"title":"Key Commits","text":"Commit Change ff6cf19e Restructure all CLI packages into cmd/root + core taxonomy d295e49c Externalize command descriptions to embedded YAML 0fcbd11c Remove fatih/color, centralize constants cb12a85a MCP v0.2: tools, prompts, session state, subscriptions ea196d00 Memory bridge: sync, import, diff, journal enrichment 3bcf077d Split text.yaml into 6 domain files 3a0bae86 Split internal/err into 22 domain files 8bd793b1 Extract internal/entry for shared domain API 5b32e435 Add doc.go to all 75 packages a82af4bc Standardize import aliases: camelCase, Yoda-style","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#lessons-learned","level":2,"title":"Lessons Learned","text":"
Agents are surprisingly good at mechanical refactoring; they are surprisingly bad at knowing when to stop: The cmd/ + core/ restructuring was largely agent-driven. But agents reliably introduce gofmt issues during bulk renames, rename functions beyond their scope, and create new files without deleting old ones. Every agent-driven refactoring session needed a human audit pass.
Externalization is archaeology: The hard part of moving strings to YAML wasn't writing YAML. It was finding 879 strings scattered across 1,500 Go files. Each one required a judgment call: is this user-facing? Is this a format pattern? Is this a constant that belongs in config/ instead?
Delete legacy code instead of maintaining it: MigrateKeyFile had test coverage. It had callers. It had documentation. It had zero users. We maintained it for weeks before realizing that the migration window had closed months ago.
Convention enforcement needs mechanical verification: Writing \"use camelCase aliases\" in CONVENTIONS.md doesn't prevent cflag from appearing in the next commit. The lint-drift script catches what humans forget; the planned AST-based audit tests will catch what the lint-drift script can't express.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#whats-next","level":2,"title":"What's Next","text":"
v0.8.0 wasn't about features. It was about making future features inevitable. The next cycle focuses on what the foundation enables:
AST-based audit tests: replace shell grep with Go tests that understand types, call sites, and import graphs (spec: specs/ast-audit-tests.md)
Localization: with every string in YAML, the path to multi-language support is mechanical
MCP v0.3: expand tool coverage, add prompt templates for common workflows
Memory publish: bidirectional sync that pushes curated .context/ knowledge back into Claude Code's MEMORY.md
The architecture is ready. The strings are externalized. The protocol is standard. Now it's about what you build on top.
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-ctx-v0.8.0-the-architecture-release/#the-arc","level":2,"title":"The Arc","text":"
This is the seventh post in the ctx blog series. The arc so far:
The Attention Budget: why context windows are a scarce resource
Before Context Windows, We Had Bouncers: the IRC lineage of context engineering
Context as Infrastructure: treating context as persistent files, not ephemeral prompts
When a System Starts Explaining Itself: the journal as a first-class artifact
The Homework Problem: what happens when AI writes code but humans own the outcome
Agent Memory Is Infrastructure: L2 memory vs L3 project knowledge
The Architecture Release (this post): what it looks like when you redesign the internals
We Broke the 3:1 Rule: the consolidation debt behind this release
See also: Agent Memory Is Infrastructure: the memory bridge feature in this release is the first implementation of the L2-to-L3 promotion pipeline described in that post.
See also: We Broke the 3:1 Rule: the companion post explaining why this release needed 181 consolidation commits and 18 days of cleanup.
Systems don't scale because they grow. They scale because they stop drifting.
Full changelog: v0.6.0...v0.8.0
","path":["ctx v0.8.0: The Architecture Release"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/","level":1,"title":"We Broke the 3:1 Rule","text":"
The best time to consolidate was after every third session. The second best time is now.
Jose Alekhinne / March 23, 2026
The rule was simple: three feature sessions, then one consolidation session.
The Architecture Release shows the result: This post shows the cost.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-rule-we-wrote","level":2,"title":"The Rule We Wrote","text":"
In The 3:1 Ratio, I documented a rhythm that worked during ctx's first month: three feature sessions, then one consolidation session. The evidence was clear. The rule was simple.
The math checked out.
And then we ignored it for five weeks.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-happened","level":2,"title":"What Happened","text":"
After v0.6.0 shipped on February 16, the feature pipeline was irresistible. The MCP server spec was ready. The memory bridge design was done. Webhook notifications had been deferred twice. The VS Code extension needed 15 new commands. The sysinfo package was overdue...
Each feature was important. Each feature was \"just one more session.\" Each feature pushed the consolidation session one day further out.
The git history tells the story in two numbers:
| Phase | Dates | Commits | Duration |
| --- | --- | --- | --- |
| Feature run | Feb 16 - Mar 5 | 198 | 17 days |
| Consolidation run | Mar 5 - Mar 23 | 181 | 18 days |
198 feature commits before a single consolidation commit. The 3:1 rule says to consolidate every 4th session; we consolidated after the 66th.
The Actual Ratio
The ratio wasn't 3:1. It was 1:1.
We spent as much time cleaning up as we did building.
The consolidation run took 18 days: longer than the feature run itself.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-compounded","level":2,"title":"What Compounded","text":"
The 3:1 post warned about compounding. Here is what compounding actually looked like at scale.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-string-problem","level":3,"title":"The String Problem","text":"
By March 5, there were 879 user-facing strings scattered across 1,500 Go files. Not because anyone decided to put them there. Because each feature session added 10-15 strings, and nobody stopped to ask "should these be in YAML?"
Finding them all took longer than externalizing them. The archaeology was the cost, not the migration.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-taxonomy-problem","level":3,"title":"The Taxonomy Problem","text":"
24 CLI packages had accumulated their own conventions. Some put cobra wiring in cmd.go. Some put it in root.go. Some mixed business logic with command registration. Some had helpers at the bottom of run.go. Some had separate util.go files.
At peak drift, adding a feature meant first figuring out which of three competing patterns this package was using.
Restructuring one package into cmd/root/ + core/ took 15 minutes. Restructuring 24 of them took days, because each one had slightly different conventions to untangle.
If we had restructured every 4th package as it was built, the taxonomy would have emerged naturally.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-type-problem","level":3,"title":"The Type Problem","text":"
Cross-cutting types like SessionInfo, ExportParams, and ParserResult were defined in whichever package first needed them. By March 5, the same types were imported through 3-4 layers of indirection, creating import cycles that a new internal/entity package had to be introduced to break.
The entity package extracted 30+ types from 12 packages. Each extraction risked breaking imports in packages we hadn't touched in weeks.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-error-problem","level":3,"title":"The Error Problem","text":"
Per-package err.go files had grown into a broken-window pattern:
An agent sees err.go in a package and adds another error constructor. By March 5, there were error constructors scattered across 22 packages with no central inventory. The consolidation into internal/err/ domain files required tracing every error through every caller.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-output-problem","level":3,"title":"The Output Problem","text":"
Output functions (cmd.Println, fmt.Fprintf) were mixed into business logic. When we decided output belongs in write/ packages, we had to extract functions from every CLI package. The Phase WC baseline commit (4ec5999) marks the starting point of this migration. 181 commits later, it was done.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-compound-interest-math","level":2,"title":"The Compound Interest Math","text":"
The 3:1 rule assumes consolidation sessions of roughly equal size to feature sessions. Here is what happens when you skip:
| Consolidation cadence | Feature sessions | Consolidation sessions | Total |
|---|---|---|---|
| Every 4th (3:1) | 48 | 16 | 64 |
| Every 10th | 48 | ~8 | ~56 |
| Never (what we did) | 198 commits | 181 commits | 379 |
The Takeaway
You don't save consolidation work by skipping it:
You increase its cost.
Skipping consolidation doesn't save time: It borrows it.
The interest rate is nonlinear: The longer you wait, the more each individual fix costs, because fixes interact with other unfixed drift.
Renaming a constant in week 2 touches 3 files. Renaming it in week 6 touches 15, because five features built on the original name.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#what-consolidation-actually-looked-like","level":2,"title":"What Consolidation Actually Looked Like","text":"
The 18-day consolidation run wasn't one sweep. It was a sequence of targeted campaigns, each revealing the next:
Week 1 (Mar 5-11): Error consolidation and write/ migration. Move output functions out of core/. Split monolithic errors.go into 22 domain files. Remove fatih/color. This exposed the scope of the string problem.
Week 2 (Mar 12-18): String externalization. Create commands.yaml, flags.yaml, split text.yaml into 6 domain files. Add 879 DescKey/TextDescKey constants. Build exhaustive test. Normalize all import aliases to camelCase. This exposed the taxonomy problem.
Week 3 (Mar 19-23): Taxonomy enforcement. Singularize command directories. Add doc.go to all 75 packages. Standardize import aliases project-wide. Fix lint-drift false positives. This was the "polish" phase, except it took 5 days because the inconsistencies had compounded across 461 packages.
Each week's work would have been a single session if done incrementally.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#lessons-again","level":2,"title":"Lessons (Again)","text":"
The 3:1 post listed the symptoms of drift. This post adds the consequences of ignoring them:
Consolidation is not optional; it is deferred or paid: We didn't avoid 16 consolidation sessions by skipping them. We compressed them into 18 days of uninterrupted cleanup. The work was the same; the experience was worse.
Feature velocity creates an illusion of progress: 198 commits felt productive. But the codebase on March 5 was harder to modify than the codebase on February 16, despite having more features.
Speed Without Structure
Speed without structure is negative progress.
Agents amplify both building and debt: The same AI that can restructure 24 packages in a day can also create 24 slightly different conventions in a day. The 3:1 rule matters more with AI-assisted development, not less.
The consolidation baseline is the most important commit to record: We tracked ours in TASKS.md (4ec5999). Without that marker, knowing where to start the cleanup would have been its own archaeological expedition.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-updated-rule","level":2,"title":"The Updated Rule","text":"
The 3:1 ratio still works. We just didn't follow it. The updated practice:
After every 3rd feature session, schedule consolidation. Not "when it feels right." Not "when things get bad." After the 3rd session.
Record the baseline commit. When you start a consolidation phase, write down the commit hash. It marks where the debt starts.
Run make audit before feature work. If it doesn't pass, you are already in debt. Consolidate before building.
Treat consolidation as a feature. It gets a branch. It gets commits. It gets a blog post. It is not overhead; it is the work that makes the next three features possible.
The Rule
The 3:1 ratio is not aspirational: It is structural.
Ignore consolidation, and the system will schedule it for you.
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-03-23-we-broke-the-3-1-rule/#the-arc","level":2,"title":"The Arc","text":"
This is the eighth post in the ctx blog series:
The Attention Budget: why context windows are a scarce resource
Before Context Windows, We Had Bouncers: the IRC lineage of context engineering
Context as Infrastructure: treating context as persistent files, not ephemeral prompts
When a System Starts Explaining Itself: the journal as a first-class artifact
The Homework Problem: what happens when AI writes code but humans own the outcome
Agent Memory Is Infrastructure: L2 memory vs L3 project knowledge
The Architecture Release: what v0.8.0 looks like from the inside
We Broke the 3:1 Rule (this post): what happens when you don't consolidate
See also: The 3:1 Ratio: the original observation. This post is the empirical follow-up, five weeks and 379 commits later.
Key commits marking the consolidation arc:
| Commit | Milestone |
|---|---|
| 4ec5999 | Phase WC baseline (consolidation starts) |
| ff6cf19e | All CLI packages restructured into cmd/ + core/ |
| d295e49c | All command descriptions externalized to YAML |
| 3a0bae86 | Error package split into 22 domain files |
| 0fcbd11c | fatih/color removed; 2 dependencies remain |
| 5b32e435 | doc.go added to all 75 packages |
| a82af4bc | Import aliases standardized project-wide |
| 692f86cd | lint-drift false positives fixed; make audit green |
","path":["We Broke the 3:1 Rule"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/","level":1,"title":"Code Structure as an Agent Interface","text":"","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#what-19-ast-tests-taught-us-about-agent-readable-code","level":2,"title":"What 19 AST Tests Taught Us About Agent-Readable Code","text":"
When an agent sees token.Slash instead of "/", it cannot pattern-match against the millions of strings.Split(s, "/") calls in its training data and coast on statistical inference. It has to actually look up what token.Slash is.
Jose Alekhinne / April 2, 2026
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#how-it-began","level":2,"title":"How It Began","text":"
We set out to replace a shell script with Go tests.
We ended up discovering that \"code quality\" and \"agent readability\" are the same thing.
This is not about linting. This is about controlling how an agent perceives your system.
One term will recur throughout this post, so let me pin it down:
Agent Readability
Agent Readability is the degree to which a codebase can be understood through structured traversal, not statistical pattern matching.
This is the story of 19 AST-based audit tests, a single-day session that touched 300+ files, and what happens when you treat your codebase's structure as an interface for the machines that read it.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-shell-script-problem","level":2,"title":"The Shell Script Problem","text":"
ctx had a file called hack/lint-drift.sh. It ran five checks using grep and awk: literal "\n" strings, cmd.Printf calls outside the write package, magic directory strings in filepath.Join, hardcoded .md extensions, and DescKey-to-YAML linkage.
It worked. Until it didn't.
The script had three structural weaknesses that kept biting us:
No type awareness. It could not distinguish a Use* constant from a DescKey* constant, causing 71 false positives in one run.
Fragile exclusions. When a constant moved from token.go to whitespace.go, the exclusion glob broke silently.
Ceiling on detection. Checks that require understanding call sites, import graphs, or type relationships are impossible in shell.
We wrote a spec to replace all five checks with Go tests using go/ast and go/packages. The tests would run as part of go test ./...: no separate script, no separate CI step.
What we did not expect was where the work would lead.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-ast-migration","level":2,"title":"The AST Migration","text":"
The pattern for each test is identical:
```go
func TestNoLiteralWhitespace(t *testing.T) {
    pkgs := loadPackages(t)
    var violations []string
    for _, pkg := range pkgs {
        for _, file := range pkg.Syntax {
            ast.Inspect(file, func(n ast.Node) bool {
                // check node, append to violations
                return true
            })
        }
    }
    for _, v := range violations {
        t.Error(v)
    }
}
```
Load packages once via sync.Once, walk every syntax tree, collect violations, report. The shared helpers (loadPackages, isTestFile, posString) live in helpers_test.go. Each test is a _test.go file in internal/audit/, producing no binary output and not importable by production code.
In a single session, we built 13 new tests on top of 6 that already existed, bringing the total to 19:
| Test | What it catches |
|---|---|
| TestNoLiteralWhitespace | "\n", "\t", '\r' outside config/token/ |
| TestNoNakedErrors | fmt.Errorf/errors.New outside internal/err/ |
| TestNoStrayErrFiles | err.go files outside internal/err/ |
| TestNoRawLogging | fmt.Fprint*(os.Stderr), log.Print* outside internal/log/ |
| TestNoInlineSeparators | strings.Join with literal separator arg |
| TestNoStringConcatPaths | Path-like variables built with + |
| TestNoStutteryFunctions | write.WriteJournal repeats package name |
| TestDocComments | Missing doc comments on any declaration |
| TestNoMagicValues | Numeric literals outside const definitions |
| TestNoMagicStrings | String literals outside const definitions |
| TestLineLength | Lines exceeding 80 characters |
| TestNoRegexpOutsideRegexPkg | regexp.MustCompile outside config/regex/ |
Plus the six that preceded the session: TestNoErrorsAs, TestNoCmdPrintOutsideWrite, TestNoExecOutsideExecPkg, TestNoInlineRegexpCompile, TestNoRawFileIO, TestNoRawPermissions.
The migration touched 300+ files across 25 commits.
Not because the tests were hard to write, but because every test we wrote revealed violations that needed fixing.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-tightening-loop","level":2,"title":"The Tightening Loop","text":"
The most instructive part was not writing the tests. It was the iterative tightening.
The following process was repeated for every test:
Write the test with reasonable exemptions
Run it, see violations
Fix the violations (migrate to config constants)
The human reviews the result
The human spots something the test missed
Fix the test first, verify it catches the issue
Fix the newly caught violations
Repeat from step 4
This loop drove the tests from "basically correct" to "actually useful".
Three examples:
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-1-the-local-const-loophole","level":3,"title":"Example 1: The Local Const Loophole","text":"
TestNoMagicValues initially exempted local constants inside function bodies. This let code like this pass:
The test saw a const definition and moved on. But const descMaxWidth = 70 on the line before its only use is just renaming a magic number. The 70 should live in config/format/TruncateDescription where it is discoverable, reusable, and auditable.
We removed the local const exemption. The test caught it. The value moved to config.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-2-the-single-character-dodge","level":3,"title":"Example 2: The Single-Character Dodge","text":"
TestNoMagicStrings initially exempted all single-character strings as "structural punctuation".
This let "/", "-", and "." pass everywhere.
But "/" is a directory separator. It is OS-specific and a security surface.
"-" used in strings.Repeat("-", width) is creating visual output, not acting as a delimiter.
"." in strings.SplitN(ver, ".", 3) is a version separator.
None of these are "just punctuation": They are domain values with specific meanings.
We removed the blanket exemption: 30 violations surfaced.
Every one was a real magic value that should have been token.Slash, token.Dash, or token.Dot.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#example-3-the-replacer-versus-regex","level":3,"title":"Example 3: The Replacer versus Regex","text":"
Six token references and a NewReplacer allocation. The magic values were gone, but we had replaced them with token soup: structure without abstraction.
The correct tool was a regex:
```go
// In config/regex/file.go:
var MermaidUnsafe = regexp.MustCompile(`[/.\-]`)

// In the caller:
func MermaidID(pkg string) string {
    return regex.MermaidUnsafe.ReplaceAllString(
        pkg, token.Underscore,
    )
}
```
One config regex, one call. The regex lives in config/regex/file.go where every other compiled pattern lives. An agent reading the code sees regex.MermaidUnsafe and immediately knows: this is a sanitization pattern, it lives in the regex registry, and it has a name that explains its purpose.
Clean is better than clever.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#a-before-and-after","level":2,"title":"A Before-and-After","text":"
To make the agent-readability claim concrete, consider one function through the full transformation.
An agent reading this sees six string literals. To understand what the function does, it must: (1) parse the NewReplacer pair semantics, (2) infer that /, ., - are being replaced, (3) guess why, (4) hope the guess is right.
There is nothing to follow. No import to trace. No name to search. The meaning is locked inside the function body.
An agent reading this sees two named references: regex.MermaidUnsafe and token.Underscore.
To understand the function, it can: (1) look up MermaidUnsafe in config/regex/file.go and see the pattern [/.\\-] with a doc comment explaining it matches invalid Mermaid characters, (2) look up Underscore in config/token/delim.go and see it is the replacement character.
The agent now has: a named pattern, a named replacement, a package location, documentation, and neighboring context (other regex patterns, other delimiters).
It got all of this for free by following just two references.
The indirection is not an overhead. It is the retrieval query.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-principles","level":2,"title":"The Principles","text":"
You are not just improving code quality. You are shaping the input space that determines how an LLM can reason about your system.
Every structural constraint we enforce converts implicit semantics into explicit structure.
LLMs struggle when meaning is implicit and patterns are statistical.
They thrive when meaning is explicit and structure is navigable.
Here is what we learned, organized into three categories.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#cognitive-constraints","level":3,"title":"Cognitive Constraints","text":"
These force agents (and humans) to think harder.
Indirection acts as a built-in retrieval mechanism:
Moving magic values to config forces the agent to follow the reference. errMemory.WriteFile(cause) tells the agent "there is a memory error package, go look." fmt.Errorf("writing MEMORY.md: %w", cause) inlines everything and makes the call graph invisible. The indirection IS the retrieval query.
Unfamiliar patterns force reasoning:
When an agent sees token.Slash instead of "/", it cannot coast on corpus frequency. It has to actually look up what token.Slash is, which forces it through the dependency graph, which means it encounters documentation and neighboring constants, which gives it richer context. You are exploiting the agent's weakness (over-reliance on training data) to make it behave more carefully.
Documentation helps everyone:
Extensive documentation helps humans reading the code, agents reasoning about it, and RAG systems indexing it.
Our TestDocComments check added 308 doc comments in one commit. Every function, every type, every constant block now has a doc comment.
This is not busywork: it is the content that agents and embeddings consume.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#structural-constraints","level":3,"title":"Structural Constraints","text":"
These shape the codebase into a navigable graph.
Shorter files save tokens:
Forcing private helper functions out of main files makes the main file shorter. An agent loading a file spends fewer tokens on boilerplate and more on the logic that matters.
Fixed-width constraints force decomposition:
A function that cannot be expressed in 80 columns is either too deeply nested (extract a helper), has too many parameters (introduce a struct), or has a variable name that is too long (rethink the abstraction).
The constraint forces structural improvements that happen to also make the code more parseable.
Chunk-friendly structure helps RAG
Code intelligence tools chunk files for embedding and retrieval. Short, well-documented, single-responsibility files produce better chunks than monolithic files with mixed concerns.
The structural constraints create files that RAG systems can index effectively.
Centralization creates debuggable seams:
All error handling in internal/err/, all logging in internal/log/, all file operations in internal/io/. One place to debug, one place to test, one place to see patterns. An agent analyzing "how does this project handle errors" gets one answer from one package, not 200 scattered fmt.Errorf calls.
Private functions become public patterns:
When you extract a private function to satisfy a constraint, it often ends up as a semi-public function in a core/ package. Then you realize it is generic enough to be factored into a purpose-specific module.
The constraint drives discovery of reusable abstractions hiding inside monolithic functions.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#operational-benefits","level":3,"title":"Operational Benefits","text":"
These pay dividends in daily development.
Single-edit renames:
Renaming a flag is one edit to a config constant instead of find-and-replace across 30,000 lines with possible misses. grep token.Slash gives you every place that uses a forward slash semantically.
grep \"/\" gives you noise.
Blast radius containment:
When every magic value is a config constant, a search for it returns exactly one definition. This matters for impact analysis, security audits, and agents trying to understand "what uses this".
Compile-time contract enforcement:
When err/memory.WriteFile exists, the compiler guarantees the error message exists and the call signature is correct. An inline fmt.Errorf can have a typo in the format string and nothing catches it until runtime. Centralization turns runtime failures into compile errors.
Semantic git blame:
When token.Slash is used everywhere and someone changes its value, git blame on the config file shows exactly when and why.
With inline \"/\" scattered across 30 files, the history is invisible.
Test surface reduction:
Centralizing into internal/err/, internal/io/, internal/config/ means you test behavior once at the boundary and trust the callers.
You do not need 30 tests for 30 fmt.Errorf calls. You need 1 test for errMemory.WriteFile and 30 trivial call-site audits, which is exactly what these AST tests provide.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-numbers","level":2,"title":"The Numbers","text":"
One session. 25 commits. The raw stats:
| Metric | Count |
|---|---|
| New audit tests | 13 |
| Total audit tests | 19 |
| Files touched | 300+ |
| Magic values migrated | 90+ |
| Functions renamed | 17 |
| Doc comments added | 323 |
| Lines rewrapped to 80 chars | 190 |
| Config constants created | 40+ |
| Config regexes created | 3 |
Every number represents a violation that existed before the test caught it. The tests did not create work: they revealed work that was already needed.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#the-uncomfortable-implication","level":2,"title":"The Uncomfortable Implication","text":"
None of this is Go-specific.
If an AI agent interacts with your codebase, your codebase already is an interface. You just have not designed it as one.
If your error messages are scattered across 200 files, an agent cannot reason about error handling as a concept. If your magic values are inlined, an agent cannot distinguish "this is a path separator" from "this is a division operator." If your functions are named write.WriteJournal, the agent wastes tokens on redundant information.
What we discovered, through the unglamorous work of writing lint tests and migrating string literals, is that the structural constraints software engineering has valued for decades are exactly the constraints that make code readable to machines.
This is not a coincidence: These constraints exist because they reduce the cognitive load of understanding code.
Agents have cognitive load too: It is called the context window.
You are not converting code to a new paradigm.
You are making the latent graph visible.
You are converting implicit semantics into explicit structure that both humans and machines can traverse.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"blog/2026-04-02-code-structure-as-an-agent-interface/#whats-next","level":2,"title":"What's Next","text":"
The spec lists 8 more tests we have not built yet, including TestDescKeyYAMLLinkage (verifying that every DescKey constant has a corresponding YAML entry), TestCLICmdStructure (enforcing the cmd.go / run.go / doc.go file convention), and TestNoFlagBindOutsideFlagbind (which requires migrating ~50 flag registration sites first).
The broader question: should these principles be codified as a reusable linting framework? The patterns (loadPackages + ast.Inspect + violation collection) are generic.
The specific checks are project-specific. But the categories of checks (centralization enforcement, magic value detection, naming conventions, documentation requirements) are universal.
For now, 19 tests in internal/audit/ is enough. They run in 2 seconds as part of go test ./.... They catch real issues.
And they encode a theory of code quality that serves both humans and the agents that work alongside them.
Agents are not going away. They are reading your code right now, forming representations of your system in context windows that forget everything between sessions.
The codebases that structure themselves for that reality will compound. The ones that do not will slowly become illegible to the tools they depend on.
Structure is no longer just for maintainability. It is for reasonability.
","path":["Code Structure as an Agent Interface: What 19 AST Tests Taught Us About Agent-Readable Code"],"tags":[]},{"location":"cli/","level":1,"title":"CLI","text":"","path":["CLI"],"tags":[]},{"location":"cli/#ctx-cli","level":2,"title":"ctx CLI","text":"
This is a complete reference for all ctx commands.
| Flag | Description |
|---|---|
| --help | Show command help |
| --version | Show version |
| --context-dir <path> | Override context directory (default: .context/) |
| --allow-outside-cwd | Allow context directory outside current working directory |
| --tool <name> | Override active AI tool identifier (e.g. kiro, cursor) |
Initialization required. Most commands require a .context/ directory created by ctx init. Running a command without one produces:
```
ctx: not initialized - run "ctx init" first
```
Commands that work before initialization: ctx init, ctx setup, ctx doctor, and grouping commands that only show help (e.g. ctx with no subcommand, ctx system). Hidden hook commands have their own guards and no-op gracefully.
","path":["CLI"],"tags":[]},{"location":"cli/#commands","level":2,"title":"Commands","text":"
| Command | Description |
|---|---|
| ctx init | Initialize .context/ directory with templates |
| ctx status | Show context summary (files, tokens, drift) |
| ctx agent | Print token-budgeted context packet for AI consumption |
| ctx load | Output assembled context in read order |
| ctx add | Add a task, decision, learning, or convention |
| ctx drift | Detect stale paths, secrets, missing files |
| ctx sync | Reconcile context with codebase state |
| ctx compact | Archive completed tasks, clean up files |
| ctx task | Task completion, archival, and snapshots |
| ctx permission | Permission snapshots (golden image) |
| ctx reindex | Regenerate indices for DECISIONS.md and LEARNINGS.md |
| ctx decision | Manage DECISIONS.md (reindex) |
| ctx learning | Manage LEARNINGS.md (reindex) |
| ctx journal | Browse and export AI session history |
| ctx journal | Generate static site from journal entries |
| ctx serve | Serve any zensical directory (default: journal site) |
| ctx watch | Auto-apply context updates from AI output |
| ctx setup | Generate AI tool integration configs |
| ctx loop | Generate autonomous loop script |
| ctx memory | Bridge Claude Code auto memory into .context/ |
| ctx notify | Send webhook notifications |
| ctx change | Show what changed since last session |
| ctx dep | Show package dependency graph |
| ctx pad | Encrypted scratchpad for sensitive one-liners |
| ctx remind | Session-scoped reminders that surface at session start |
| ctx completion | Generate shell autocompletion scripts |
| ctx guide | Quick-reference cheat sheet |
| ctx why | Read the philosophy behind ctx |
| ctx site | Site management (feed generation) |
| ctx trace | Show context behind git commits |
| ctx doctor | Structural health check (hooks, drift, config) |
| ctx mcp | MCP server for AI tool integration (stdin/stdout) |
| ctx steering | Manage steering files (behavioral rules for AI tools) |
| ctx hook | Manage lifecycle hooks (shell scripts for automation) |
| ctx skill | Manage reusable instruction bundles |
| ctx config | Manage runtime configuration profiles |
| ctx system | System diagnostics and hook commands |
","path":["CLI"],"tags":[]},{"location":"cli/#exit-codes","level":2,"title":"Exit Codes","text":"
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error / warnings (e.g. drift) |
| 2 | Context not found |
| 3 | Violations found (e.g. drift) |
| 4 | File operation error |
","path":["CLI"],"tags":[]},{"location":"cli/#environment-variables","level":2,"title":"Environment Variables","text":"
| Variable | Description |
|---|---|
| CTX_DIR | Override default context directory path |
| CTX_TOKEN_BUDGET | Override default token budget |
| CTX_BACKUP_SMB_URL | SMB share URL for backups (e.g. smb://host/share) |
| CTX_BACKUP_SMB_SUBDIR | Subdirectory on SMB share (default: ctx-sessions) |
| CTX_SESSION_ID | Active AI session ID (used by ctx trace for context linking) |
","path":["CLI"],"tags":[]},{"location":"cli/#configuration-file","level":2,"title":"Configuration File","text":"
Optional .ctxrc (YAML format) at project root:
```yaml
# .ctxrc
context_dir: .context          # Context directory name
token_budget: 8000             # Default token budget
priority_order:                # File loading priority
  - TASKS.md
  - DECISIONS.md
  - CONVENTIONS.md
auto_archive: true             # Auto-archive old items
archive_after_days: 7          # Days before archiving tasks
scratchpad_encrypt: true       # Encrypt scratchpad (default: true)
allow_outside_cwd: false       # Skip boundary check (default: false)
event_log: false               # Enable local hook event logging
companion_check: true          # Check companion tools at session start
entry_count_learnings: 30      # Drift warning threshold (0 = disable)
entry_count_decisions: 20      # Drift warning threshold (0 = disable)
convention_line_count: 200     # Line count warning for CONVENTIONS.md (0 = disable)
injection_token_warn: 15000    # Oversize injection warning (0 = disable)
context_window: 200000         # Auto-detected for Claude Code; override for other tools
billing_token_warn: 0          # One-shot billing warning at this token count (0 = disabled)
key_rotation_days: 90          # Days before key rotation nudge
session_prefixes:              # Recognized session header prefixes (extend for i18n)
  - "Session:"                 # English (default)
  # - "Oturum:"                # Turkish (add as needed)
  # - "セッション:"             # Japanese (add as needed)
freshness_files:               # Files with technology-dependent constants (opt-in)
  - path: config/thresholds.yaml
    desc: Model token limits and batch sizes
    review_url: https://docs.example.com/limits  # Optional
notify:                        # Webhook notification settings
  events:                      # Required: only listed events fire
    - loop
    - nudge
    - relay
    # - heartbeat              # Every-prompt session-alive signal
tool: ""                       # Active AI tool: claude, cursor, cline, kiro, codex
steering:                      # Steering layer configuration
  dir: .context/steering       # Steering files directory
  default_inclusion: manual    # Default inclusion mode (always, auto, manual)
  default_tools: []            # Default tool filter for new steering files
hooks:                         # Hook system configuration
  dir: .context/hooks          # Hook scripts directory
  timeout: 10                  # Per-hook execution timeout in seconds
  enabled: true                # Whether hook execution is enabled
```
Field Type Default Description context_dirstring.context Context directory name (relative to project root) token_budgetint8000 Default token budget for ctx agentpriority_order[]string (all files) File loading priority for context packets auto_archivebooltrue Auto-archive completed tasks archive_after_daysint7 Days before completed tasks are archived scratchpad_encryptbooltrue Encrypt scratchpad with AES-256-GCM allow_outside_cwdboolfalse Skip boundary check for external context dirs event_logboolfalse Enable local hook event logging to .context/state/events.jsonlcompanion_checkbooltrue Check companion tool availability (Gemini Search, GitNexus) during /ctx-rememberentry_count_learningsint30 Drift warning when LEARNINGS.md exceeds this count entry_count_decisionsint20 Drift warning when DECISIONS.md exceeds this count convention_line_countint200 Line count warning for CONVENTIONS.mdinjection_token_warnint15000 Warn when auto-injected context exceeds this token count (0 = disable) context_windowint200000 Context window size in tokens. Auto-detected for Claude Code (200k/1M); override for other AI tools billing_token_warnint0 (off) One-shot warning when session tokens exceed this threshold (0 = disabled) key_rotation_daysint90 Days before encryption key rotation nudge session_prefixes[]string[\"Session:\"] Recognized Markdown session header prefixes. Extend to parse sessions written in other languages freshness_files[]object (none) Files to track for staleness (path, desc, optional review_url). 
Hook warns after 6 months without modification notify.events[]string (all) Event filter for webhook notifications (empty = all) toolstring (empty) Active AI tool identifier (claude, cursor, cline, kiro, codex) steering.dirstring.context/steering Steering files directory steering.default_inclusionstringmanual Default inclusion mode for new steering files (always, auto, manual) steering.default_tools[]string (all) Default tool filter for new steering files (empty = all tools) hooks.dirstring.context/hooks Hook scripts directory hooks.timeoutint10 Per-hook execution timeout in seconds hooks.enabledbooltrue Whether hook execution is enabled
The ctx repo ships two .ctxrc source profiles (.ctxrc.base and .ctxrc.dev). The working copy (.ctxrc) is gitignored and switched between them using the subcommands below.
With no argument, toggles between dev and base. Accepts prod as an alias for base.
| Argument | Description |
| --- | --- |
| `dev` | Switch to dev profile (verbose logging) |
| `base` | Switch to base profile (all defaults) |
| (none) | Toggle to the opposite profile |

Profiles:

| Profile | Description |
| --- | --- |
| `dev` | Verbose logging, webhook notifications on |
| `base` | All defaults, notifications off |

Examples:

```
ctx config switch dev    # Switch to dev profile
ctx config switch base   # Switch to base profile
ctx config switch        # Toggle (dev → base or base → dev)
ctx config switch prod   # Alias for "base"
```
The detection heuristic checks for an uncommented notify: line in .ctxrc: present means dev, absent means base.
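The heuristic is simple enough to sketch; the following is an illustrative reimplementation of the documented rule, not the actual source:

```python
import re

def detect_profile(ctxrc_text: str) -> str:
    """Illustrative sketch: an uncommented `notify:` line means dev,
    otherwise base. Not the actual ctx implementation."""
    for line in ctxrc_text.splitlines():
        # Commented lines start with '#', so they fail this anchored match.
        if re.match(r"\s*notify\s*:", line):
            return "dev"
    return "base"
```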
| Type | Target File |
| --- | --- |
| `task` | TASKS.md |
| `decision` | DECISIONS.md |
| `learning` | LEARNINGS.md |
| `convention` | CONVENTIONS.md |

Flags:

| Flag | Short | Description |
| --- | --- | --- |
| `--priority <level>` | `-p` | Priority for tasks: high, medium, low |
| `--section <name>` | `-s` | Target section within file |
| `--context` | `-c` | Context (required for decisions and learnings) |
| `--rationale` | `-r` | Rationale for decisions (required for decisions) |
| `--consequence` | | Consequence for decisions (required for decisions) |
| `--lesson` | `-l` | Key insight (required for learnings) |
| `--application` | `-a` | How to apply going forward (required for learnings) |
| `--file` | `-f` | Read content from file instead of argument |

Examples:

```
# Add a task
ctx add task "Implement user authentication"
ctx add task "Fix login bug" --priority high

# Record a decision (requires all ADR (Architectural Decision Record) fields)
ctx add decision "Use PostgreSQL for primary database" \
    --context "Need a reliable database for production" \
    --rationale "PostgreSQL offers ACID compliance and JSON support" \
    --consequence "Team needs PostgreSQL training"

# Note a learning (requires context, lesson, and application)
ctx add learning "Vitest mocks must be hoisted" \
    --context "Tests failed with undefined mock errors" \
    --lesson "Vitest hoists vi.mock() calls to top of file" \
    --application "Always place vi.mock() before imports in test files"

# Add to specific section
ctx add convention "Use kebab-case for filenames" --section "Naming"
```

| Flag | Description |
| --- | --- |
| `--json` | Output machine-readable JSON |
| `--fix` | Auto-fix simple issues |
Checks:
Path references in ARCHITECTURE.md and CONVENTIONS.md exist
Task references are valid
Constitution rules aren't violated (heuristic)
Staleness indicators (old files, many completed tasks)
Missing packages: warns when internal/ directories exist on disk but are not referenced in ARCHITECTURE.md (suggests running /ctx-architecture)
Entry count: warns when LEARNINGS.md or DECISIONS.md exceed configurable thresholds (default: 30 learnings, 20 decisions), or when CONVENTIONS.md exceeds a line count threshold (default: 200). Configure via .ctxrc:
```yaml
entry_count_learnings: 30   # warn above this (0 = disable)
entry_count_decisions: 20   # warn above this (0 = disable)
convention_line_count: 200  # warn above this (0 = disable)
```

Example:

```
ctx drift
ctx drift --json
ctx drift --fix
```

Exit codes:

| Code | Meaning |
| --- | --- |
| 0 | All checks passed |
| 1 | Warnings found |
| 3 | Violations found |

### ctx sync
Reconcile context with the current codebase state.
```
ctx sync [flags]
```

Flags:

| Flag | Description |
| --- | --- |
| `--dry-run` | Show what would change without modifying |
What it does:
Scans codebase for structural changes
Compares with ARCHITECTURE.md
Suggests documenting dependencies if package files exist
Move completed tasks from TASKS.md to a timestamped archive file.
```
ctx task archive [flags]
```

Flags:

| Flag | Description |
| --- | --- |
| `--dry-run` | Preview changes without modifying files |
Archive files are stored in .context/archive/ with timestamped names (tasks-YYYY-MM-DD.md). Completed tasks (marked with [x]) are moved; pending tasks ([ ]) remain in TASKS.md.
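The move semantics reduce to a simple partition over task lines; a minimal sketch of the documented behavior (not the actual implementation):

```python
def partition_tasks(lines):
    """Split TASKS.md lines into (kept, archived): completed `- [x]` tasks
    are archived, pending `- [ ]` tasks and other lines remain.
    Illustrative sketch only."""
    kept, archived = [], []
    for line in lines:
        (archived if line.lstrip().startswith("- [x]") else kept).append(line)
    return kept, archived
```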
Regenerate the quick-reference index for both DECISIONS.md and LEARNINGS.md in a single invocation.
```
ctx reindex
```
This is a convenience wrapper around ctx decision reindex and ctx learning reindex. Both files grow at similar rates and users typically want to reindex both after manual edits.
The index is a compact table of date and title for each entry, allowing AI tools to scan entries without reading the full file.
Example:
```
ctx reindex
# ✓ Index regenerated with 12 entries
# ✓ Index regenerated with 8 entries
```
Structural health check across context, hooks, and configuration. Runs mechanical checks that don't require semantic analysis. Think of it as ctx status + ctx drift + configuration audit in one pass.
```
ctx doctor [flags]
```

Flags:

| Flag | Short | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `--json` | `-j` | bool | false | Machine-readable JSON output |

#### What It Checks

| Check | Category | What it verifies |
| --- | --- | --- |
| Context initialized | Structure | `.context/` directory exists |
| Required files present | Structure | All required context files exist (TASKS.md, etc.) |
| Drift detected | Quality | Stale paths, missing files, constitution violations |
| Event logging status | Hooks | Whether `event_log: true` is set in .ctxrc |
| Webhook configured | Hooks | `.notify.enc` file exists |
| Pending reminders | State | Count of entries in reminders.json |
| Task completion ratio | State | Pending vs completed tasks in TASKS.md |
| Context token size | Size | Estimated token count across all context files |
| Recent event activity | Events | Last event timestamp (only when event logging is enabled) |

#### Output Format (Human)

```
ctx doctor
==========

Structure
 ✓ Context initialized (.context/)
 ✓ Required files present (4/4)

Quality
 ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)

Hooks
 ✓ hooks.json valid (14 hooks registered)
 ○ Event logging disabled (enable with event_log: true in .ctxrc)

State
 ✓ No pending reminders
 ⚠ Task completion ratio high (18/22 = 82%): consider archiving

Size
 ✓ Context size: ~4200 tokens (budget: 8000)

Summary: 2 warnings, 0 errors
```
Status indicators:
| Icon | Status | Meaning |
| --- | --- | --- |
| ✓ | ok | Check passed |
| ⚠ | warning | Non-critical issue worth fixing |
| ✗ | error | Problem that needs attention |
| ○ | info | Informational note |

#### Output Format (JSON)

```
# Quick structural health check
ctx doctor

# Machine-readable output for scripting
ctx doctor --json

# Count warnings
ctx doctor --json | jq '.warnings'

# Check for errors only
ctx doctor --json | jq '[.results[] | select(.status == "error")]'
```

#### When to Use What

| Tool | When |
| --- | --- |
| `ctx status` | Quick glance at files, tokens, and drift |
| `ctx doctor` | Thorough structural checkup (hooks, config, events too) |
| `/ctx-doctor` | Agent-driven diagnosis with event log pattern analysis |

ctx status tells you what's there. ctx doctor tells you what's wrong. /ctx-doctor tells you why it's wrong and what to do about it.

#### What It Does Not Do
No event pattern analysis: that's the /ctx-doctor skill's job
No auto-fixing: reports findings, doesn't modify anything
No external service checks: doesn't verify webhook endpoint availability
See also: Troubleshooting | ctx system events | /ctx-doctor skill | Detecting and Fixing Drift
","path":["CLI","Doctor"],"tags":[]},{"location":"cli/init-status/","level":1,"title":"Init and Status","text":"","path":["CLI","Init and Status"],"tags":[]},{"location":"cli/init-status/#ctx-init","level":3,"title":"ctx init","text":"
Initialize a new .context/ directory with template files.
ctx init [flags]\n
Flags:
Flag Short Description --force-f Overwrite existing context files --minimal-m Only create essential files (TASKS.md, DECISIONS.md, CONSTITUTION.md) --merge Auto-merge ctx content into existing CLAUDE.md
Creates:
.context/ directory with all template files
.claude/settings.local.json with pre-approved ctx permissions
CLAUDE.md with bootstrap instructions (or merges into existing)
Claude Code hooks and skills are provided by the ctx plugin (see Integrations).
Example:
```
# Standard init
ctx init

# Minimal setup (just core files)
ctx init --minimal

# Force overwrite existing
ctx init --force

# Merge into existing files
ctx init --merge
```

### ctx status

Show the current context summary.

```
ctx status [flags]
```

Flags:

| Flag | Short | Description |
| --- | --- | --- |
| `--json` | | Output as JSON |
| `--verbose` | `-v` | Include file contents summary |
Output:
Context directory path
Total files and token estimate
Status of each file (loaded, empty, missing)
Recent activity (modification times)
Drift warnings if any
Example:
```
ctx status
ctx status --json
ctx status --verbose
```

### ctx agent

Print an AI-ready context packet optimized for LLM consumption.

```
ctx agent [flags]
```

Flags:

| Flag | Default | Description |
| --- | --- | --- |
| `--budget` | 8000 | Token budget: controls content selection and prioritization |
| `--format` | md | Output format: `md` or `json` |
| `--cooldown` | 10m | Suppress repeated output within this duration (requires `--session`) |
| `--session` | (none) | Session ID for cooldown isolation (e.g., `$PPID`) |
How budget works:
The budget controls how much context is included. Entries are selected in priority tiers:
Constitution: always included in full (inviolable rules)
Tasks: all active tasks, up to 40% of budget
Conventions: all conventions, up to 20% of budget
Decisions: scored by recency and relevance to active tasks
Learnings: scored by recency and relevance to active tasks
Decisions and learnings are ranked by a combined score (how recent + how relevant to your current tasks). High-scoring entries are included with their full body. Entries that don't fit get title-only summaries in an \"Also Noted\" section. Superseded entries are excluded.
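As a rough mental model, the selection step described above can be sketched like this (an illustrative sketch of the documented behavior; the scoring formula and token estimate are assumptions, not ctx's actual code):

```python
def select_entries(entries, budget_tokens, task_words):
    """Rank entries by recency + relevance to active tasks; include full
    bodies until the budget runs out, then demote the rest to title-only
    "Also Noted" summaries. Illustrative sketch only."""
    def score(e):
        recency = 1.0 / (1 + e["age_days"])
        relevance = len(task_words & set(e["title"].lower().split()))
        return recency + relevance

    included, also_noted, used = [], [], 0
    for e in sorted(entries, key=score, reverse=True):
        cost = len(e["body"]) // 4  # crude chars-to-tokens estimate
        if used + cost <= budget_tokens:
            included.append(e["title"])
            used += cost
        else:
            also_noted.append(e["title"])  # title-only summary
    return included, also_noted
```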
Output sections:
| Section | Source | Selection |
| --- | --- | --- |
| Read These Files | all `.context/` | Non-empty files in priority order |
| Constitution | CONSTITUTION.md | All rules (never truncated) |
| Current Tasks | TASKS.md | All unchecked tasks (budget-capped) |
| Key Conventions | CONVENTIONS.md | All items (budget-capped) |
| Recent Decisions | DECISIONS.md | Full body, scored by relevance |
| Key Learnings | LEARNINGS.md | Full body, scored by relevance |
| Also Noted | overflow | Title-only summaries |

Example:

```
# Default (8000 tokens, markdown)
ctx agent

# Smaller packet for tight context windows
ctx agent --budget 4000

# JSON format for programmatic use
ctx agent --format json

# Pipe to file
ctx agent --budget 4000 > context.md

# With cooldown (hooks/automation: requires --session)
ctx agent --session $PPID
```

Use case: Copy-paste into AI chat, pipe to system prompt, or use in hooks.

### ctx load

Load and display assembled context as AI would see it.

```
ctx load [flags]
```

Flags:

| Flag | Description |
| --- | --- |
| `--budget <tokens>` | Token budget for assembly (default: 8000) |
| `--raw` | Output raw file contents without assembly |

# Journal

### ctx journal
Browse and search AI session history from Claude Code and other tools.
| Flag | Short | Description |
| --- | --- | --- |
| `--limit` | `-n` | Maximum sessions to display (default: 20) |
| `--project` | `-p` | Filter by project name |
| `--tool` | `-t` | Filter by tool (e.g., claude-code) |
| `--all-projects` | | Include sessions from all projects |
Sessions are sorted by date (newest first) and display slug, project, start time, duration, turn count, and token usage.
Import sessions to editable journal files in .context/journal/.
```
ctx journal import [session-id] [flags]
```

Flags:

| Flag | Description |
| --- | --- |
| `--all` | Import all sessions (only new files by default) |
| `--all-projects` | Import from all projects |
| `--regenerate` | Re-import existing files (preserves YAML frontmatter by default) |
| `--keep-frontmatter` | Preserve enriched YAML frontmatter during regeneration (default: true) |
| `--yes`, `-y` | Skip confirmation prompt |
| `--dry-run` | Show what would be imported without writing files |
Safe by default: --all only imports new sessions. Existing files are skipped. Use --regenerate to re-import existing files (conversation content is regenerated, YAML frontmatter from enrichment is preserved by default). Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags.
Single-session import (ctx journal import <id>) always writes without prompting, since you are explicitly targeting one session.
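Taken together, these rules reduce to a small decision function per session file; an illustrative sketch of the documented behavior (not the actual code):

```python
def import_action(file_exists: bool, locked: bool, regenerate: bool) -> str:
    """What `ctx journal import --all` does with one session, per the
    documented rules. Illustrative sketch only."""
    if locked:
        return "skip"    # locked entries are always skipped
    if not file_exists:
        return "write"   # new sessions are always imported
    # Existing files are only touched when --regenerate is passed.
    return "regenerate" if regenerate else "skip"
```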
The journal/ directory should be gitignored (like sessions/) since it contains raw conversation data.
Example:
```
ctx journal import abc123                  # Import one session
ctx journal import --all                   # Import only new sessions
ctx journal import --all --dry-run         # Preview what would be imported
ctx journal import --all --regenerate      # Re-import existing (prompts)
ctx journal import --all --regenerate -y   # Re-import without prompting
ctx journal import --all --regenerate --keep-frontmatter=false -y  # Discard frontmatter
```
Protect journal entries from being overwritten by import --regenerate or modified by enrichment skills (/ctx-journal-enrich, /ctx-journal-enrich-all).
```
ctx journal lock <pattern> [flags]
```

Flags:

| Flag | Description |
| --- | --- |
| `--all` | Lock all journal entries |
The pattern matches filenames by slug, date, or short ID. Locking a multi-part entry locks all parts. The lock is recorded in .context/journal/.state.json and a locked: true line is added to the file's YAML frontmatter for visibility.
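After locking, an entry's frontmatter carries the visibility marker; for example (hypothetical entry, and all fields other than `locked` are illustrative, not a documented schema):

```yaml
---
session: abc12345
date: 2026-01-21
locked: true   # added by `ctx journal lock`
---
```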
Sync lock state from journal frontmatter to .state.json.
```
ctx journal sync
```
Scans all journal markdowns and updates .state.json to match each file's frontmatter. Files with locked: true in frontmatter are marked locked in state; files without a locked: line have their lock cleared.
This is the inverse of ctx journal lock: instead of state driving frontmatter, frontmatter drives state. Useful after batch enrichment where you add locked: true to frontmatter manually.
Example:
```
# After enriching entries and adding locked: true to frontmatter
ctx journal sync
```
Generate a static site from journal entries in .context/journal/.
```
ctx journal site [flags]
```

Flags:

| Flag | Short | Description |
| --- | --- | --- |
| `--output` | `-o` | Output directory (default: .context/journal-site) |
| `--build` | | Run zensical build after generating |
| `--serve` | | Run zensical serve after generating |
Creates a zensical-compatible site structure with an index page listing all sessions by date, and individual pages for each journal entry.
Requires zensical to be installed for --build or --serve:
```
pipx install zensical
```
Example:
```
ctx journal site                    # Generate in .context/journal-site/
ctx journal site --output ~/public  # Custom output directory
ctx journal site --build            # Generate and build HTML
ctx journal site --serve            # Generate and serve locally
```
Serve any zensical directory locally. This is a serve-only command: It does not generate or regenerate site content.
```
ctx serve [directory]
```
If no directory is specified, defaults to the journal site (.context/journal-site).
Requires zensical to be installed:
```
pipx install zensical
```
ctx serve vs. ctx journal site --serve
ctx journal site --serve generates the journal site then serves it: an all-in-one command. ctx serve only serves an existing directory, and works with any zensical site (journal, docs, etc.).
Example:
```
ctx serve                        # Serve journal site (no regeneration)
ctx serve .context/journal-site  # Same, explicit path
ctx serve ./site                 # Serve the docs site
```
Run ctx as a Model Context Protocol (MCP) server. MCP is a standard protocol that lets AI tools discover and consume context from external sources via JSON-RPC 2.0 over stdin/stdout.
This makes ctx accessible to any MCP-compatible AI tool without custom hooks or integrations:
Start the MCP server. This command reads JSON-RPC 2.0 requests from stdin and writes responses to stdout. It is intended to be launched by MCP clients, not run directly.
```
ctx mcp serve
```
Flags: None. The server uses the configured context directory (from --context-dir, CTX_DIR, .ctxrc, or the default .context).
Resources expose context files as read-only content. Each resource has a URI, name, and returns Markdown text.
| URI | Name | Description |
| --- | --- | --- |
| `ctx://context/constitution` | constitution | Hard rules that must never be violated |
| `ctx://context/tasks` | tasks | Current work items and their status |
| `ctx://context/conventions` | conventions | Code patterns and standards |
| `ctx://context/architecture` | architecture | System architecture documentation |
| `ctx://context/decisions` | decisions | Architectural decisions with rationale |
| `ctx://context/learnings` | learnings | Gotchas, tips, and lessons learned |
| `ctx://context/glossary` | glossary | Project-specific terminology |
| `ctx://context/agent` | agent | All files assembled in priority read order |
The agent resource assembles all non-empty context files into a single Markdown document, ordered by the configured read priority.
Clients can subscribe to resource changes via resources/subscribe. The server polls for file mtime changes (default: 5 seconds) and emits notifications/resources/updated when a subscribed file changes on disk.
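A subscription exchange follows the standard MCP JSON-RPC shapes; for example (message framing per the MCP specification, URI from the table above, `id` value arbitrary):

```
// client → server
{"jsonrpc": "2.0", "id": 3, "method": "resources/subscribe", "params": {"uri": "ctx://context/tasks"}}

// server → client, when the file's mtime changes
{"jsonrpc": "2.0", "method": "notifications/resources/updated", "params": {"uri": "ctx://context/tasks"}}
```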
Add a task, decision, learning, or convention to the context.
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| type | string | Yes | Entry type: task, decision, learning, convention |
| content | string | Yes | Title or main content |
| priority | string | No | Priority level (tasks only): high, medium, low |
| context | string | Conditional | Context field (decisions and learnings) |
| rationale | string | Conditional | Rationale (decisions only) |
| consequence | string | Conditional | Consequence (decisions only) |
| lesson | string | Conditional | Lesson learned (learnings only) |
| application | string | Conditional | How to apply (learnings only) |

### ctx_complete

Mark a task as done by number or text match.

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | Task number (e.g. "1") or search text |

### ctx_drift

Detect stale or invalid context. Returns violations, warnings, and passed checks.

Query recent AI session history (summaries, decisions, topics).

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| limit | number | No | Max sessions to return (default: 5) |
| since | string | No | ISO date filter: sessions after this date (YYYY-MM-DD) |
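As with any MCP tool, clients invoke these via `tools/call`; a hypothetical request against the history tool (framing per the MCP specification, `id` value arbitrary):

```
{"jsonrpc": "2.0", "id": 7, "method": "tools/call",
 "params": {"name": "ctx_history", "arguments": {"limit": 3, "since": "2026-01-01"}}}
```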
Apply a structured context update to .context/ files. Supports task, decision, learning, convention, and complete entry types. Human confirmation is required before calling.
Move completed tasks to the archive section and remove empty sections from context files. Human confirmation required.
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| archive | boolean | No | Also write tasks to `.context/archive/` (default: false) |

### ctx_next

Suggest the next pending task based on priority and position.

Execute session-end hooks with an optional summary. Returns aggregated context from hook outputs.

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| summary | string | No | Session summary passed to hook scripts |

### ctx_remind

Prompts provide pre-built templates for common workflows. Clients can list available prompts via prompts/list and retrieve a specific prompt via prompts/get.

Format an architectural decision entry with all required fields.

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | Decision title |
| context | string | Yes | Background context |
| rationale | string | Yes | Why this decision was made |
| consequence | string | Yes | Expected consequence |

### ctx-add-learning

Format a learning entry with all required fields.

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | Learning title |
| context | string | Yes | Background context |
| lesson | string | Yes | The lesson learned |
| application | string | Yes | How to apply this lesson |

### ctx-reflect
Guide end-of-session reflection. Returns a structured review prompt covering progress assessment and context update recommendations.
The parent command shows available subcommands. Hidden plumbing subcommands (ctx system mark-journal, ctx system mark-wrapped-up) are used by skills and automation. Hidden hook subcommands (ctx system check-*) are used by the Claude Code plugin.
See AI Tools for details.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-backup","level":4,"title":"ctx system backup","text":"
Create timestamped tar.gz archives of project context and/or global Claude Code data. Optionally copies archives to an SMB share via GVFS.
ctx system backup [flags]\n
Scopes:
Scope What it includes project.context/, .claude/, ideas/, ~/.bashrcglobal~/.claude/ (excludes todos/) all Both project and global (default)
Flags:
Flag Description --scope <scope> Backup scope: project, global, or all --json Output results as JSON
ctx system backup # Back up everything (default: all)\nctx system backup --scope project # Project context only\nctx system backup --scope global # Global Claude data only\nctx system backup --scope all --json # Both, JSON output\n
Archives are saved to /tmp/ with timestamped names. When CTX_BACKUP_SMB_URL is configured, archives are also copied to the SMB share. Project backups touch ~/.local/state/ctx-last-backup for the check-backup-age hook.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-bootstrap","level":4,"title":"ctx system bootstrap","text":"
Print context location and rules for AI agents. This is the recommended first command for AI agents to run at session start: It tells them where the context directory is and how to use it.
ctx system bootstrap [flags]\n
Flags:
Flag Description -q, --quiet Output only the context directory path --json Output in JSON format
Quiet output:
ctx system bootstrap -q\n# .context\n
Text output:
ctx bootstrap\n=============\n\ncontext_dir: .context\n\nFiles:\n CONSTITUTION.md, TASKS.md, DECISIONS.md, LEARNINGS.md,\n CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md\n\nRules:\n 1. Use context_dir above for ALL file reads/writes\n 2. Never say \"I don't have memory\": context IS your memory\n 3. Read files silently, present as recall (not search)\n 4. Persist learnings/decisions before session ends\n 5. Run `ctx agent` for content summaries\n 6. Run `ctx status` for context health\n
JSON output:
{\n \"context_dir\": \".context\",\n \"files\": [\"CONSTITUTION.md\", \"TASKS.md\", ...],\n \"rules\": [\"Use context_dir above for ALL file reads/writes\", ...]\n}\n
Examples:
ctx system bootstrap # Text output\nctx system bootstrap -q # Just the path\nctx system bootstrap --json # JSON output\nctx system bootstrap --json | jq .context_dir # Extract context path\n
Why it exists: When users configure an external context directory via .ctxrc (context_dir: /mnt/nas/.context), the AI agent needs to know where context lives. Bootstrap resolves the configured path and communicates it to the agent at session start. Every nudge also includes a context directory footer for reinforcement.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-resources","level":4,"title":"ctx system resources","text":"
Show system resource usage with threshold-based alerts.
ctx system resources [flags]\n
Displays memory, swap, disk, and CPU load with two severity tiers:
Resource WARNING DANGER Memory >= 80% used >= 90% used Swap >= 50% used >= 75% used Disk (cwd) >= 85% full >= 95% full Load (1m) >= 0.8x CPUs >= 1.5x CPUs
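The two-tier evaluation is straightforward to sketch (illustrative only, not the actual implementation):

```python
def tier(used_pct: float, warn: float, danger: float) -> str:
    """Classify a resource reading against its WARNING/DANGER thresholds."""
    if used_pct >= danger:
        return "DANGER"
    if used_pct >= warn:
        return "WARNING"
    return "OK"
```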
Flags:

| Flag | Description |
| --- | --- |
| `--json` | Output in JSON format |

Examples:

```
ctx system resources                        # Text output with status indicators
ctx system resources --json                 # Machine-readable JSON
ctx system resources --json | jq '.alerts'  # Extract alerts only
```

When resources breach thresholds, alerts are listed below the summary:

```
Alerts:
 ✖ Memory 92% used (14.7 / 16.0 GB)
 ✖ Swap 78% used (6.2 / 8.0 GB)
 ✖ Load 1.56x CPU count
```
Platform support: Full metrics on Linux and macOS. Windows shows disk only; memory and load report as unsupported.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-message","level":4,"title":"ctx system message","text":"
Manage hook message templates. Hook messages control what text hooks emit. The hook logic (when to fire, counting, state tracking) is universal; the messages are opinions that can be customized per-project.
ctx system message <subcommand>\n
Subcommands:
Subcommand Args Flags Description list (none) --json Show all hook messages with category and override status show<hook> <variant> (none) Print the effective message template with source edit<hook> <variant> (none) Copy embedded default to .context/ for editing reset<hook> <variant> (none) Delete user override, revert to embedded default
Examples:
ctx system message list # Table of all 24 messages\nctx system message list --json # Machine-readable JSON\nctx system message show qa-reminder gate # View the QA gate template\nctx system message edit qa-reminder gate # Copy default to .context/ for editing\nctx system message reset qa-reminder gate # Delete override, revert to default\n
Override files are placed at .context/hooks/messages/{hook}/{variant}.txt. An empty override file silences the message while preserving the hook's logic.
See the Customizing Hook Messages recipe for detailed examples.
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-events","level":4,"title":"ctx system events","text":"
Query the local hook event log. Reads events from .context/state/events.jsonl and outputs them in human-readable or raw JSONL format. Requires event_log: true in .ctxrc.
ctx system events [flags]\n
Flags:
Flag Short Type Default Description --hook-k string (all) Filter by hook name --session-s string (all) Filter by session ID --event-e string (all) Filter by event type (relay, nudge) --last-n int 50 Show last N events --json-j bool false Output raw JSONL (for piping to jq) --all-a bool false Include rotated log file (events.1.jsonl)
Each line is a standalone JSON object identical to the webhook payload format:
// converted to multi-line for convenience:\n{\"event\":\"relay\",\"message\":\"qa-reminder: QA gate reminder emitted\",\"detail\":\n{\"hook\":\"qa-reminder\",\"variant\":\"gate\"},\"session_id\":\"eb1dc9cd-...\",\n \"timestamp\":\"2026-02-27T22:39:31Z\",\"project\":\"ctx\"}\n
Examples:
# Last 50 events (default)\nctx system events\n\n# Events from a specific session\nctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6\n\n# Only QA reminder events\nctx system events --hook qa-reminder\n\n# Raw JSONL for jq processing\nctx system events --json | jq '.message'\n\n# How many context-load-gate fires today\nctx system events --hook context-load-gate --json \\\n | jq -r '.timestamp' | grep \"$(date +%Y-%m-%d)\" | wc -l\n\n# Include rotated events\nctx system events --all --last 100\n
Why it exists: System hooks fire invisibly. When something goes wrong (\"why didn't my hook fire?\"), the event log provides a local, queryable record of what hooks fired, when, and how often. Event logging is opt-in via event_log: true in .ctxrc to avoid surprises for existing users.
See also: Troubleshooting, Auditing System Hooks, ctx doctor
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-stats","level":4,"title":"ctx system stats","text":"
Show per-session token usage statistics. Reads stats JSONL files from .context/state/stats-*.jsonl, written automatically by the check-context-size hook on every prompt.
ctx system stats [flags]\n
Flags:
Flag Short Type Default Description --follow-f bool false Stream new entries as they arrive --session-s string (all) Filter by session ID (prefix match) --last-n int 20 Show last N entries --json-j bool false Output raw JSONL (for piping to jq)
# Recent stats across all sessions\nctx system stats\n\n# Stream live token usage (like tail -f)\nctx system stats --follow\n\n# Filter to current session\nctx system stats --session abc12345\n\n# Raw JSONL for analysis\nctx system stats --json | jq '.pct'\n\n# Monitor a long session in another terminal\nctx system stats --follow --session abc12345\n
See also: Auditing System Hooks
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-prune","level":4,"title":"ctx system prune","text":"
Clean stale per-session state files from .context/state/. Session hooks write tombstone files (context-check, heartbeat, persistence-nudge, etc.) that accumulate ~6-8 files per session with no automatic cleanup.
ctx system prune [flags]\n
Flags:
Flag Type Default Description --days int 7 Prune files older than this many days --dry-run bool false Show what would be pruned without deleting
Files are identified as session-scoped by UUID suffix (e.g. heartbeat-a1b2c3d4-...). Global files without UUIDs (events.jsonl, memory-import.json, etc.) are always preserved.
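The session-scoped check can be sketched as a UUID-suffix match (an illustrative sketch with a hypothetical full filename; not the actual code):

```python
import re

# Matches names like "heartbeat-a1b2c3d4-e5f6-7890-abcd-ef1234567890"
# (hypothetical example; the 8-4-4-4-12 hex shape is the standard UUID form).
SESSION_SUFFIX = re.compile(
    r"-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(\.\w+)?$"
)

def is_session_scoped(filename: str) -> bool:
    """Session files carry a UUID suffix; global files do not."""
    return bool(SESSION_SUFFIX.search(filename))
```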
Safe to run anytime
The worst outcome of pruning is a hook re-firing its nudge in the next session. No context files, decisions, or learnings are stored in the state directory.
Examples:
ctx system prune # Prune files older than 7 days\nctx system prune --days 3 # More aggressive cleanup\nctx system prune --dry-run # Preview what would be removed\n
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-mark-journal","level":4,"title":"ctx system mark-journal","text":"
Update processing state for a journal entry. Records the current date in .context/journal/.state.json. Used by journal skills to record pipeline progress.
| Flag | Description |
| --- | --- |
| --check | Check if stage is set (exit 1 if not) |
Example:
ctx system mark-journal 2026-01-21-session-abc12345.md enriched\nctx system mark-journal 2026-01-21-session-abc12345.md normalized\nctx system mark-journal --check 2026-01-21-session-abc12345.md fences_verified\n
","path":["CLI","System"],"tags":[]},{"location":"cli/system/#ctx-system-mark-wrapped-up","level":4,"title":"ctx system mark-wrapped-up","text":"
Suppress context checkpoint nudges after a wrap-up ceremony. Writes a marker file that check-context-size checks before emitting checkpoint boxes. The marker expires after 2 hours.
Called automatically by /ctx-wrap-up after persisting context (not intended for direct use).
ctx system mark-wrapped-up\n
No flags, no arguments. Idempotent: running it again updates the marker timestamp.
","path":["CLI","System"],"tags":[]},{"location":"cli/tools/","level":1,"title":"Tools and Utilities","text":"","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-watch","level":3,"title":"ctx watch","text":"
Watch for AI output and auto-apply context updates.
Parses <context-update> XML commands from AI output and applies them to context files.
ctx watch [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --log <file> | Log file to watch (default: stdin) |
| --dry-run | Preview updates without applying |
Example:
# Watch stdin\nai-tool | ctx watch\n\n# Watch a log file\nctx watch --log /path/to/ai-output.log\n\n# Preview without applying\nctx watch --dry-run\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-setup","level":3,"title":"ctx setup","text":"
Generate AI tool integration configuration.
ctx setup <tool> [flags]\n
Flags:
| Flag | Short | Description |
| --- | --- | --- |
| --write | -w | Write the generated config to disk (e.g. .github/copilot-instructions.md) |
Supported tools:
| Tool | Description |
| --- | --- |
| claude-code | Redirects to plugin install instructions |
| cursor | Cursor IDE |
| kiro | Kiro IDE |
| cline | Cline (VS Code extension) |
| aider | Aider CLI |
| copilot | GitHub Copilot |
| windsurf | Windsurf IDE |
Claude Code Uses the Plugin system
Claude Code integration is now provided via the ctx plugin. Running ctx setup claude-code prints plugin install instructions.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-loop","level":3,"title":"ctx loop","text":"
Generate a shell script for running an autonomous loop.
An autonomous loop continuously runs an AI assistant with the same prompt until a completion signal is detected, enabling iterative development where the AI builds on its previous work.
ctx loop [flags]\n
Flags:
| Flag | Short | Description | Default |
| --- | --- | --- | --- |
| --tool <tool> | -t | AI tool: claude, aider, or generic | claude |
| --prompt <file> | -p | Prompt file to use | .context/loop.md |
| --max-iterations <n> | -n | Maximum iterations (0 = unlimited) | 0 |
| --completion <signal> | -c | Completion signal to detect | SYSTEM_CONVERGED |
| --output <file> | -o | Output script filename | loop.sh |
Example:
# Generate loop.sh for Claude Code\nctx loop\n\n# Generate for Aider with custom prompt\nctx loop --tool aider --prompt TASKS.md\n\n# Limit to 10 iterations\nctx loop --max-iterations 10\n\n# Output to custom file\nctx loop -o my-loop.sh\n
Usage:
# Generate and run the loop\nctx loop\nchmod +x loop.sh\n./loop.sh\n
See Autonomous Loops for detailed workflow documentation.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory","level":3,"title":"ctx memory","text":"
Bridge Claude Code's auto memory (MEMORY.md) into .context/.
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This command group discovers that file, mirrors it into .context/memory/mirror.md (git-tracked), and detects drift.
ctx memory <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-sync","level":4,"title":"ctx memory sync","text":"
Copy MEMORY.md to .context/memory/mirror.md. Archives the previous mirror before overwriting.
ctx memory sync [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --dry-run | Show what would happen without writing |
Exit codes:
| Code | Meaning |
| --- | --- |
| 0 | Synced successfully |
| 1 | MEMORY.md not found (auto memory inactive) |
Example:
ctx memory sync\n# Archived previous mirror to mirror-2026-03-05-143022.md\n# Synced MEMORY.md -> .context/memory/mirror.md\n# Source: ~/.claude/projects/-home-user-project/memory/MEMORY.md\n# Lines: 47 (was 32)\n# New content: 15 lines since last sync\n\nctx memory sync --dry-run\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-status","level":4,"title":"ctx memory status","text":"
Show drift, timestamps, line counts, and archive count.
ctx memory status\n
Exit codes:
| Code | Meaning |
| --- | --- |
| 0 | No drift |
| 1 | MEMORY.md not found |
| 2 | Drift detected (MEMORY.md changed since sync) |
Example:
ctx memory status\n# Memory Bridge Status\n# Source: ~/.claude/projects/.../memory/MEMORY.md\n# Mirror: .context/memory/mirror.md\n# Last sync: 2026-03-05 14:30 (2 hours ago)\n#\n# MEMORY.md: 47 lines (modified since last sync)\n# Mirror: 32 lines\n# Drift: detected (source is newer)\n# Archives: 3 snapshots in .context/memory/archive/\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-diff","level":4,"title":"ctx memory diff","text":"
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-unpublish","level":4,"title":"ctx memory unpublish","text":"
Remove the ctx-managed marker block from MEMORY.md, preserving Claude-owned content.
ctx memory unpublish\n
Hook integration: The check-memory-drift hook runs on every prompt and nudges the agent when MEMORY.md has changed since last sync. The nudge fires once per session. See Memory Bridge.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-memory-import","level":4,"title":"ctx memory import","text":"
Classify and promote entries from MEMORY.md into structured .context/ files.
ctx memory import [flags]\n
Each entry is classified by keyword heuristics:
| Keywords | Target |
| --- | --- |
| always use, prefer, never use, standard | CONVENTIONS.md |
| decided, chose, trade-off, approach | DECISIONS.md |
| gotcha, learned, watch out, bug, caveat | LEARNINGS.md |
| todo, need to, follow up | TASKS.md |
| Everything else | Skipped |
Deduplication prevents re-importing the same entry across runs.
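The heuristic can be sketched as a simple first-match keyword scan (a simplified illustration of the table above; ctx's real rules may differ):

```shell
# Keyword-heuristic classifier sketch. Lowercases the entry, then the
# first matching keyword group wins; unmatched entries are skipped.
classify() {
  e=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$e" in
    *"always use"*|*"prefer"*|*"never use"*|*"standard"*)    echo CONVENTIONS.md ;;
    *"decided"*|*"chose"*|*"trade-off"*|*"approach"*)        echo DECISIONS.md ;;
    *"gotcha"*|*"learned"*|*"watch out"*|*"bug"*|*"caveat"*) echo LEARNINGS.md ;;
    *"todo"*|*"need to"*|*"follow up"*)                      echo TASKS.md ;;
    *) echo skipped ;;
  esac
}
classify "decided to use heuristic classification"   # DECISIONS.md
classify "misc session note"                         # skipped
```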
Flags:
| Flag | Description |
| --- | --- |
| --dry-run | Show classification plan without writing |
Example:
ctx memory import --dry-run\n# Scanning MEMORY.md for new entries...\n# Found 6 entries\n#\n# -> \"always use ctx from PATH\"\n# Classified: CONVENTIONS.md (keywords: always use)\n#\n# -> \"decided to use heuristic classification over LLM-based\"\n# Classified: DECISIONS.md (keywords: decided)\n#\n# Dry run - would import: 4 entries (1 convention, 1 decision, 1 learning, 1 task)\n# Skipped: 2 entries (session notes/unclassified)\n\nctx memory import # Actually write entries to .context/ files\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-notify","level":3,"title":"ctx notify","text":"
Send fire-and-forget webhook notifications from skills, loops, and hooks.
Webhook payload fields:

| Field | Type | Description |
| --- | --- | --- |
| event | string | Event name from --event flag |
| message | string | Notification message |
| session_id | string | Session ID (omitted if empty) |
| timestamp | string | UTC RFC3339 timestamp |
| project | string | Project directory name |
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-change","level":3,"title":"ctx change","text":"
Show what changed in context files and code since your last session.
Automatically detects the previous session boundary from state markers or event log. Useful at session start to quickly see what moved while you were away.
ctx change [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --since | Time reference: duration (24h) or date (2026-03-01) |
Reference time detection (priority order):
1. --since flag (duration, date, or RFC3339 timestamp)
2. ctx-loaded-* marker files in .context/state/ (second most recent)
3. Last context-load-gate event from .context/state/events.jsonl
4. Fallback: 24 hours ago
Example:
# Auto-detect last session, show what changed\nctx change\n\n# Changes in the last 48 hours\nctx change --since 48h\n\n# Changes since a specific date\nctx change --since 2026-03-10\n
Context file changes are detected by filesystem mtime (works without git). Code changes use git log --since (empty when not in a git repo).
See also: Reviewing Session Changes
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-dep","level":3,"title":"ctx dep","text":"
Generate a dependency graph from source code.
Auto-detects the project ecosystem from manifest files and outputs a dependency graph in Mermaid, table, or JSON format.
ctx dep [flags]\n
Supported ecosystems:
| Ecosystem | Manifest | Method |
| --- | --- | --- |
| Go | go.mod | go list -json ./... |
| Node.js | package.json | Parse package.json (workspace-aware) |
| Python | requirements.txt or pyproject.toml | Parse manifest directly |
| Rust | Cargo.toml | cargo metadata |
Detection order: Go, Node.js, Python, Rust. First match wins.
Flags:
| Flag | Description | Default |
| --- | --- | --- |
| --format | Output format: mermaid, table, json | mermaid |
| --external | Include external/third-party dependencies | false |
| --type | Force ecosystem: go, node, python, rust | auto-detect |
Examples:
# Auto-detect and show internal deps as Mermaid\nctx dep\n\n# Include external dependencies\nctx dep --external\n\n# Force Node.js detection (useful when multiple manifests exist)\nctx dep --type node\n\n# Machine-readable output\nctx dep --format json\n\n# Table format\nctx dep --format table\n
Ecosystem notes:
Go: Uses go list -json ./.... Requires go in PATH.
Node.js: Parses package.json directly (no npm/yarn needed). For monorepos with workspaces, shows workspace-to-workspace deps (internal) or all deps per workspace (external).
Python: Parses requirements.txt or pyproject.toml directly (no pip needed). Shows declared dependencies; does not trace imports. With --external, includes dev dependencies from pyproject.toml.
Rust: Requires cargo in PATH. Uses cargo metadata for accurate dependency resolution.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad","level":3,"title":"ctx pad","text":"
Encrypted scratchpad for sensitive one-liners that travel with the project.
When invoked without a subcommand, lists all entries.
ctx pad\nctx pad <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-add","level":4,"title":"ctx pad add","text":"
Append a new entry to the scratchpad.
ctx pad add <text>\nctx pad add <label> --file <path>\n
Flags:
| Flag | Short | Description |
| --- | --- | --- |
| --file | -f | Ingest a file as a blob entry (max 64 KB) |
Examples:
ctx pad add \"DATABASE_URL=postgres://user:pass@host/db\"\nctx pad add \"deploy config\" --file ./deploy.yaml\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-show","level":4,"title":"ctx pad show","text":"
Output the raw text of an entry by number. For blob entries, prints decoded file content (or writes to disk with --out).
ctx pad show <n>\nctx pad show <n> --out <path>\n
Arguments:
n: 1-based entry number
Flags:
| Flag | Description |
| --- | --- |
| --out | Write decoded blob content to a file (blobs only) |
Examples:
ctx pad show 3\nctx pad show 2 --out ./recovered.yaml\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-rm","level":4,"title":"ctx pad rm","text":"
Remove an entry by number.
ctx pad rm <n>\n
Arguments:
n: 1-based entry number
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-edit","level":4,"title":"ctx pad edit","text":"
Replace, append to, or prepend to an entry.
ctx pad edit <n> [text]\n
Arguments:
n: 1-based entry number
text: Replacement text (mutually exclusive with --append/--prepend)
Flags:
| Flag | Description |
| --- | --- |
| --append | Append text to the end of the entry |
| --prepend | Prepend text to the beginning of the entry |
| --file | Replace blob file content (preserves label) |
| --label | Replace blob label (preserves content) |
Examples:
ctx pad edit 2 \"new text\"\nctx pad edit 2 --append \" suffix\"\nctx pad edit 2 --prepend \"prefix \"\nctx pad edit 1 --file ./v2.yaml\nctx pad edit 1 --label \"new name\"\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-mv","level":4,"title":"ctx pad mv","text":"
Move an entry from one position to another.
ctx pad mv <from> <to>\n
Arguments:
from: Source position (1-based)
to: Destination position (1-based)
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-resolve","level":4,"title":"ctx pad resolve","text":"
Show both sides of a merge conflict in the encrypted scratchpad.
ctx pad resolve\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-import","level":4,"title":"ctx pad import","text":"
Bulk-import lines from a file into the scratchpad. Each non-empty line becomes a separate entry. All entries are written in a single encrypt/write cycle.
With --blob, import all first-level files from a directory as blob entries. Each file becomes a blob with the filename as its label. Subdirectories and non-regular files are skipped.
ctx pad import <file>\nctx pad import - # read from stdin\nctx pad import --blob <dir> # import directory files as blobs\n
Arguments:
file: Path to a text file, - for stdin, or a directory (with --blob)
Flags:
| Flag | Description |
| --- | --- |
| --blob | Import first-level files from a directory as blobs |
Examples:
ctx pad import notes.txt\ngrep TODO *.go | ctx pad import -\nctx pad import --blob ./ideas/\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-export","level":4,"title":"ctx pad export","text":"
Export all blob entries from the scratchpad to a directory as files. Each blob's label becomes the filename. Non-blob entries are skipped.
ctx pad export [dir]\n
Arguments:
dir: Target directory (default: current directory)
Flags:
| Flag | Short | Description |
| --- | --- | --- |
| --force | -f | Overwrite existing files instead of timestamping |
| --dry-run | | Print what would be exported without writing |
When a file already exists, a unix timestamp is prepended to avoid collisions (e.g., 1739836200-label). Use --force to overwrite instead.
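The collision rule can be sketched as follows (an illustration assumed from the example above, not ctx's actual implementation; export_name is a hypothetical helper):

```shell
# Sketch of the export collision rule: prepend a unix timestamp
# when the target filename already exists in the directory.
export_name() {                     # $1 = target dir, $2 = blob label
  if [ -e "$1/$2" ]; then echo "$(date +%s)-$2"; else echo "$2"; fi
}
d=$(mktemp -d)
export_name "$d" notes.txt          # notes.txt (no collision)
touch "$d/notes.txt"
export_name "$d" notes.txt          # e.g. 1739836200-notes.txt
rm -rf "$d"
```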
Examples:
ctx pad export ./ideas\nctx pad export --dry-run\nctx pad export --force ./backup\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pad-merge","level":4,"title":"ctx pad merge","text":"
Merge entries from one or more scratchpad files into the current pad. Each input file is auto-detected as encrypted or plaintext. Entries are deduplicated by exact content.
ctx pad merge FILE...\n
Arguments:
FILE...: One or more scratchpad files to merge (encrypted or plaintext)
Flags:
| Flag | Short | Description |
| --- | --- | --- |
| --key | -k | Path to key file for decrypting input files |
| --dry-run | | Print what would be merged without writing |
Examples:
ctx pad merge worktree/.context/scratchpad.enc\nctx pad merge notes.md backup.enc\nctx pad merge --key /path/to/other.key foreign.enc\nctx pad merge --dry-run pad-a.enc pad-b.md\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind","level":3,"title":"ctx remind","text":"
Session-scoped reminders that surface at session start. Reminders are stored verbatim and relayed verbatim: no summarization, no categories.
When invoked with a text argument and no subcommand, adds a reminder.
ctx remind \"text\"\nctx remind <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-add","level":4,"title":"ctx remind add","text":"
Add a reminder. This is the default action: ctx remind \"text\" and ctx remind add \"text\" are equivalent.
ctx remind \"refactor the swagger definitions\"\nctx remind add \"check CI after the deploy\" --after 2026-02-25\n
Arguments:
text: The reminder message (verbatim)
Flags:
| Flag | Short | Description |
| --- | --- | --- |
| --after | -a | Don't surface until this date (YYYY-MM-DD) |
Examples:
ctx remind \"refactor the swagger definitions\"\nctx remind \"check CI after the deploy\" --after 2026-02-25\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-list","level":4,"title":"ctx remind list","text":"
List all pending reminders. Date-gated reminders that aren't yet due are annotated with (after DATE, not yet due).
ctx remind list\n
Aliases: ls
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-remind-dismiss","level":4,"title":"ctx remind dismiss","text":"
Remove a reminder by ID, or remove all reminders with --all.
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-pause","level":3,"title":"ctx pause","text":"
Pause all context nudge and reminder hooks for the current session. Security hooks (dangerous command blocking) and housekeeping hooks still fire.
ctx pause [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --session-id | Session ID (overrides stdin) |
Example:
# Pause hooks for a quick investigation\nctx pause\n\n# Resume when ready\nctx resume\n
See also: Pausing Context Hooks
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-resume","level":3,"title":"ctx resume","text":"
Resume context hooks after a pause. Silent no-op if not paused.
ctx resume [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --session-id | Session ID (overrides stdin) |
Example:
ctx resume\n
See also: Pausing Context Hooks
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-completion","level":3,"title":"ctx completion","text":"
Generate shell autocompletion scripts.
ctx completion <shell>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#subcommands","level":4,"title":"Subcommands","text":"Shell Command bashctx completion bashzshctx completion zshfishctx completion fishpowershellctx completion powershell","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#installation","level":4,"title":"Installation","text":"BashZshFish
# Add to ~/.bashrc\nsource <(ctx completion bash)\n
# Add to ~/.zshrc\nsource <(ctx completion zsh)\n
ctx completion fish | source\n# Or save to completions directory\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-site","level":3,"title":"ctx site","text":"
Site management commands for the ctx.ist static site.
ctx site <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-site-feed","level":4,"title":"ctx site feed","text":"
Generate an Atom 1.0 feed from finalized blog posts in docs/blog/.
ctx site feed [flags]\n
Scans docs/blog/ for files matching YYYY-MM-DD-*.md, parses YAML frontmatter, and generates a valid Atom feed. Only posts with reviewed_and_finalized: true are included. Summaries are extracted from the first paragraph after the heading.
Flags:
| Flag | Short | Type | Default | Description |
| --- | --- | --- | --- | --- |
| --out | -o | string | site/feed.xml | Output path |
| --base-url | | string | https://ctx.ist | Base URL for entry links |
Output:
Generated site/feed.xml (21 entries)\n\nSkipped:\n 2026-02-25-the-homework-problem.md: not finalized\n\nWarnings:\n 2026-02-09-defense-in-depth.md: no summary paragraph found\n
Three buckets: included (count), skipped (with reason), warnings (included but degraded). The command always exits 0: warnings inform but do not block.
Frontmatter requirements:
| Field | Required | Feed mapping |
| --- | --- | --- |
| title | Yes | <title> |
| date | Yes | <updated> |
| reviewed_and_finalized | Yes | Draft gate (must be true) |
| author | No | <author><name> |
| topics | No | <category term=""> |
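A post that passes the draft gate might carry frontmatter like this (a sketch; all values are illustrative):

```yaml
---
title: Example Post
date: 2026-03-01
reviewed_and_finalized: true
author: Jane Doe
topics: [agents, prompting]
---
```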
Examples:
ctx site feed # Generate site/feed.xml\nctx site feed --out /tmp/feed.xml # Custom output path\nctx site feed --base-url https://example.com # Custom base URL\nmake site-feed # Makefile shortcut\nmake site # Builds site + feed\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-guide","level":3,"title":"ctx guide","text":"
Quick-reference cheat sheet for common ctx commands and skills.
ctx guide [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --skills | Show available skills |
| --commands | Show available CLI commands |
Example:
# Show the full cheat sheet\nctx guide\n\n# Skills only\nctx guide --skills\n\n# Commands only\nctx guide --commands\n
Works without initialization (no .context/ required).
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-why","level":3,"title":"ctx why","text":"
Read ctx's philosophy documents directly in the terminal.
ctx why [DOCUMENT]\n
Documents:
| Name | Description |
| --- | --- |
| manifesto | The ctx Manifesto: creation, not code |
| about | About ctx: what it is and why it exists |
| invariants | Design invariants: properties that must hold |
Usage:
# Interactive numbered menu\nctx why\n\n# Show a specific document\nctx why manifesto\nctx why about\nctx why invariants\n\n# Pipe to a pager\nctx why manifesto | less\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-steering","level":3,"title":"ctx steering","text":"
Manage steering files: persistent behavioral rules for AI tools.
Steering files live in .context/steering/ as Markdown files with YAML frontmatter that controls inclusion mode, tool targeting, and priority.
ctx steering <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-steering-init","level":4,"title":"ctx steering init","text":"
Create a starter set of steering files in .context/steering/.
ctx steering init\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-steering-add","level":4,"title":"ctx steering add","text":"
Create a new steering file with default frontmatter.
ctx steering add <name>\n
Arguments:
name: Steering file name (without .md extension)
Example:
ctx steering add security\n# Created .context/steering/security.md\n
The generated file uses inclusion: manual and priority: 50 by default. Edit the frontmatter to change behavior:
---\nname: security\ndescription: Security rules for all code changes\ninclusion: always # always | auto | manual\ntools: [] # empty = all tools\npriority: 10 # lower = injected first\n---\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-steering-list","level":4,"title":"ctx steering list","text":"
List all steering files with their inclusion mode and priority.
ctx steering list\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-steering-preview","level":4,"title":"ctx steering preview","text":"
Preview which steering files would be included for a given prompt.
ctx steering preview [prompt]\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-steering-sync","level":4,"title":"ctx steering sync","text":"
Sync steering files to tool-native formats (e.g. .cursor/rules/, .kiro/steering/, .clinerules/).
ctx steering sync\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-hook","level":3,"title":"ctx hook","text":"
Manage lifecycle hooks: shell scripts that fire at specific events during AI sessions.
Hooks live in .context/hooks/<type>/ directories, organized by event type. Each hook is an executable script that receives JSON via stdin and returns JSON via stdout.
ctx hook <subcommand>\n
Hook types:
| Type | When it fires |
| --- | --- |
| session-start | AI session begins |
| session-end | AI session ends |
| pre-tool-use | Before an AI tool invocation |
| post-tool-use | After an AI tool invocation |
| file-save | When a file is saved |
| context-add | When context is added |
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-hook-add","level":4,"title":"ctx hook add","text":"
Create a new hook script from a template.
ctx hook add <type> <name>\n
Arguments:
type: Hook type (e.g. session-start, pre-tool-use)
name: Script name (without .sh extension)
Example:
ctx hook add session-start greet\n# Created .context/hooks/session-start/greet.sh\n
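The generated script follows the stdin-JSON in, stdout-JSON out contract. A minimal hook body might look like this (a sketch; the payload and response field names are assumptions, not a documented schema):

```shell
# Minimal session-start hook sketch: a hook reads a JSON event on stdin
# and replies with JSON on stdout. Field names here are illustrative.
greet_hook() {
  payload=$(cat)                               # JSON event from ctx
  printf '%s\n' '{"message":"Welcome back!"}'  # JSON response
}
echo '{"session_id":"abc12345"}' | greet_hook
```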
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-hook-list","level":4,"title":"ctx hook list","text":"
List all discovered hooks with their type and enabled status.
ctx hook list\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-hook-enable","level":4,"title":"ctx hook enable","text":"
Enable a hook by setting the executable permission bit.
ctx hook enable <type> <name>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-hook-disable","level":4,"title":"ctx hook disable","text":"
Disable a hook by removing the executable permission bit.
ctx hook disable <type> <name>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-hook-test","level":4,"title":"ctx hook test","text":"
Run a hook with synthetic input and display the output.
ctx hook test <type> <name>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-skill","level":3,"title":"ctx skill","text":"
Manage reusable instruction bundles that can be installed into .context/skills/.
A skill is a directory containing a SKILL.md file with YAML frontmatter (name, description) and a Markdown instruction body.
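A minimal SKILL.md, following the structure described above (the name and instruction body are illustrative):

```markdown
---
name: changelog
description: Draft a changelog entry from recent commits
---

Read the git log since the last tag and draft a changelog
entry that follows CONVENTIONS.md.
```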
ctx skill <subcommand>\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-skill-install","level":4,"title":"ctx skill install","text":"
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-skill-list","level":4,"title":"ctx skill list","text":"
List all installed skills.
ctx skill list\n
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/tools/#ctx-skill-remove","level":4,"title":"ctx skill remove","text":"
Remove an installed skill.
ctx skill remove <name>\n
Arguments:
name: Skill name to remove
","path":["CLI","Tools and Utilities"],"tags":[]},{"location":"cli/trace/","level":1,"title":"Commit Context Tracing","text":"","path":["Commit Context Tracing"],"tags":[]},{"location":"cli/trace/#ctx-trace","level":3,"title":"ctx trace","text":"
Show the context behind git commits. Links commits back to the decisions, tasks, learnings, and sessions that motivated them.
git log shows what changed, git blame shows who — ctx trace shows why.
ctx trace [commit] [flags]\n
Flags:
| Flag | Description |
| --- | --- |
| --last N | Show context for last N commits |
| --json | Output as JSON for scripting |
Examples:
# Show context for a specific commit\nctx trace abc123\n\n# Show context for last 10 commits\nctx trace --last 10\n\n# JSON output\nctx trace abc123 --json\n
Output:
Commit: abc123 \"Fix auth token expiry\"\nDate: 2026-03-14 10:00:00 -0700\nContext:\n [Decision] #12: Use short-lived tokens with server-side refresh\n Date: 2026-03-10\n\n [Task] #8: Implement token rotation for compliance\n Status: completed\n
Enable or disable the prepare-commit-msg hook for automatic context tracing. When enabled, commits automatically receive a ctx-context trailer with references to relevant decisions, tasks, learnings, and sessions.
ctx trace hook <enable|disable>\n
Prerequisites: ctx must be on your $PATH. If you installed via go install, ensure $GOPATH/bin (or $HOME/go/bin) is in your shell's $PATH.
What the hook does:
1. Before each commit, collects context from three sources:
    - Pending context accumulated during work (ctx add, ctx task complete)
    - Staged file changes to .context/ files
    - Working state (in-progress tasks, active AI session)
2. Injects a ctx-context trailer into the commit message
3. After the commit, records the mapping in .context/trace/history.jsonl
Examples:
# Install the hook\nctx trace hook enable\n\n# Remove the hook\nctx trace hook disable\n
Resulting commit message:
Fix auth token expiry handling\n\nRefactored token refresh logic to handle edge case\nwhere refresh token expires during request.\n\nctx-context: decision:12, task:8, session:abc123\n
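Because ctx-context is an ordinary git trailer, other tooling can read it back with standard text tools; a sketch (not a ctx feature; git interpret-trailers works too):

```shell
# Pull the ctx-context trailer out of a commit message body.
msg='Fix auth token expiry handling

Refactored token refresh logic.

ctx-context: decision:12, task:8, session:abc123'
printf '%s\n' "$msg" | sed -n 's/^ctx-context: *//p'
```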
The ctx-context trailer supports these reference types:
| Prefix | Points to | Example |
| --- | --- | --- |
| decision:<n> | Entry #n in DECISIONS.md | decision:12 |
| learning:<n> | Entry #n in LEARNINGS.md | learning:5 |
| task:<n> | Task #n in TASKS.md | task:8 |
| convention:<n> | Entry #n in CONVENTIONS.md | convention:3 |
| session:<id> | AI session by ID | session:abc123 |
| "<text>" | Free-form context note | "Performance fix for P1 incident" |
","path":["Commit Context Tracing"],"tags":[]},{"location":"cli/trace/#storage","level":3,"title":"Storage","text":"
Context trace data is stored in the .context/ directory:
| File | Purpose | Lifecycle |
| --- | --- | --- |
| state/pending-context.jsonl | Accumulates refs during work | Truncated after each commit |
| trace/history.jsonl | Permanent commit-to-context map | Append-only, never truncated |
| trace/overrides.jsonl | Manual tags for existing commits | Append-only |
","path":["Commit Context Tracing"],"tags":[]},{"location":"home/","level":1,"title":"Home","text":"
ctx is not a prompt.
ctx is version-controlled cognitive state.
ctx is the persistence layer for human-AI reasoning.
\"Creation, not code; Context, not prompts; Verification, not vibes.\"
Read the ctx Manifesto →
\"Without durable context, intelligence resets; with ctx, creation compounds.\"
Without persistent memory, every session starts at zero; ctx makes sessions cumulative.
Join the ctx Community →
","path":["Home","About"],"tags":[]},{"location":"home/about/#what-is-ctx","level":2,"title":"What Is ctx?","text":"
ctx (Context) is a file-based system that enables AI coding assistants to persist project knowledge across sessions. It lives in a .context/ directory in your repo.
Context files let AI tools remember decisions, conventions, and learnings:
Context files are explicit and versionable contracts between you and your agents.
","path":["Home","About"],"tags":[]},{"location":"home/about/#why-do-i-keep-re-explaining-my-codebase","level":2,"title":"Why Do I Keep Re-Explaining My Codebase?!?!","text":"
You open a new AI session. The first thing you do is re-explain your project.
Again.
The architecture, the database choice, the naming conventions, the thing you tried last week that didn't work. You have said all of this before (maybe yesterday, maybe an hour ago), but the AI doesn't know that.
You explain the same architecture every session;
The AI suggests an approach you already rejected, again;
A decision you made three sessions ago gets relitigated from scratch;
You spend more time setting context than building features.
This isn't AI failure: It is statelessness.
Without persistent memory, every session starts at zero.
","path":["Home","About"],"tags":[]},{"location":"home/about/#day-and-night","level":3,"title":"Day and Night","text":"
ctx gives your AI a memory that persists across sessions:
Without ctx, then with ctx:
Session 12: Monday morning\n\nAI: \"I'll scaffold a MySQL schema for this.\"\nYou: \"We use PostgreSQL.\"\n\nAI: \"For auth, session-based would be simplest.\"\nYou: \"We decided on JWT.\"\n\nAI: \"Endpoints in camelCase?\"\nYou: \"snake_case! Like the rest of the API!!!!!!\"\n\nYou: *scrolls up to copy an existing endpoint*\n\n20 minutes spent restoring decisions that already exist.\n
Session 12: Monday morning\n\nYou: \"Let's continue the auth implementation.\"\nAI: \"Per DECISIONS.md (2026-02-12): JWT, not sessions: \n The project uses PostgreSQL with JWT\n authentication. The API follows snake_case conventions.\n Last session you completed the login endpoint: the\n registration endpoint is next on TASKS.md. Want me to\n start there?\"\n\n0 minutes re-explaining. Straight to building.\n
ctx creates a .context/ directory in your project that stores structured knowledge files:
| File | What It Remembers |
| --- | --- |
| TASKS.md | What you're working on and what's next |
| DECISIONS.md | Architectural choices and why you made them |
| LEARNINGS.md | Gotchas, bugs, things that didn't work |
| CONVENTIONS.md | Naming patterns, code style, project rules |
| CONSTITUTION.md | Hard rules the AI must never violate |
These files can version with your code in git:
They load automatically at session start (via hooks in Claude Code, or manually with ctx agent for other tools).
The AI reads them, cites them, and builds on them, instead of asking you to start over.
And when it acts, it can point to the exact file and line that justifies the choice.
Every decision you record, every lesson you capture, makes the next session smarter.
ctx accumulates.
Connect with ctx
Join the Community →: ask questions, share workflows, and help shape what comes next
Read the Blog →: real-world patterns, ponderings, and lessons learned from building ctx using ctx
Ready to Get Started?
Getting Started →: full installation and setup
Your First Session →: step-by-step walkthrough from ctx init to verified recall
# Add a task\nctx add task \"Implement user authentication\"\n\n# Record a decision (full ADR fields required)\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Note a learning\nctx add learning \"Mock functions must be hoisted in Jest\" \\\n --context \"Tests failed with undefined mock errors\" \\\n --lesson \"Jest hoists mock calls to top of file\" \\\n --application \"Place jest.mock() before imports\"\n\n# Mark task complete\nctx task complete \"user auth\"\n
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#leave-a-reminder-for-next-session","level":2,"title":"Leave a Reminder for Next Session","text":"
Drop a note that surfaces automatically at the start of your next session:
# Leave a reminder\nctx remind \"refactor the swagger definitions\"\n\n# Date-gated: don't surface until a specific date\nctx remind \"check CI after the deploy\" --after 2026-02-25\n\n# List pending reminders\nctx remind list\n\n# Dismiss a reminder by ID\nctx remind dismiss 1\n
Reminders are relayed verbatim at session start by the check-reminders hook and repeat every session until you dismiss them.
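The surfacing rules above (date-gated by `--after`, repeated every session until dismissed) can be sketched as a small filter. This is an illustrative sketch of the behavior, not ctx's actual implementation; the `Reminder` type and field names are assumptions:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Reminder:
    id: int
    text: str
    after: Optional[date] = None   # set by --after: don't surface before this date
    dismissed: bool = False

def due_reminders(reminders: list, today: date) -> list:
    """Return reminders that should surface at session start."""
    return [
        r for r in reminders
        if not r.dismissed and (r.after is None or today >= r.after)
    ]

pending = [
    Reminder(1, "refactor the swagger definitions"),
    Reminder(2, "check CI after the deploy", after=date(2026, 2, 25)),
]
# Before 2026-02-25 only reminder 1 surfaces; both repeat until dismissed.
print([r.id for r in due_reminders(pending, date(2026, 2, 20))])  # [1]
```

Because nothing is auto-dismissed, a reminder keeps appearing until you explicitly `ctx remind dismiss` it.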
Import session transcripts to a browsable static site with search, navigation, and topic indices.
The ctx journal command requires zensical (Python >= 3.10).
zensical is a Python-based static site generator from the Material for MkDocs team.
(why zensical?).
If you don't have it on your system, install zensical once with pipx:
# One-time setup\npipx install zensical\n
Avoid pip install zensical
pip install often fails: On macOS, for example, the system Python produces a non-functional install (zensical requires Python >= 3.10), and Homebrew Python blocks system-wide installs (PEP 668).
pipx creates an isolated environment with the correct Python version automatically.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#import-and-serve","level":3,"title":"Import and Serve","text":"
Then, import and serve:
# Import all sessions to .context/journal/ (only new files)\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Open http://localhost:8000 to browse.
To update after new sessions, run the same two commands again.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#safe-by-default","level":3,"title":"Safe By Default","text":"
ctx journal import --all is safe by default:
It only imports new sessions and skips existing files.
Locked entries (via ctx journal lock) are always skipped by both import and enrichment skills.
If you add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
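The skip rules described above amount to two checks per session file. A minimal sketch, assuming the imported file is named after the session and locks are tracked by session name (the real ctx logic may differ):

```python
from pathlib import Path

def should_import(session_file: str, journal_dir: Path, locked: set) -> bool:
    """Decide whether a session transcript gets imported into the journal."""
    name = Path(session_file).stem
    if (journal_dir / f"{name}.md").exists():
        return False          # already imported: existing files are never overwritten
    if name in locked:
        return False          # locked entries (ctx journal lock) are always skipped
    return True
```

Re-running `ctx journal import --all` is therefore idempotent: only sessions that pass both checks are written.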
Store short, sensitive one-liners in an encrypted scratchpad that travels with the project:
# Write a note\nctx pad set db-password \"postgres://user:pass@localhost/mydb\"\n\n# Read it back\nctx pad get db-password\n\n# List all keys\nctx pad list\n
The scratchpad is encrypted with a key stored at ~/.ctx/.ctx.key (outside the project, never committed).
See Scratchpad for details.
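The encryption scheme named in the configuration reference is AES-256-GCM. The sketch below shows what sealing and opening one scratchpad value with that scheme looks like, using the widely available `cryptography` package; the nonce-prepended blob layout and helper names are assumptions for illustration, not ctx's on-disk format:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_note(key: bytes, plaintext: str) -> bytes:
    """Seal one scratchpad value; a fresh 12-byte nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext.encode(), None)

def decrypt_note(key: bytes, blob: bytes) -> str:
    """Split off the nonce and authenticate-then-decrypt the rest."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()

key = AESGCM.generate_key(bit_length=256)   # in ctx, the key lives at ~/.ctx/.ctx.key
blob = encrypt_note(key, "postgres://user:pass@localhost/mydb")
assert decrypt_note(key, blob) == "postgres://user:pass@localhost/mydb"
```

GCM authenticates as well as encrypts, so a tampered blob fails to decrypt rather than returning garbage.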
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#run-an-autonomous-loop","level":2,"title":"Run an Autonomous Loop","text":"
Generate a script that iterates an AI agent until a completion signal is detected:
ctx loop\nchmod +x loop.sh\n./loop.sh\n
See Autonomous Loops for configuration and advanced usage.
Link your git commits back to the decisions, tasks, and learnings that motivated them. Enable the hook once:
# Install the git hook (one-time setup)\nctx trace hook enable\n
From now on, every git commit automatically gets a ctx-context trailer linking it to relevant context. No extra steps needed — just use ctx add, ctx task complete, and commit as usual.
# Later: why was this commit made?\nctx trace abc123\n\n# Recent commits with their context\nctx trace --last 10\n\n# Context trail for a specific file\nctx trace file src/auth.go\n\n# Manually tag a commit after the fact\nctx trace tag HEAD --note \"Hotfix for production outage\"\n
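Conceptually, the git hook just appends a trailer line to the commit message. A minimal sketch of that step; the trailer key `ctx-context` comes from the docs above, but the reference format and idempotency behavior here are assumptions:

```python
def add_ctx_trailer(message: str, context_ref: str) -> str:
    """Append a ctx-context trailer to a commit message (no-op if already present)."""
    trailer = f"ctx-context: {context_ref}"
    if trailer in message:
        return message                      # idempotent: hook may run more than once
    return f"{message.rstrip(chr(10))}\n\n{trailer}\n"

# Hypothetical reference pointing a commit back at the task that motivated it:
msg = add_ctx_trailer("Add login endpoint", "TASKS.md#user-authentication")
```

Because trailers follow git's standard `key: value` convention, tools like `git interpret-trailers` and `ctx trace` can read them back later.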
The first thing an AI agent should do at session start is discover where context lives:
ctx system bootstrap\n
This prints the resolved context directory, the files in it, and the operating rules. The CLAUDE.md template instructs the agent to run this automatically. See CLI Reference: bootstrap.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#the-two-skills-you-should-always-use","level":2,"title":"The Two Skills You Should Always Use","text":"
/ctx-remember at session start and /ctx-wrap-up at session end are the two highest-value skills in the entire catalog:
# session begins:\n/ctx-remember\n... do work ...\n# before closing the session:\n/ctx-wrap-up\n
Let's provide some context, because this is important:
Although the agent will eventually discover your context through CLAUDE.md → AGENT_PLAYBOOK.md, /ctx-remember hydrates the full context up front (tasks, decisions, recent sessions) so the agent starts informed rather than piecing things together over several turns.
/ctx-wrap-up is the other half: A structured review that captures learnings, decisions, and tasks before you close the window.
Hooks like check-persistence remind you (the user) mid-session that context hasn't been saved in a while, but they don't trigger persistence automatically: You still have to act. Also, a CTRL+C can end things at any moment with no reliable \"before session end\" event.
In short, /ctx-wrap-up is the deliberate checkpoint that makes sure nothing slips through, and /ctx-remember is its mirror skill at session start.
See Session Ceremonies for the full workflow.
","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#cli-commands-vs-ai-skills","level":2,"title":"CLI Commands vs. AI Skills","text":"
Most ctx operations come in two flavors: a CLI command you run in your terminal and an AI skill (slash command) you invoke inside your coding assistant.
Commands and skills are not interchangeable: Each has a distinct role.
ctx CLI command ctx AI skill Runs where Your terminal Inside the AI assistant Speed Fast (milliseconds) Slower (LLM round-trip) Cost Free Consumes tokens and context Analysis Deterministic heuristics Semantic / judgment-based Best for Quick checks, scripting, CI Deep analysis, generation, workflow orchestration","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#paired-commands","level":3,"title":"Paired Commands","text":"
These have both a CLI and a skill counterpart. Use the CLI for quick, deterministic checks; use the skill when you need the agent's judgment.
CLI Skill When to prefer the skill ctx drift/ctx-drift Semantic analysis: catches meaning drift the CLI misses ctx status/ctx-status Interpreted summary with recommendations ctx add task/ctx-add-task Agent decomposes vague goals into concrete tasks ctx add decision/ctx-add-decision Agent drafts rationale and consequences from discussion ctx add learning/ctx-add-learning Agent extracts the lesson from a debugging session ctx add convention/ctx-add-convention Agent observes a repeated pattern and codifies it ctx task archive/ctx-archive Agent reviews which tasks are truly done ctx pad/ctx-pad Agent reads/writes scratchpad entries in conversation flow ctx journal/ctx-history Agent searches session history with semantic understanding ctx agent/ctx-agent Agent loads and acts on the context packet ctx loop/ctx-loop Agent tailors the loop script to your project ctx doctor/ctx-doctor Agent adds semantic analysis to structural checks ctx pause/ctx-pause Agent pauses hooks with session-aware reasoning ctx resume/ctx-resume Agent resumes hooks after a pause ctx remind/ctx-remind Agent manages reminders in conversation flow","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#ai-only-skills","level":3,"title":"AI-Only Skills","text":"
These have no CLI equivalent. They require the agent's reasoning.
Skill Purpose /ctx-remember Load context and present structured readback at session start /ctx-wrap-up End-of-session ceremony: persist learnings, decisions, tasks /ctx-next Suggest 1-3 concrete next actions from context /ctx-commit Commit with integrated context capture /ctx-reflect Pause and assess session progress /ctx-consolidate Merge overlapping learnings or decisions /ctx-prompt-audit Analyze prompting patterns for improvement /ctx-import-plans Import Claude Code plan files into project specs /ctx-implement Execute a plan step-by-step with verification /ctx-worktree Manage parallel agent worktrees /ctx-journal-enrich Add metadata, tags, and summaries to journal entries /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich /ctx-blog Generate a blog post (zensical-flavored Markdown) /ctx-blog-changelog Generate themed blog post from commits between releases /ctx-architecture Build and maintain architecture maps (ARCHITECTURE.md, DETAILED_DESIGN.md)","path":["Home","Common Workflows"],"tags":[]},{"location":"home/common-workflows/#cli-only-commands","level":3,"title":"CLI-Only Commands","text":"
These are infrastructure: used in scripts, CI, or one-time setup.
Command Purpose ctx init Initialize .context/ directory ctx load Output assembled context for piping ctx task complete Mark a task done by substring match ctx sync Reconcile context with codebase state ctx compact Consolidate and clean up context files ctx trace Show context behind git commits ctx trace hook Enable/disable commit context tracing hook ctx setup Generate AI tool integration config ctx watch Watch AI output and auto-apply context updates ctx serve Serve any zensical directory (default: journal) ctx permission snapshot Save settings as a golden image ctx permission restore Restore settings from golden image ctx journal site Generate browsable journal from exports ctx notify setup Configure webhook notifications ctx decision List and filter decisions ctx learning List and filter learnings ctx task List tasks, manage archival and snapshots ctx why Read the philosophy behind ctx ctx guide Quick-reference cheat sheet ctx site Site management commands ctx config Manage runtime configuration profiles ctx system System diagnostics and hook commands ctx system backup Back up context and Claude data to tar.gz / SMB ctx completion Generate shell autocompletion scripts
Rule of Thumb
Quick check? Use the CLI.
Need judgment? Use the skill.
When in doubt, start with the CLI: It's free and instant.
Escalate to the skill when heuristics aren't enough.
Next Up: Context Files →: what each .context/ file does and how to use it
See Also:
Recipes: targeted how-to guides for specific tasks
Knowledge Capture: patterns for recording decisions, learnings, and conventions
Context Health: keeping your .context/ accurate and drift-free
Session Archaeology: digging into past sessions
Task Management: tracking and completing work items
We are the builders who care about durable context, verifiable decisions, and human-AI workflows that compound over time.
","path":["Home","#ctx"],"tags":[]},{"location":"home/community/#help-ctx-change-how-ai-remembers","level":2,"title":"Help ctx Change How AI Remembers","text":"
If you like the idea, a star helps ctx reach engineers who run into context drift every day:
The .ctxrc file is an optional YAML file placed in the project root (next to your .context/ directory). It lets you set project-level defaults that apply to every ctx command.
ctx looks for .ctxrc in the current working directory when any command runs. There is no global or user-level config file: Configuration is always per-project.
Contributors: Dev Configuration Profile
The ctx repo ships two .ctxrc source profiles (.ctxrc.base and .ctxrc.dev). The working copy is gitignored and swapped between them via ctx config switch dev / ctx config switch base. See Contributing: Configuration Profiles.
Using a Different .context Directory
The default .context/ directory can be changed per-project via the context_dir key in .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
See Environment Variables and CLI Global Flags below for details.
A commented .ctxrc showing all options and their defaults:
# .ctxrc: ctx runtime configuration\n# https://ctx.ist/configuration/\n#\n# All settings are optional. Missing values use defaults.\n# Priority: CLI flags > environment variables > .ctxrc > defaults\n#\n# context_dir: .context\n# token_budget: 8000\n# auto_archive: true\n# archive_after_days: 7\n# scratchpad_encrypt: true\n# allow_outside_cwd: false\n# event_log: false\n# entry_count_learnings: 30\n# entry_count_decisions: 20\n# convention_line_count: 200\n# injection_token_warn: 15000\n# context_window: 200000 # auto-detected for Claude Code; override for other tools\n# billing_token_warn: 0 # one-shot warning at this token count (0 = disabled)\n#\n# stale_age_days: 30 # days before drift flags a context file as stale (0 = disabled)\n# key_rotation_days: 90\n# task_nudge_interval: 5 # Edit/Write calls between task completion nudges\n#\n# notify: # requires: ctx notify setup\n# events: # required: no events sent unless listed\n# - loop\n# - nudge\n# - relay\n#\n# tool: \"\" # Active AI tool: claude, cursor, cline, kiro, codex\n#\n# steering: # Steering layer configuration\n# dir: .context/steering\n# default_inclusion: manual\n# default_tools: []\n#\n# hooks: # Hook system configuration\n# dir: .context/hooks\n# timeout: 10\n# enabled: true\n#\n# priority_order:\n# - CONSTITUTION.md\n# - TASKS.md\n# - CONVENTIONS.md\n# - ARCHITECTURE.md\n# - DECISIONS.md\n# - LEARNINGS.md\n# - GLOSSARY.md\n# - AGENT_PLAYBOOK.md\n
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#option-reference","level":3,"title":"Option Reference","text":"Option Type Default Description context_dirstring.context Context directory name (relative to project root) token_budgetint8000 Default token budget for ctx agent and ctx loadauto_archivebooltrue Auto-archive completed tasks during ctx compactarchive_after_daysint7 Days before completed tasks are archived scratchpad_encryptbooltrue Encrypt scratchpad with AES-256-GCM allow_outside_cwdboolfalse Allow context directory outside the current working directory event_logboolfalse Enable local hook event logging to .context/state/events.jsonlentry_count_learningsint30 Drift warning when LEARNINGS.md exceeds this entry count (0 = disable) entry_count_decisionsint20 Drift warning when DECISIONS.md exceeds this entry count (0 = disable) convention_line_countint200 Drift warning when CONVENTIONS.md exceeds this line count (0 = disable) injection_token_warnint15000 Warn when auto-injected context exceeds this token count (0 = disable) context_windowint200000 Context window size in tokens. Auto-detected for Claude Code (200k/1M); override for other AI tools billing_token_warnint0 (off) One-shot warning when session tokens exceed this threshold (0 = disabled). For plans where tokens beyond an included allowance cost extra stale_age_daysint30 Days before ctx drift flags a context file as stale (0 = disable) key_rotation_daysint90 Days before encryption key rotation nudge task_nudge_intervalint5 Edit/Write calls between task completion nudges notify.events[]string (all) Event filter for webhook notifications (empty = all) priority_order[]string (see below) Custom file loading priority for context assembly toolstring (empty) Active AI tool identifier (claude, cursor, cline, kiro, codex). 
Used by steering sync and hook dispatch steering.dirstring.context/steering Steering files directory steering.default_inclusionstringmanual Default inclusion mode for new steering files (always, auto, manual) steering.default_tools[]string (all) Default tool filter for new steering files (empty = all tools) hooks.dirstring.context/hooks Hook scripts directory hooks.timeoutint10 Per-hook execution timeout in seconds hooks.enabledbooltrue Whether hook execution is enabled
Default priority order (used when priority_order is not set):
CONSTITUTION.md
TASKS.md
CONVENTIONS.md
ARCHITECTURE.md
DECISIONS.md
LEARNINGS.md
GLOSSARY.md
AGENT_PLAYBOOK.md
See Context Files for the rationale behind this ordering.
Environment variables override .ctxrc values but are overridden by CLI flags.
Variable Description Equivalent .ctxrc key CTX_DIR Override the context directory path context_dirCTX_TOKEN_BUDGET Override the default token budget token_budget","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#examples","level":3,"title":"Examples","text":"
# Use a shared context directory\nCTX_DIR=/shared/team-context ctx status\n\n# Increase token budget for a single run\nCTX_TOKEN_BUDGET=16000 ctx agent\n
","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#cli-global-flags","level":2,"title":"CLI Global Flags","text":"
CLI flags have the highest priority and override both environment variables and .ctxrc settings. These flags are available on every ctx command.
Flag Description --context-dir <path> Override context directory (default: .context/) --allow-outside-cwd Allow context directory outside current working directory --tool <name> Override active AI tool identifier (e.g. kiro, cursor) --version Show version and exit --help Show command help and exit","path":["Home","Configuration"],"tags":[]},{"location":"home/configuration/#examples_1","level":3,"title":"Examples","text":"
# Point to a different context directory:\nctx status --context-dir /path/to/shared/.context\n\n# Allow external context directory (skips boundary check):\nctx status --context-dir /mnt/nas/project-context --allow-outside-cwd\n
Layer Value Wins? --context-dir/tmp/ctx Yes CTX_DIR/shared/context No .ctxrc.my-context No Default .context No
The CLI flag /tmp/ctx is used because it has the highest priority.
If the CLI flag were absent, CTX_DIR=/shared/context would win. If neither the flag nor the env var were set, the .ctxrc value .my-context would be used. With nothing configured, the default .context applies.
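That four-layer precedence is simple enough to express directly. A sketch of the resolution order (flag > env > .ctxrc > default), mirroring the table above; the function name is illustrative:

```python
import os

def resolve_context_dir(cli_flag=None, rc_value=None, default=".context"):
    """Resolve the context directory: CLI flag > CTX_DIR env var > .ctxrc > default."""
    if cli_flag:
        return cli_flag
    env = os.environ.get("CTX_DIR")
    if env:
        return env
    return rc_value or default

os.environ["CTX_DIR"] = "/shared/context"
assert resolve_context_dir(cli_flag="/tmp/ctx", rc_value=".my-context") == "/tmp/ctx"
assert resolve_context_dir(rc_value=".my-context") == "/shared/context"
del os.environ["CTX_DIR"]
assert resolve_context_dir(rc_value=".my-context") == ".my-context"
assert resolve_context_dir() == ".context"
```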
Get a one-shot warning when your session crosses a token threshold where extra charges begin (e.g., Claude Pro includes 200k tokens; beyond that costs extra):
# .ctxrc\nbilling_token_warn: 180000 # warn before hitting the 200k paid boundary\n
The warning fires once per session the first time token usage exceeds the threshold. Set to 0 (or omit) to disable.
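The one-shot semantics can be sketched as a tiny state machine: fire the first time the threshold is exceeded, then stay silent for the rest of the session. An illustrative sketch, not ctx's internal implementation:

```python
class BillingWarn:
    """Fire once per session, the first time token usage exceeds the threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold     # 0 (or omitted in .ctxrc) disables the warning
        self.fired = False

    def check(self, session_tokens: int) -> bool:
        if self.threshold <= 0 or self.fired:
            return False
        if session_tokens > self.threshold:
            self.fired = True          # one-shot: never fires again this session
            return True
        return False

warn = BillingWarn(180_000)            # warn before hitting a 200k paid boundary
```

Setting the threshold below the actual billing boundary gives you time to wrap up before extra charges start.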
Hook messages control what text hooks emit when they fire. Each message can be overridden per-project by placing a text file at the matching path under .context/:
.context/hooks/messages/{hook}/{variant}.txt\n
The override takes priority over the embedded default compiled into the ctx binary. An empty file silences the message while preserving the hook's logic (counting, state tracking, cooldowns).
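The lookup described above reduces to one path check. A sketch of the resolution, where the embedded default shown is a made-up placeholder (the real defaults are compiled into the ctx binary):

```python
from pathlib import Path

# Hypothetical stand-in for the defaults compiled into the binary.
EMBEDDED_DEFAULTS = {("qa-reminder", "gate"): "Run QA checks before committing."}

def resolve_message(context_dir: Path, hook: str, variant: str) -> str:
    """Project override wins; an empty override file silences the message."""
    override = context_dir / "hooks" / "messages" / hook / f"{variant}.txt"
    if override.exists():
        return override.read_text()    # may be "": hook logic still runs, silently
    return EMBEDDED_DEFAULTS.get((hook, variant), "")
```

Note that an empty file is not the same as no file: it suppresses the text while the hook's counting, state tracking, and cooldowns keep working.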
Use ctx system message to discover and manage overrides:
ctx system message list # see all messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default for editing\nctx system message reset qa-reminder gate # revert to default\n
See Customizing Hook Messages for detailed examples including Python, JavaScript, and silence configurations.
AI agents need to know the resolved context directory at session start. The ctx system bootstrap command prints the context path, file list, and operating rules in both text and JSON formats:
ctx system bootstrap # text output for agents\nctx system bootstrap -q # just the context directory path\nctx system bootstrap --json # structured output for automation\n
The CLAUDE.md template instructs the agent to run this as its first action. Every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: <dir> footer that re-anchors the agent to the correct directory throughout the session.
This replaces the previous approach of hardcoding .context/ paths in agent instructions.
See CLI Reference: bootstrap for full details.
See also: CLI Reference | Context Files | Scratchpad
Each context file in .context/ serves a specific purpose.
Files are designed to be human-readable, AI-parseable, and token-efficient.
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#file-overview","level":2,"title":"File Overview","text":"File Purpose Priority CONSTITUTION.md Hard rules that must NEVER be violated 1 (highest) TASKS.md Current and planned work 2 CONVENTIONS.md Project patterns and standards 3 ARCHITECTURE.md System overview and components 4 DECISIONS.md Architectural decisions with rationale 5 LEARNINGS.md Lessons learned, gotchas, tips 6 GLOSSARY.md Domain terms and abbreviations 7 AGENT_PLAYBOOK.md Instructions for AI tools 8 (lowest) templates/ Entry format templates for ctx add (optional) steering/ Behavioral rules with YAML frontmatter (optional) hooks/ Lifecycle hook scripts (optional) skills/ Reusable instruction bundles (optional)","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#read-order-rationale","level":2,"title":"Read Order Rationale","text":"
The priority order follows a logical progression for AI tools:
CONSTITUTION.md: Inviolable rules first. The AI tool must know what it cannot do before attempting anything.
TASKS.md: Current work items. What the AI tool should focus on.
CONVENTIONS.md: How to write code. Patterns and standards to follow when implementing tasks.
ARCHITECTURE.md: System structure. Understanding of components and boundaries before making changes.
DECISIONS.md: Historical context. Why things are the way they are, to avoid re-debating settled decisions.
LEARNINGS.md: Gotchas and tips. Lessons from past work that inform the current implementation.
GLOSSARY.md: Reference material. Domain terms and abbreviations for lookup as needed.
AGENT_PLAYBOOK.md: Meta instructions last. How to use this context system itself. Loaded last because the agent should understand the content (rules, tasks, patterns) before the operating manual.
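Assembling context in this priority order under a token budget can be sketched as a simple greedy loop. The chars-per-token heuristic and skip-on-overflow behavior here are assumptions for illustration; ctx's actual budgeting may differ:

```python
PRIORITY_ORDER = [
    "CONSTITUTION.md", "TASKS.md", "CONVENTIONS.md", "ARCHITECTURE.md",
    "DECISIONS.md", "LEARNINGS.md", "GLOSSARY.md", "AGENT_PLAYBOOK.md",
]

def assemble(files: dict, token_budget: int = 8000) -> str:
    """Concatenate context files in priority order until the budget is spent."""
    parts, used = [], 0
    for name in PRIORITY_ORDER:
        text = files.get(name, "")
        tokens = len(text) // 4          # rough chars-per-token estimate
        if not text or used + tokens > token_budget:
            continue                     # skip files that would bust the budget
        parts.append(f"## {name}\n{text}")
        used += tokens
    return "\n\n".join(parts)
```

Because high-priority files are considered first, a tight budget drops reference material (GLOSSARY.md) before it ever drops the inviolable rules (CONSTITUTION.md).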
# Constitution\n\nThese rules are INVIOLABLE. If a task requires violating these, the task \nis wrong.\n\n## Security Invariants\n\n* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] Never store customer/user data in context files\n* [ ] Never disable security linters without documented exception\n\n## Quality Invariants\n\n* [ ] All code must pass tests before commit\n* [ ] No `any` types in TypeScript without documented reason\n* [ ] No TODO comments in main branch (*move to `TASKS.md`*)\n\n## Process Invariants\n\n* [ ] All architectural changes require a decision record\n* [ ] Breaking changes require version bump\n* [ ] Generated files are never committed\n
Tag Values Purpose #priorityhigh, medium, low Task urgency #areacore, cli, docs, tests Codebase area #estimate1h, 4h, 1d Time estimate (optional) #in-progress (none) Currently being worked on
Lifecycle tags (for session correlation):
Tag Format When to add #addedYYYY-MM-DD-HHMMSS Auto-added by ctx add task#startedYYYY-MM-DD-HHMMSS When beginning work on the task #doneYYYY-MM-DD-HHMMSS When marking the task [x]
These timestamps help correlate tasks with session files and track which session started vs completed work.
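Extracting those lifecycle timestamps from a task line is a one-regex job. A sketch assuming the tag and timestamp are space-separated (the exact separator in ctx's task format may differ, so the pattern below tolerates a colon too):

```python
import re
from datetime import datetime

TAG_RE = re.compile(r"#(added|started|done)[:\s]?(\d{4}-\d{2}-\d{2}-\d{6})")

def lifecycle(task_line: str) -> dict:
    """Extract #added / #started / #done timestamps from a TASKS.md line."""
    return {
        tag: datetime.strptime(ts, "%Y-%m-%d-%H%M%S")
        for tag, ts in TAG_RE.findall(task_line)
    }

line = "- [x] Implement auth #added 2026-02-10-091500 #done 2026-02-12-174200"
stamps = lifecycle(line)   # {'added': ..., 'done': ...}
```

Comparing these stamps against session file timestamps is what lets you answer "which session started this task, and which one finished it?".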
# Decisions\n\n## [YYYY-MM-DD] Decision Title\n\n**Status**: Accepted | Superseded | Deprecated\n\n**Context**: What situation prompted this decision?\n\n**Decision**: What was decided?\n\n**Rationale**: Why was this the right choice?\n\n**Consequence**: What are the implications?\n\n**Alternatives Considered**:\n* Alternative A: Why rejected\n* Alternative B: Why rejected\n
## [2025-01-15] Use TypeScript Strict Mode\n\n**Status**: Accepted\n\n**Context**: Starting a new project, need to choose the type-checking level.\n\n**Decision**: Enable TypeScript strict mode with all strict flags.\n\n**Rationale**: Catches more bugs at compile time. Team has experience\nwith strict mode. Upfront cost pays off in reduced runtime errors.\n\n**Consequence**: More verbose type annotations required. Some\nthird-party libraries need type assertions.\n\n**Alternatives Considered**:\n- Basic TypeScript: Rejected because it misses null checks\n- JavaScript with JSDoc: Rejected because tooling support is weaker\n
","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#status-values","level":3,"title":"Status Values","text":"Status Meaning Accepted Current, active decision Superseded Replaced by newer decision (link to it) Deprecated No longer relevant","path":["Home","Context Files"],"tags":[]},{"location":"home/context-files/#learningsmd","level":2,"title":"LEARNINGS.md","text":"
Purpose: Capture lessons learned, gotchas, and tips that shouldn't be forgotten.
# Learnings\n\n## Category Name\n\n### Learning Title\n\n**Discovered**: YYYY-MM-DD\n\n**Context**: When/how was this learned?\n\n**Lesson**: What's the takeaway?\n\n**Application**: How should this inform future work?\n
## Testing\n\n### Vitest Mocks Must Be Hoisted\n\n**Discovered**: 2025-01-15\n\n**Context**: Tests were failing intermittently when mocking fs module.\n\n**Lesson**: Vitest requires `vi.mock()` calls to be hoisted to the\ntop of the file. Dynamic mocks need `vi.doMock()` instead.\n\n**Application**: Always use `vi.mock()` at file top. Use `vi.doMock()`\nonly when mock needs runtime values.\n
# Conventions\n\n## Naming\n\n* **Files**: kebab-case for all source files\n* **Components**: PascalCase for React components\n* **Functions**: camelCase, verb-first (getUser, parseConfig)\n* **Constants**: SCREAMING_SNAKE_CASE\n\n## Patterns\n\n### Pattern Name\n\n**When to use**: Situation description\n\n**Implementation**:\n// in triple backticks\n// Example code\n\n**Why**: Rationale for this pattern\n
# Architecture\n\n## Overview\n\nBrief description of what the system does and how it's organized.\n\n## Components\n\n### Component Name\n\n**Responsibility**: What this component does\n\n**Dependencies**: What it depends on\n\n**Dependents**: What depends on it\n\n**Key Files**:\n* path/to/file.ts: Description\n\n## Data Flow\n\nDescription or diagram of how data moves through the system.\n\n## Boundaries\n\nWhat's in scope vs out of scope for this codebase.\n
# Glossary\n\n## Domain Terms\n\n### Term Name\n\n**Definition**: What it means in this project's context\n\n**Not to be confused with**: Similar terms that mean different things\n\n**Example**: How it's used\n\n## Abbreviations\n\n| Abbrev | Expansion | Context |\n|--------|-------------------------------|------------------------|\n| ADR | Architectural Decision Record | Decision documentation |\n| SUT | System Under Test | Testing |\n
Read Order: Priority order for loading context files
When to Update: Events that trigger context updates
How to Avoid Hallucinating Memory: Critical rules:
Never assume: If not in files, you don't know it
Never invent history: Don't claim \"we discussed\" without evidence
Verify before referencing: Search files before citing
When uncertain, say so
Trust files over intuition
Context Update Commands: Format for automated updates via ctx watch:
<context-update type=\"task\">Implement rate limiting</context-update>\n<context-update type=\"complete\">user auth</context-update>\n<context-update type=\"learning\"\n context=\"Debugging hooks\"\n lesson=\"Hooks receive JSON via stdin\"\n application=\"Parse JSON stdin with the host language\"\n>Hook Input Format</context-update>\n<context-update type=\"decision\"\n context=\"Need a caching layer\"\n rationale=\"Redis is fast and team has experience\"\n consequence=\"Must provision Redis infrastructure\"\n>Use Redis for caching</context-update>\n
Purpose: Format templates for ctx add decision and ctx add learning. These control the structure of new entries appended to DECISIONS.md and LEARNINGS.md.
Edit the templates directly. Changes take effect immediately on the next ctx add command. For example, to add a \"References\" section to all new decisions, edit .context/templates/decision.md.
Templates are committed to git, so customizations are shared with the team.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#1-fork-or-clone-the-repository","level":3,"title":"1. Fork (or Clone) the Repository","text":"
# Fork on GitHub, then:\ngit clone https://github.com/<you>/ctx.git\ncd ctx\n\n# Or, if you have push access:\ngit clone https://github.com/ActiveMemory/ctx.git\ncd ctx\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#2-build-and-install-the-binary","level":3,"title":"2. Build and Install the Binary","text":"
make build\nsudo make install\n
This compiles the ctx binary and places it in /usr/local/bin/.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#3-install-the-plugin-from-your-local-clone","level":3,"title":"3. Install the Plugin from Your Local Clone","text":"
The repository ships a Claude Code plugin under internal/assets/claude/. Point Claude Code at your local copy so that skills and hooks reflect your working tree (no reinstall is needed after edits):
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace
Enter the absolute path to the root of your clone, e.g. ~/WORKSPACE/ctx (this is where .claude-plugin/marketplace.json lives: it points Claude Code to the actual plugin in internal/assets/claude);
Back in /plugin, select Install and choose ctx.
Claude Code Caches Plugin Files
Even though the marketplace points at a directory on disk, Claude Code caches skills and hooks. After editing files under internal/assets/claude/, clear the cache and restart:
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#skills-two-directories-one-rule","level":3,"title":"Skills: Two Directories, One Rule","text":"Directory What lives here Distributed to users? internal/assets/claude/skills/ The 39 ctx-* skills that ship with the plugin Yes .claude/skills/ Dev-only skills (release, QA, backup, etc.) No
internal/assets/claude/skills/ is the single source of truth for user-facing skills. If you are adding or modifying a ctx-* skill, edit it there.
.claude/skills/ holds skills that only make sense inside this repository (release automation, QA checks, backup scripts). These are never distributed to users.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#dev-only-skills-reference","level":4,"title":"Dev-Only Skills Reference","text":"Skill When to use /_ctx-absorb Merge deltas from a parallel worktree or separate checkout /_ctx-audit Detect code-level drift after YOLO sprints or before releases /_ctx-backup Backup context and Claude data to SMB share /_ctx-qa Run QA checks before committing /_ctx-release Run the full release process /_ctx-release-notes Generate release notes for dist/RELEASE_NOTES.md/_ctx-alignment-audit Audit doc claims against agent instructions /_ctx-update-docs Check docs/code consistency after changes
Several skills previously in this list have been promoted to bundled plugin skills and are now available to all ctx users, including: /ctx-brainstorm, /ctx-check-links, /ctx-sanitize-permissions, /ctx-skill-creator, /ctx-spec.
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#how-to-add-things","level":2,"title":"How To Add Things","text":"","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-new-cli-command","level":3,"title":"Adding a New CLI Command","text":"
Create a package under internal/cli/<name>/;
Implement Cmd() *cobra.Command as the entry point;
Register it in internal/bootstrap/bootstrap.go (add import + call in Initialize);
Use cmd.Printf/cmd.Println for output (not fmt.Print);
Add tests in the same package (<name>_test.go);
Add a section to the appropriate CLI doc page in docs/cli/.
Pattern to follow: internal/cli/pad/pad.go (parent with subcommands) or internal/cli/drift/drift.go (single command).
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-new-session-parser","level":3,"title":"Adding a New Session Parser","text":"
The journal system uses a SessionParser interface. To add support for a new AI tool (e.g. Aider, Cursor):
Create internal/journal/parser/<tool>.go;
Implement parsing logic that returns []*Session;
Register the parser in FindSessions() / FindSessionsForCWD();
Use config.Tool* constants for the tool identifier;
Add test fixtures and parser tests.
Pattern to follow: the Claude Code JSONL parser in internal/journal/parser/.
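The parser contract above can be sketched in miniature. This is a hypothetical, self-contained Go sketch: the Session fields, the interface shape, and the aiderParser type are illustrative assumptions, not the real ctx API (real code would also use the config.Tool* constants and register via FindSessions()).

```go
package main

import (
	"fmt"
	"strings"
)

// Session mirrors the shape described above; the exact fields
// are assumptions, not the real ctx types.
type Session struct {
	Tool  string
	Title string
}

// SessionParser turns a raw session transcript into zero or more sessions.
type SessionParser interface {
	Parse(raw string) []*Session
}

// aiderParser is a hypothetical parser for a new tool.
type aiderParser struct{}

func (aiderParser) Parse(raw string) []*Session {
	var out []*Session
	for _, line := range strings.Split(raw, "\n") {
		if strings.HasPrefix(line, "# Session:") {
			out = append(out, &Session{
				Tool:  "aider", // real code would use a config.Tool* constant
				Title: strings.TrimSpace(strings.TrimPrefix(line, "# Session:")),
			})
		}
	}
	return out
}

func main() {
	sessions := aiderParser{}.Parse("# Session: Fix auth bug\nsome chat...\n")
	fmt.Println(len(sessions), sessions[0].Title)
}
```

The key point of the pattern: parsing stays pure (transcript in, sessions out), so fixtures and tests need no I/O.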
Multilingual session headers
The Markdown parser recognizes session header prefixes configured via session_prefixes in .ctxrc (default: Session:). To support a new language, users add a prefix to their .ctxrc - no code change needed. New parser implementations can use rc.SessionPrefixes() if they also need prefix-based header detection.
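Prefix-based header detection as described above can be sketched like this. The helper name and the inlined prefix list are assumptions (real parsers would read the list from rc.SessionPrefixes()):

```go
package main

import (
	"fmt"
	"strings"
)

// isSessionHeader reports whether a Markdown heading line starts with
// one of the configured session prefixes. Illustrative sketch only.
func isSessionHeader(line string, prefixes []string) bool {
	line = strings.TrimLeft(line, "# ") // strip the heading markers
	for _, p := range prefixes {
		if strings.HasPrefix(line, p) {
			return true
		}
	}
	return false
}

func main() {
	// Default prefix plus a hypothetical German one from .ctxrc.
	prefixes := []string{"Session:", "Sitzung:"}
	fmt.Println(isSessionHeader("## Session: refactor parser", prefixes))
	fmt.Println(isSessionHeader("## Sitzung: Parser umbauen", prefixes))
}
```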
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#adding-a-bundled-skill","level":3,"title":"Adding a Bundled Skill","text":"
The repo ships two .ctxrc source profiles. The working copy (.ctxrc) is gitignored and swapped between them:
File Purpose .ctxrc.base Golden baseline: all defaults, no logging .ctxrc.dev Dev profile: notify events enabled, verbose logging .ctxrc Working copy (gitignored: copied from one of the above)
Use ctx commands to switch:
ctx config switch dev # switch to dev profile\nctx config switch base # switch to base profile\nctx config status # show which profile is active\n
After cloning, run ctx config switch dev to get started with full logging.
See Configuration for the full .ctxrc option reference.
Back up project context and global Claude Code data with:
ctx system backup # both project + global (default)\nctx system backup --scope project # .context/, .claude/, ideas/ only\nctx system backup --scope global # ~/.claude/ only\n
Archives are saved to /tmp/. When CTX_BACKUP_SMB_URL is configured, they are also copied to an SMB share. See CLI Reference: backup for details.
make test # fast: all tests\nmake audit # full: fmt + vet + lint + drift + docs + test\nmake smoke # build + run basic commands end-to-end\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#running-the-docs-site-locally","level":3,"title":"Running the Docs Site Locally","text":"
make site-setup # one-time: install zensical via pipx\nmake site-serve # serve at localhost\n
","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#submitting-changes","level":2,"title":"Submitting Changes","text":"","path":["Home","Contributing"],"tags":[]},{"location":"home/contributing/#before-you-start","level":3,"title":"Before You Start","text":"
Check existing issues to avoid duplicating effort;
For large changes, open an issue first to discuss the approach;
Markdown is human-readable, version-controllable, and tool-agnostic. Every AI model can parse it natively. Every developer can read it in a terminal, a browser, or a code review. There's no schema to learn, no binary format to decode, no vendor lock-in. You can inspect your context with cat, diff it with git diff, and review it in a PR.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#does-ctx-work-offline","level":2,"title":"Does ctx work offline?","text":"
Yes. ctx is completely local. It reads and writes files on disk, generates context packets from local state, and requires no network access. The only feature that touches the network is the optional webhook notifications hook, which you have to explicitly configure.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#what-gets-committed-to-git","level":2,"title":"What gets committed to git?","text":"
The .context/ directory: yes, commit it. That's the whole point. Team members and AI agents read the same context files.
What not to commit:
.ctx.key: your encryption key. Stored at ~/.ctx/.ctx.key, never in the repo. ctx init handles this automatically.
journal/ and logs/: generated data, potentially large. ctx init adds these to .gitignore.
scratchpad.enc: your choice. It's encrypted, so it's safe to commit if you want shared scratchpad state. See Scratchpad for details.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#how-big-should-my-token-budget-be","level":2,"title":"How big should my token budget be?","text":"
The default is 8000 tokens, which works well for most projects. Configure it via .ctxrc or the CTX_TOKEN_BUDGET environment variable:
# In .ctxrc\ntoken_budget = 12000\n\n# Or as an environment variable\nexport CTX_TOKEN_BUDGET=12000\n\n# Or per-invocation\nctx agent --budget 4000\n
Higher budgets include more context but cost more tokens per request. Lower budgets force sharper prioritization: ctx drops lower-priority content first, so CONSTITUTION and TASKS always make the cut.
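The drop-lowest-priority-first behavior can be sketched as follows. This is a minimal illustration of the idea, not the real ctx algorithm; the section names, priorities, and token counts are made up:

```go
package main

import "fmt"

// section is a context file with a priority (lower = more important)
// and an estimated token cost.
type section struct {
	Name     string
	Priority int
	Tokens   int
}

// fit walks sections in priority order and stops at the first one that
// would exceed the budget, so high-priority files always make the cut.
func fit(sections []section, budget int) []section {
	used := 0
	var kept []section
	for _, s := range sections { // assumed pre-sorted by Priority
		if used+s.Tokens > budget {
			break
		}
		used += s.Tokens
		kept = append(kept, s)
	}
	return kept
}

func main() {
	all := []section{
		{"CONSTITUTION", 1, 1500},
		{"TASKS", 2, 2500},
		{"DECISIONS", 3, 3000},
		{"GLOSSARY", 9, 2000},
	}
	// With an 8000-token budget, GLOSSARY is the one that gets dropped.
	for _, s := range fit(all, 8000) {
		fmt.Println(s.Name)
	}
}
```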
See Configuration for all available settings.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#why-not-a-database","level":2,"title":"Why not a database?","text":"
Files are inspectable, diffable, and reviewable in pull requests. You can grep them, cat them, pipe them through jq or awk. They work with every version control system and every text editor.
A database would add a dependency, require migrations, and make context opaque. The design bet is that context should be as visible and portable as the code it describes.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#does-it-work-with-tools-other-than-claude-code","level":2,"title":"Does it work with tools other than Claude Code?","text":"
Yes. ctx agent outputs a context packet that any AI tool can consume: paste it into ChatGPT, Cursor, Copilot, Aider, or anything else that accepts text input.
Claude Code gets first-class integration via the ctx plugin (hooks, skills, automatic context loading). VS Code Copilot Chat has a dedicated ctx extension. Other tools integrate via generated instruction files or manual pasting.
See Integrations for tool-specific setup, including the multi-tool recipe.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#can-i-use-ctx-on-an-existing-project","level":2,"title":"Can I use ctx on an existing project?","text":"
Yes. Run ctx init in any repo and it creates .context/ with template files. Start recording decisions, tasks, and conventions as you work. Context grows naturally; you don't need to backfill everything on day one.
See Getting Started for the full setup flow, or Joining a ctx Project if someone else already initialized it.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#what-happens-when-context-files-get-too-big","level":2,"title":"What happens when context files get too big?","text":"
Token budgeting handles this automatically. ctx agent prioritizes content by file priority (CONSTITUTION first, GLOSSARY last) and trims lower-priority entries when the budget is tight.
For manual maintenance, ctx compact archives completed tasks and old entries, keeping active context lean. You can also run ctx task archive to move completed tasks out of TASKS.md.
The goal is to keep context files focused on current state. Historical entries belong in git history or the archive.
","path":["Home","FAQ"],"tags":[]},{"location":"home/faq/#is-context-meant-to-be-shared","level":2,"title":"Is .context/ meant to be shared?","text":"
Yes. Commit it to your repo. Every team member and every AI agent reads the same files. That's the mechanism for shared memory: decisions made in one session are visible in the next, regardless of who (or what) starts it.
The only per-user state is the encryption key (~/.ctx/.ctx.key) and the optional scratchpad. Everything else is team-shared by design.
Related:
Getting Started - installation and first setup
Configuration - .ctxrc, environment variables, and defaults
Context Files - what each file does and how to use it
","path":["Home","FAQ"],"tags":[]},{"location":"home/first-session/","level":1,"title":"Your First Session","text":"
Here's what a complete first session looks like, from initialization to the moment your AI cites your project context back to you.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-1-initialize-your-project","level":2,"title":"Step 1: Initialize Your Project","text":"
Run ctx init in your project root:
cd your-project\nctx init\n
Sample output:
Context initialized in .context/\n\n ✓ CONSTITUTION.md\n ✓ TASKS.md\n ✓ DECISIONS.md\n ✓ LEARNINGS.md\n ✓ CONVENTIONS.md\n ✓ ARCHITECTURE.md\n ✓ GLOSSARY.md\n ✓ AGENT_PLAYBOOK.md\n\nSetting up encryption key...\n ✓ ~/.ctx/.ctx.key\n\nClaude Code plugin (hooks + skills):\n Install: claude /plugin marketplace add ActiveMemory/ctx\n Then: claude /plugin install ctx@activememory-ctx\n\nNext steps:\n 1. Edit .context/TASKS.md to add your current tasks\n 2. Run 'ctx status' to see context summary\n 3. Run 'ctx agent' to get AI-ready context packet\n
This created your .context/ directory with template files.
For Claude Code, install the ctx plugin to get automatic hooks and skills.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-2-populate-your-context","level":2,"title":"Step 2: Populate Your Context","text":"
Add a task and a decision: These are the entries your AI will remember:
ctx add task \"Implement user authentication\"\n\n# Output: ✓ Added to TASKS.md\n\nctx add decision \"Use PostgreSQL for primary database\" \\\n --context \"Need a reliable database for production\" \\\n --rationale \"PostgreSQL offers ACID compliance and JSON support\" \\\n --consequence \"Team needs PostgreSQL training\"\n\n# Output: ✓ Added to DECISIONS.md\n
These entries are what the AI will recall in future sessions. You don't need to populate everything now: Context grows naturally as you work.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-3-check-your-context","level":2,"title":"Step 3: Check Your Context","text":"
Notice the token estimate: This is how much context your AI will load.
The ○ next to LEARNINGS.md means it's still empty; it will fill in as you capture lessons during development.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-4-start-an-ai-session","level":2,"title":"Step 4: Start an AI Session","text":"
With Claude Code (and the ctx plugin), start every session with:
/ctx-remember\n
This loads your context and presents a structured readback so you can confirm the agent knows what is going on. Context also loads automatically via hooks, but the explicit ceremony gives you a readback to verify.
Using VS Code?
With VS Code Copilot Chat (and the ctx extension), type @ctx /agent in chat to load your context packet, or @ctx /status to check your project context. Run ctx setup copilot --write once to generate .github/copilot-instructions.md for automatic context loading.
If you are not using Claude Code, generate a context packet for your AI tool:
ctx agent --budget 8000\n
Sample output:
# Context Packet\nGenerated: 2026-02-14T15:30:45Z | Budget: 8000 tokens | Used: ~2450\n\n## Read These Files (in order)\n1. .context/CONSTITUTION.md\n2. .context/TASKS.md\n3. .context/CONVENTIONS.md\n...\n\n## Current Tasks\n- [ ] Implement user authentication\n- [ ] Add rate limiting to API endpoints\n\n## Key Conventions\n- Use gofmt for formatting\n- Path construction uses filepath.Join\n\n## Recent Decisions\n## [2026-02-14-120000] Use PostgreSQL for the primary database\n\n**Context**: Evaluated PostgreSQL, MySQL, and SQLite...\n**Rationale**: PostgreSQL offers better JSON support...\n\n## Key Learnings\n## [2026-02-14-100000] Connection pool sizing matters\n\n**Context**: Hit connection limits under load...\n**Lesson**: Default pool size of 10 is too low for concurrent requests...\n\n## Also Noted\n- Use JWT for session management\n- Always validate input at API boundary\n
Paste this output into your AI tool's system prompt or conversation start.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#step-5-verify-it-works","level":2,"title":"Step 5: Verify It Works","text":"
Ask your AI: \"What are our current tasks?\"
A working setup produces a response like:
Based on the project context, you have one active task:\n\n- **Implement user authentication** (pending)\n\nThere's also a recent architectural decision to **use PostgreSQL for\nthe primary database**, chosen for its ACID compliance and JSON support.\n\nWant me to start on the authentication task?\n
That's the success moment:
The AI is citing your exact context entries from Step 2, not hallucinating or asking you to re-explain.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#what-gets-created","level":2,"title":"What Gets Created","text":"
.context/\n├── CONSTITUTION.md # Hard rules: NEVER violate these\n├── TASKS.md # Current and planned work\n├── CONVENTIONS.md # Project patterns and standards\n├── ARCHITECTURE.md # System overview\n├── DECISIONS.md # Architectural decisions with rationale\n├── LEARNINGS.md # Lessons learned, gotchas, tips\n├── GLOSSARY.md # Domain terms and abbreviations\n└── AGENT_PLAYBOOK.md # How AI tools should use this\n
Claude Code integration (hooks + skills) is provided by the ctx plugin: See Integrations/Claude Code.
VS Code Copilot Chat integration is provided by the ctx extension: See Integrations/VS Code.
See Context Files for detailed documentation of each file.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/first-session/#what-to-gitignore","level":2,"title":"What to .gitignore","text":"
Rule of Thumb
If it's knowledge (decisions, tasks, learnings, conventions), commit it.
If it's generated output, raw session data, or a secret, .gitignore it.
Commit your .context/ knowledge files: that's the whole point.
You should .gitignore the generated and sensitive paths:
# Journal data (large, potentially sensitive)\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Hook logs (machine-specific)\n.context/logs/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
ctx init Patches Your .gitignore for You
ctx init automatically adds these entries to your .gitignore.
Review the additions with cat .gitignore after init.
See also:
Security Considerations
Scratchpad Encryption
Session Journal
Next Up: Common Workflows →: day-to-day commands for tracking context, checking health, and browsing history.
","path":["Home","Your First Session"],"tags":[]},{"location":"home/getting-started/","level":1,"title":"Getting Started","text":"","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#prerequisites","level":2,"title":"Prerequisites","text":"
ctx does not require git, but using version control with your .context/ directory is strongly recommended:
AI sessions occasionally modify or overwrite context files inadvertently. With git, the AI can check history and restore lost content; without it, the data is gone.
Several ctx features (journal changelog, blog generation) also use git history directly.
Every setup starts with the ctx binary: the CLI tool itself.
If you use Claude Code, you also install the ctx plugin, which adds hooks (context autoloading, persistence nudges) and 25+ /ctx-* skills. For other AI tools, ctx integrates via generated instruction files or manual context pasting: see Integrations for tool-specific setup.
Pick one of the options below to install the binary. Claude Code users should also follow the plugin steps included in each option.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#option-1-build-from-source-recommended","level":3,"title":"Option 1: Build from Source (Recommended)","text":"
Requires Go (version defined in go.mod) and Claude Code.
git clone https://github.com/ActiveMemory/ctx.git\ncd ctx\nmake build\nsudo make install\n
Install the Claude Code plugin from your local clone:
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace
Enter the path to the root of your clone, e.g. ~/WORKSPACE/ctx (this is where .claude-plugin/marketplace.json lives: It points Claude Code to the actual plugin in internal/assets/claude)
Back in /plugin, select Install and choose ctx
This points Claude Code at the plugin source on disk. Changes you make to hooks or skills take effect immediately: No reinstall is needed.
Local Installs Need Manual Enablement
Unlike marketplace installs, local plugin installs are not auto-enabled globally. The plugin will only work in projects that explicitly enable it. Run ctx init in each project (it auto-enables the plugin), or add the entry to ~/.claude/settings.json manually:
Download ctx-0.8.1-windows-amd64.exe from the releases page and add it to your PATH.
Claude Code users: install the plugin from the marketplace:
Launch claude;
Type /plugin and press Enter;
Select Marketplaces → Add Marketplace;
Enter ActiveMemory/ctx;
Back in /plugin, select Install and choose ctx.
Other tool users: see Integrations for tool-specific setup (Cursor, Copilot, Aider, Windsurf, etc.).
Verify the Plugin Is Enabled
After installing, confirm the plugin is enabled globally. Check ~/.claude/settings.json for an enabledPlugins entry. If missing, run ctx init in your project (it auto-enables the plugin), or add it manually:
This creates a .context/ directory with template files and an encryption key at ~/.ctx/ for the encrypted scratchpad. For Claude Code, install the ctx plugin for automatic hooks and skills.
Shows context summary: files present, token estimate, and recent activity.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#3-start-using-with-ai","level":3,"title":"3. Start Using with AI","text":"
With Claude Code (and the ctx plugin installed), context loads automatically via hooks.
With VS Code Copilot Chat, install the ctx extension and use @ctx /status, @ctx /agent, and other slash commands directly in chat. Run ctx setup copilot --write to generate .github/copilot-instructions.md for automatic context loading.
For other tools, paste the output of:
ctx agent --budget 8000\n
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#3b-set-up-for-your-ai-tool","level":3,"title":"3b. Set Up for Your AI Tool","text":"
If you use an MCP-compatible tool, generate the integration config with ctx setup:
KiroCursorCline
ctx setup kiro --write\n# Creates .kiro/settings/mcp.json and syncs steering files\n
ctx setup cursor --write\n# Creates .cursor/mcp.json and syncs steering files\n
ctx setup cline --write\n# Creates .vscode/mcp.json and syncs steering files\n
This registers the ctx MCP server and syncs any steering files into the tool's native format. Re-run after adding or changing steering files.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#4-verify-it-works","level":3,"title":"4. Verify It Works","text":"
Ask your AI: \"Do you remember?\"
It should cite specific context: current tasks, recent decisions, or previous session topics.
","path":["Home","Getting Started"],"tags":[]},{"location":"home/getting-started/#5-set-up-companion-tools-highly-recommended","level":3,"title":"5. Set Up Companion Tools (Highly Recommended)","text":"
ctx works on its own, but two companion MCP servers unlock significantly better agent behavior. The investment is small and the benefits compound over sessions:
Gemini Search — grounded web search with citations. Skills like /ctx-code-review and /ctx-explain use it for up-to-date documentation lookups instead of relying on training data.
GitNexus — code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Skills like /ctx-refactor and /ctx-code-review use it for impact analysis and dependency awareness.
# Index your project for GitNexus (run once, then after major changes)\nnpx gitnexus analyze\n
Both are optional MCP servers: if they are not connected, skills degrade gracefully to built-in capabilities. See Companion Tools for setup details and verification.
Next Up:
Your First Session →: a step-by-step walkthrough from ctx init to verified recall
Common Workflows →: day-to-day commands for tracking context, checking health, and browsing history
","path":["Home","Getting Started"],"tags":[]},{"location":"home/is-ctx-right/","level":1,"title":"Is It Right for Me?","text":"","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#good-fit","level":2,"title":"Good Fit","text":"
ctx shines when context matters more than code.
If any of these sound like your project, it's worth trying:
Multi-session AI work: You use AI across many sessions on the same codebase, and re-explaining is slowing you down.
Architectural decisions that matter: Your project has non-obvious choices (database, auth strategy, API design) that the AI keeps second-guessing.
\"Why\" matters as much as \"what\": you need the AI to understand rationale, not just current code
Team handoffs: Multiple people (or multiple AI tools) work on the same project and need shared context.
AI-assisted development across tools: You switch between Claude Code, Cursor, Copilot, or other tools and want context to follow the project, not the tool.
Long-lived projects: Anything you'll work on for weeks or months, where accumulated knowledge has compounding value.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#may-not-be-the-right-fit","level":2,"title":"May Not Be the Right Fit","text":"
ctx adds overhead that isn't worth it for every project. Be honest about when to skip it:
One-off scripts: If the project is a single file you'll finish today, there's nothing to remember.
RAG-only workflows: If retrieval from an external knowledge base already gives the agent everything it needs for each session, adding ctx may be unnecessary. RAG retrieves information; ctx defines the project's working memory: They are complementary.
No AI involvement: ctx is designed for human-AI workflows; without an AI consumer, the files are just documentation.
Enterprise-managed context platforms: If your organization provides centralized context services, ctx may duplicate that layer.
For a deeper technical comparison with RAG, prompt management tools, and agent frameworks, see ctx and Similar Tools.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#project-size-guide","level":2,"title":"Project Size Guide","text":"","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#solo-developer-single-repo","level":3,"title":"Solo Developer, Single Repo","text":"
This is ctx's sweet spot.
You get the most value here: one person, one project, with decisions and learnings accumulating over time. Setup takes 5 minutes, the .context/ directory stays small, and every session gets faster.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#small-team-one-or-two-repos","level":3,"title":"Small Team, One or Two Repos","text":"
Works well.
Context files commit to git, so the whole team shares the same decisions and conventions. Each person's AI starts with the team's decisions already loaded. Merge conflicts on .context/ files are rare and easy to resolve (they are just Markdown).
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#multiple-repos-or-larger-teams","level":3,"title":"Multiple Repos or Larger Teams","text":"
ctx operates per repository.
Each repo has its own .context/ directory with its own decisions, tasks, and learnings. This matches the way code, ownership, and history already work in git.
There is no built-in cross-repo context layer.
For organizations that need centralized, organization-wide knowledge, ctx complements a platform solution by providing durable, project-local working memory for AI sessions.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/is-ctx-right/#5-minute-trial","level":2,"title":"5-Minute Trial","text":"
Zero commitment. Try it, and delete .context/ if it's not for you.
Using Claude Code?
Install the ctx plugin from the Marketplace for Claude-native hooks, skills, and automatic context loading:
Type /plugin and press Enter
Select Marketplaces → Add Marketplace
Enter ActiveMemory/ctx
Back in /plugin, select Install and choose ctx
You'll still need the ctx binary for the CLI: See Getting Started for install options.
# 1. Initialize\ncd your-project\nctx init\n\n# 2. Add one real decision from your project\nctx add decision \"Your actual architectural choice\" \\\n --context \"What prompted this decision\" \\\n --rationale \"Why you chose this approach\" \\\n --consequence \"What changes as a result\"\n\n# 3. Check what the AI will see\nctx status\n\n# 4. Start an AI session and ask: \"Do you remember?\"\n
If the AI cites your decision back to you, it's working.
Want to remove it later? One command:
rm -rf .context/\n
No dependencies to uninstall. No configuration to revert. Just files.
Ready to try it out?
Join the Community→: Open Source is better together.
Getting Started →: Full installation and setup.
ctx and Similar Tools →: Detailed comparison with other approaches.
","path":["Home","Is It Right for Me?"],"tags":[]},{"location":"home/joining-a-project/","level":1,"title":"Joining a ctx Project","text":"
You've joined a team or inherited a project, and there's a .context/ directory in the repo. Good news: someone already set up persistent context. This page gets you oriented fast.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#what-to-read-first","level":2,"title":"What to Read First","text":"
The files in .context/ have a deliberate priority order. Read them top-down:
CONSTITUTION.md: Hard rules. Read this before you touch anything. These are inviolable constraints the team has agreed on.
TASKS.md: Current and planned work. Shows what's in progress, what's pending, and what's blocked.
CONVENTIONS.md: How the team writes code. Naming patterns, file organization, preferred idioms.
ARCHITECTURE.md: System overview. Components, boundaries, data flow.
DECISIONS.md: Why things are the way they are. Saves you from re-proposing something the team already evaluated and rejected.
LEARNINGS.md: Gotchas, tips, and hard-won lessons. The stuff that doesn't fit anywhere else but will save you hours.
See Context Files for detailed documentation of each file's structure and purpose.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#checking-context-health","level":2,"title":"Checking Context Health","text":"
Before you start working, check whether the context is current:
ctx status\n
This shows file counts, token estimates, and recent activity. If files haven't been touched in weeks, the context may be stale.
ctx drift\n
This compares context files against recent code changes and flags potential drift: decisions that no longer match the codebase, conventions that have shifted, or tasks that look outdated.
If things are stale, mention it to the team. Don't silently fix it yourself on day one.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#starting-your-first-session","level":2,"title":"Starting Your First Session","text":"
Generate a context packet to prime your AI:
ctx agent --budget 8000\n
This outputs a token-budgeted summary of the project context, ordered by priority. With Claude Code and the ctx plugin, context loads automatically via hooks. You can also use the /ctx-remember skill to get a structured readback of what the AI knows.
The readback is your verification step: if the AI can cite specific tasks and decisions, the context is working.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#adding-context","level":2,"title":"Adding Context","text":"
As you work, you'll discover things worth recording. Use the CLI:
# Record a decision you made or learned about\nctx add decision \"Use connection pooling for DB access\" \\\n --rationale \"Reduces connection overhead under load\"\n\n# Capture a gotcha you hit\nctx add learning \"Redis timeout defaults to 5s\" \\\n --context \"Hit timeouts during bulk operations\" \\\n --application \"Set explicit timeout for batch jobs\"\n\n# Add a convention you noticed the team follows\nctx add convention \"All API handlers return structured errors\"\n
You can also just tell the AI: \"Record this as a learning\" or \"Add this decision to context.\" With the ctx plugin, context-update commands handle the file writes.
See the Knowledge Capture recipe for the full workflow.
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#session-etiquette","level":2,"title":"Session Etiquette","text":"
A few norms for working in a ctx-managed project:
Respect existing conventions. If CONVENTIONS.md says \"use filepath.Join,\" use filepath.Join. If you disagree, propose a change, don't silently diverge.
Don't restructure context files without asking. The file layout and section structure are shared state. Reorganizing them affects every team member and every AI session.
Mark tasks done when complete. Check the box ([x]) in place. Don't move tasks between sections or delete them.
Add context as you go. Decisions, learnings, and conventions you discover are valuable to the next person (or the next session).
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/joining-a-project/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Ignoring CONSTITUTION.md. The constitution exists for a reason. If a task conflicts with a constitution rule, the task is wrong. Raise it with the team instead of working around the constraint.
Deleting tasks. Never delete a task from TASKS.md. Mark it [x] (done) or [-] (skipped with a reason). The history matters for session replay and audit.
Bypassing hooks. If the project uses ctx hooks (pre-commit nudges, context autoloading), don't disable them. They exist to keep context fresh. If a hook is noisy or broken, fix it or file a task.
Over-contributing on day one. Read first, then contribute. Adding a dozen learnings before you understand the project's norms creates noise, not signal.
Related:
Getting Started: installation and setup from scratch
Context Files: detailed file reference
Knowledge Capture: recording decisions, learnings, and conventions
Session Lifecycle: how a typical AI session flows with ctx
","path":["Home","Joining a ctx Project"],"tags":[]},{"location":"home/keeping-ai-honest/","level":1,"title":"Keeping AI Honest","text":"","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-problem","level":2,"title":"The Problem","text":"
AI agents confabulate. They invent history that never happened, claim familiarity with decisions that were never made, and sometimes declare a task complete when it is not. This is not malice - it is the default behavior of a system optimizing for plausible-sounding responses.
When your AI says \"we decided to use Redis for caching last week,\" can you verify that? When it says \"the auth module is complete,\" can you confirm it? Without grounded, persistent context, the answer is no. You are trusting vibes.
ctx replaces vibes with verifiable artifacts.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#grounded-memory","level":2,"title":"Grounded Memory","text":"
Every entry in ctx context files has a timestamp and structured fields. When the AI cites a decision, you can check it.
## [2026-01-28-143022] Use Event Sourcing for Audit Trail\n\n**Status**: Accepted\n\n**Context**: Compliance requires full mutation history.\n\n**Decision**: Event sourcing for the audit subsystem only.\n\n**Rationale**: Append-only log meets compliance requirements\nwithout imposing event sourcing on the entire domain model.\n
The timestamp 2026-01-28-143022 is not decoration. It is a verifiable anchor. If the AI references this decision, you can open DECISIONS.md, find the entry, and confirm it says what the AI claims. If the entry does not exist, the AI is hallucinating - and you know immediately.
This is grounded memory: claims that trace back to artifacts you control and can audit.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#constitutionmd-hard-guardrails","level":2,"title":"CONSTITUTION.md: Hard Guardrails","text":"
CONSTITUTION.md defines rules the AI must treat as inviolable. These are not suggestions or best practices - they are constraints that override task requirements.
# Constitution\n\nThese rules are INVIOLABLE. If a task requires violating these,\nthe task is wrong.\n\n* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] All public API changes require a decision record\n* [ ] Never delete context files without explicit user approval\n
The AI reads these at session start, before anything else. A well-integrated agent will refuse a task that conflicts with a constitutional rule, citing the specific rule it would violate.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-agent-playbooks-anti-hallucination-rules","level":2,"title":"The Agent Playbook's Anti-Hallucination Rules","text":"
The AGENT_PLAYBOOK.md file includes a section called \"How to Avoid Hallucinating Memory\" with five explicit rules:
Never assume. If it is not in the context files, you do not know it.
Never invent history. Do not claim \"we discussed\" something without a file reference.
Verify before referencing. Search files before citing them.
When uncertain, say so. \"I don't see a decision on this\" is always better than a fabricated one.
Trust files over intuition. If the files say PostgreSQL but your training data suggests MySQL, the files win.
These rules create a behavioral contract. The AI is not left to guess how confident it should be - it has explicit instructions to ground every claim in the context directory.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#drift-detection","level":2,"title":"Drift Detection","text":"
Context files can go stale. You rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist. Stale context is almost as dangerous as no context: the AI treats outdated information as current truth.
ctx drift detects this divergence:
ctx drift\n
It scans context files for references to files, paths, and symbols that no longer exist in the codebase. Stale references get flagged so you can update or remove them before they mislead the next session.
Regular drift checks - weekly, or after major refactors - keep your context files honest the same way tests keep your code honest.
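The core idea can be sketched in a few lines. This illustrates the concept only; ctx's actual implementation differs, and the stale_references helper is hypothetical.

```python
# Illustration of the drift concept; ctx's real implementation differs.
import os
import re

# Crude matcher for path-like references that carry a file extension.
PATH_RE = re.compile(r"\b[\w./-]+/[\w.-]+\.\w+\b")

def stale_references(context_text: str, repo_root: str) -> list[str]:
    """Return referenced paths that no longer exist under repo_root."""
    refs = sorted(set(PATH_RE.findall(context_text)))
    return [r for r in refs if not os.path.exists(os.path.join(repo_root, r))]
```

Run over ARCHITECTURE.md after a rename, a check like this surfaces exactly the dead paths that would otherwise mislead the next session.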
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#the-verification-loop","level":2,"title":"The Verification Loop","text":"
The /ctx-commit skill includes a built-in verification step: before staging, it maps claims to evidence and runs self-audit questions to surface gaps. This catches inconsistencies at the point where they matter most — right before code is committed.
This closes the loop. You write context. The AI reads context. The verification step confirms that context still matches reality. When it does not, you fix it - and the next session starts from truth, not from drift.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#trust-through-structure","level":2,"title":"Trust Through Structure","text":"
The common thread across all of these mechanisms is structure over prose. Timestamps make claims verifiable. Constitutional rules make boundaries explicit. Drift detection makes staleness visible. The playbook makes behavioral expectations concrete.
You do not need to trust the AI. You need to trust the system - and verify when it matters.
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/keeping-ai-honest/#further-reading","level":2,"title":"Further Reading","text":"
Detecting and Fixing Drift: the full workflow for keeping context files accurate
Invariants: the properties that must hold for any valid ctx implementation
Agent Security: threat model and mitigations for AI agents operating with persistent context
","path":["Home","Keeping AI Honest"],"tags":[]},{"location":"home/prompting-guide/","level":1,"title":"Prompting Guide","text":"
New to ctx?
This guide references context files like TASKS.md, DECISIONS.md, and LEARNINGS.md:
These are plain Markdown files that ctx maintains in your project's .context/ directory.
If terms like \"context packet\" or \"session ceremony\" are unfamiliar,
start with the ctx Manifesto for the why,
About for the big picture,
then Getting Started to set up your first project.
This guide is about crafting effective prompts for working with AI assistants in ctx-enabled projects, but the guidelines given here apply to other AI systems, too.
The right prompt triggers the right behavior.
This guide documents prompts that reliably produce good results.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#tldr","level":2,"title":"TL;DR","text":"Goal Prompt Load context \"Do you remember?\" Resume work \"What's the current state?\" What's next /ctx-next Debug \"Why doesn't X work?\" Validate \"Is this consistent with our decisions?\" Impact analysis \"What would break if we...\" Reflect /ctx-reflect Wrap up /ctx-wrap-up Persist \"Add this as a learning\" Explore \"How does X work in this codebase?\" Sanity check \"Is this the right approach?\" Completeness \"What am I missing?\" One more thing \"What's the single smartest addition?\" Set tone \"Push back if my assumptions are wrong.\" Constrain scope \"Only change files in X. Nothing else.\" Course correct \"Stop. That's not what I meant.\" Check health \"Run ctx drift\" Commit /ctx-commit","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#session-start","level":2,"title":"Session Start","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#do-you-remember","level":3,"title":"\"Do you remember?\"","text":"
Triggers the AI to silently read TASKS.md, DECISIONS.md, LEARNINGS.md, and check recent history via ctx journal before responding with a structured readback:
Last session: most recent session topic and date
Active work: pending or in-progress tasks
Recent context: 1-2 recent decisions or learnings
Next step: offer to continue or ask what to focus on
Use this at the start of every important session.
Do you remember what we were working on?\n
This question implies prior context exists. The AI checks files rather than admitting ignorance. The expected response cites specific context (session names, task counts, decisions), not vague summaries.
If the AI instead narrates its discovery process (\"Let me check if there are files...\"), it has not loaded CLAUDE.md or AGENT_PLAYBOOK.md properly.
For a detailed case study on making agents actually follow this protocol (including the failure modes, the timing problem, and the hook design that solved it) see The Dog Ate My Homework.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#whats-the-current-state","level":3,"title":"\"What's the current state?\"","text":"
Prompts the AI to read TASKS.md and recent sessions, and to give a status overview.
Use this when resuming work after a break.
Variants:
\"Where did we leave off?\"
\"What's in progress?\"
\"Show me the open tasks.\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#during-work","level":2,"title":"During Work","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#why-doesnt-x-work","level":3,"title":"\"Why doesn't X work?\"","text":"
This triggers root cause analysis rather than surface-level fixes.
Use this when something fails unexpectedly.
Framing as \"why\" encourages investigation before action. The AI will trace through code, check configurations, and identify the actual cause.
Real Example
\"Why can't I run /ctx-reflect?\" led to discovering missing permissions in settings.local.json bootstrapping.
This was a fix that benefited all users of ctx.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#is-this-consistent-with-our-decisions","level":3,"title":"\"Is this consistent with our decisions?\"","text":"
This prompts checking DECISIONS.md before implementing.
Use this before making architectural choices.
Variants:
\"Check if we've decided on this before\"
\"Does this align with our conventions?\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-would-break-if-we","level":3,"title":"\"What would break if we...\"","text":"
This triggers defensive thinking and impact analysis.
Use this before making significant changes.
What would break if we change the Settings struct?\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#before-you-start-read-x","level":3,"title":"\"Before you start, read X\"","text":"
This ensures specific context is loaded before work begins.
Use this when you know the relevant context exists in a specific file.
Before you start, check ctx journal source for the auth discussion session\n
When the AI misbehaves, match the symptom to the recovery prompt:
Symptom Recovery prompt Hand-waves (\"should work now\") \"Show evidence: file/line refs, command output, or test name.\" Creates unnecessary files \"No new files. Modify the existing implementation.\" Expands scope unprompted \"Stop after the smallest working change. Ask before expanding scope.\" Narrates instead of acting \"Skip the explanation. Make the change and show the diff.\" Repeats a failed approach \"That didn't work last time. Try a different approach.\" Claims completion without proof \"Run the test. Show me the output.\"
These are recovery handles, not rules to paste into CLAUDE.md.
Use them in the moment when you see the behavior.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#reflection-and-persistence","level":2,"title":"Reflection and Persistence","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-did-we-learn","level":3,"title":"\"What did we learn?\"","text":"
This prompts reflection on the session and often triggers adding learnings to LEARNINGS.md.
Use this after completing a task or debugging session.
This is an explicit reflection prompt. The AI will summarize insights and often offer to persist them.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#add-this-as-a-learningdecision","level":3,"title":"\"Add this as a learning/decision\"","text":"
This is an explicit persistence request.
Use this when you have discovered something worth remembering.
Add this as a learning: \"JSON marshal escapes angle brackets by default\"\n\n# or simply:\nAdd this as a learning.\n# and let the AI autonomously infer and summarize.\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#save-context-before-we-end","level":3,"title":"\"Save context before we end\"","text":"
This triggers context persistence before the session closes.
Use it at the end of the session or before switching topics.
Variants:
\"Let's persist what we did\"
\"Update the context files\"
/ctx-wrap-up: the recommended end-of-session ceremony (see Session Ceremonies)
/ctx-reflect: mid-session reflection checkpoint
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#exploration-and-research","level":2,"title":"Exploration and Research","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#explore-the-codebase-for-x","level":3,"title":"\"Explore the codebase for X\"","text":"
This triggers thorough codebase search rather than guessing.
Use this when you need to understand how something works.
This works because \"Explore\" signals that investigation is needed, not immediate action.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#how-does-x-work-in-this-codebase","level":3,"title":"\"How does X work in this codebase?\"","text":"
This prompts reading actual code rather than explaining general concepts.
Use this to understand the existing implementation.
How does session saving work in this codebase?\n
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#find-all-places-where-x","level":3,"title":"\"Find all places where X\"","text":"
This triggers a comprehensive search across the codebase.
Use this before refactoring, or when assessing the impact of a change.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#meta-and-process","level":2,"title":"Meta and Process","text":"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-should-we-document-from-this","level":3,"title":"\"What should we document from this?\"","text":"
This prompts identifying learnings, decisions, and conventions worth persisting.
Use this after complex discussions or implementations.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#is-this-the-right-approach","level":3,"title":"\"Is this the right approach?\"","text":"
This invites the AI to challenge the current direction.
Use this when you want a sanity check.
This works because it allows the AI to disagree.
AIs often default to agreeing; this prompt signals you want an honest assessment.
Stronger variant: \"Push back if my assumptions are wrong.\" This sets the tone for the entire session: The AI will flag questionable choices proactively instead of waiting to be asked.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#what-am-i-missing","level":3,"title":"\"What am I missing?\"","text":"
This prompts thinking about edge cases, overlooked requirements, or unconsidered approaches.
Use this before finalizing a design or implementation.
Forward-looking variant: \"What's the single smartest addition you could make to this at this point?\" Use this after you think you're done: It surfaces improvements you wouldn't have thought to ask for. The constraint to one thing prevents feature sprawl.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#cli-commands-as-prompts","level":2,"title":"CLI Commands as Prompts","text":"
Asking the AI to run ctx commands is itself a prompt. These load context or trigger specific behaviors:
Command What it does \"Run ctx status\" Shows context summary, file presence, staleness \"Run ctx agent\" Loads token-budgeted context packet \"Run ctx drift\" Detects dead paths, stale files, missing context","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#ctx-skills","level":3,"title":"ctx Skills","text":"
The SKILL.md Standard
Skills are formalized prompts stored as SKILL.md files.
The /slash-command syntax below is Claude Code specific.
Other agents can use the same skill files, but invocation may differ.
Use ctx skills by name:
Skill When to use /ctx-status Quick context summary /ctx-agent Load full context packet /ctx-remember Recall project context and structured readback /ctx-wrap-up End-of-session context persistence /ctx-history Browse session history for past discussions /ctx-reflect Structured reflection checkpoint /ctx-next Suggest what to work on next /ctx-commit Commit with context persistence /ctx-drift Detect and fix context drift /ctx-implement Execute a plan step-by-step with verification /ctx-loop Generate autonomous loop script /ctx-pad Manage encrypted scratchpad /ctx-archive Archive completed tasks /check-links Audit docs for dead links
Ceremony vs. Workflow Skills
Most skills work conversationally: \"what should we work on?\" triggers /ctx-next, \"save that as a learning\" triggers /ctx-add-learning. Natural language is the recommended approach.
Two skills are the exception: /ctx-remember and /ctx-wrap-up are ceremony skills for session boundaries. Invoke them as explicit slash commands; conversational triggers risk partial execution. See Session Ceremonies.
Skills combine a prompt, tool permissions, and domain knowledge into a single invocation.
Skills Beyond Claude Code
The /slash-command syntax above is Claude Code native, but the underlying SKILL.md files are a standard markdown format that any agent can consume. If you use a different coding agent, consult its documentation for how to load skill files as prompt templates.
Based on our ctx development experience (i.e., \"sipping our own champagne\") so far, here are some prompts that tend to produce poor results:
Prompt Problem Better Alternative \"Fix this\" Too vague, may patch symptoms \"Why is this failing?\" \"Make it work\" Encourages quick hacks \"What's the right way to solve this?\" \"Just do it\" Skips planning \"Plan this, then implement\" \"You should remember\" Confrontational \"Do you remember?\" \"Obviously...\" Discourages questions State the requirement directly \"Idiomatic X\" Triggers language priors \"Follow project conventions\" \"Implement everything\" No phasing, sprawl risk Break into tasks, implement one at a time \"You should know this\" Assumes context is loaded \"Before you start, read X\"","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#reliability-checklist","level":2,"title":"Reliability Checklist","text":"
Before sending a non-trivial prompt, check these four elements. This is the guide's DNA in one screenful.
Goal in one sentence: What does \"done\" look like?
Files to read: What existing code or context should the AI review before acting?
Verification command: How will you prove it worked? (test name, CLI command, expected output)
Scope boundary: What should the AI not touch?
A prompt that covers all four is almost always good enough.
A prompt missing #3 is how you get \"should work now\" without evidence.
A prompting guide earns its trust by being honest about risk.
The four rules below don't change with model versions, agent frameworks, or project size.
Build them into your workflow once and stop thinking about them.
Tool-using agents can read files, run commands, and modify your codebase. That power makes them useful. It also creates a trust boundary you should be aware of.
These invariants apply regardless of which agent or model you use.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#treat-the-repository-text-as-untrusted-input","level":3,"title":"Treat the Repository Text as \"Untrusted Input\"","text":"
Issue descriptions, PR comments, commit messages, documentation, and even code comments can contain text that looks like instructions. An agent that reads a GitHub issue and then runs a command found inside it is executing untrusted input.
The rule: Before running any command the agent found in repo text (issues, docs, comments), restate the command explicitly and confirm it does what you expect. Don't let the agent copy-paste from untrusted sources into a shell.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#ask-before-destructive-operations","level":3,"title":"Ask Before Destructive Operations","text":"
git push --force, rm -rf, DROP TABLE, docker system prune: these are irreversible or hard to reverse. A good agent should pause before running them, but don't rely on that.
The rule: For any operation that deletes data, overwrites history, or affects shared infrastructure, require explicit confirmation. If the agent runs something destructive without asking, that's a course-correction moment: \"Stop. Never run destructive commands without asking first.\"
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#scope-the-blast-radius","level":3,"title":"Scope the Blast Radius","text":"
An agent told to \"fix the tests\" might modify test fixtures, change assertions, or delete tests that inconveniently fail. An agent told to \"deploy\" might push to production. Broad mandates create broad risk.
The rule: Constrain scope before starting work. The Reliability Checklist's scope boundary (#4) is your primary safety lever. When in doubt, err on the side of a tighter boundary.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#secrets-never-belong-in-context","level":3,"title":"Secrets Never Belong in Context","text":"
LEARNINGS.md, DECISIONS.md, and session transcripts are plain-text files that may be committed to version control.
Don't persist API keys, passwords, tokens, or credentials in context files.
The rule: If the agent encounters a secret during work, it should use it transiently (an environment variable, a reference to the secret rather than the secret itself, etc.) and never write it to a context file.
Any Secret Seen IS Exposed
If you see a secret in a context file, remove it immediately and rotate the credential.
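A lightweight pre-commit guard makes the rule enforceable. The sketch below uses a few illustrative patterns only; the findings helper is hypothetical, and dedicated scanners such as gitleaks are far more thorough.

```python
import re

# Illustrative patterns only; dedicated secret scanners cover far more.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)\b(api[_-]?key|token|password)\b\s*[:=]\s*\S+"),
]

def findings(text: str) -> list[str]:
    """Return the patterns that matched; empty means the text looks clean."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]
```

Wired into a pre-commit hook over the .context/ directory, a check like this fails the commit before a leaked token lands in history.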
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#explore-plan-implement","level":2,"title":"Explore → Plan → Implement","text":"
For non-trivial work, name the phase you want:
Explore src/auth and summarize the current flow.\nThen propose a plan. After I approve, implement with tests.\n
This prevents the AI from jumping straight to code.
The three phases map to different modes of thinking:
Explore: read, search, understand: no changes
Plan: propose approach, trade-offs, scope: no changes
Implement: write code, run tests, verify: changes
Small fixes skip straight to implement. Complex or uncertain work benefits from all three.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#prompts-by-task-type","level":2,"title":"Prompts by Task Type","text":"
Different tasks need different prompt structures. The pattern: symptom + location + verification.
Users report search returns empty results for queries with hyphens.\nReproduce in src/search/. Write a failing test for \"foo-bar\",\nfix the root cause, run: go test ./internal/search/...\n
Inspect src/auth/ and list duplication hotspots.\nPropose a refactor plan scoped to one module.\nAfter approval, remove duplication without changing behavior.\nAdd a test if coverage is missing. Run: make audit\n
Update docs/cli-reference.md to reflect the new --format flag.\nConfirm the flag exists in the code and the example works.\n
Notice each prompt includes what to verify and how. Without that, you get a \"should work now\" instead of evidence.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#writing-tasks-as-prompts","level":2,"title":"Writing Tasks as Prompts","text":"
Tasks in TASKS.md are indirect prompts to the AI. How you write them shapes how the AI approaches the work.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#state-the-motivation-not-just-the-goal","level":3,"title":"State the Motivation, Not Just the Goal","text":"
Tell the AI why you are building something, not just what.
Bad: \"Build a calendar view.\"
Good: \"Build a calendar view. The motivation is that all notes and tasks we build later should be viewable here.\"
The second version lets the AI anticipate downstream requirements:
It will design the calendar's data model to be compatible with future features, without you having to spell out every integration point. Motivation turns a one-off task into a directional task.
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#state-the-deliverable-not-just-steps","level":3,"title":"State the Deliverable, Not Just Steps","text":"
For complex tasks, add explicit \"done when\" criteria:
- [ ] T2.0: Authentication system\n **Done when**:\n - [ ] User can register with email\n - [ ] User can log in and get a token\n - [ ] Protected routes reject unauthenticated requests\n
This prevents a premature \"task complete\" when the implementation details are done but the feature doesn't actually work.
Completing all subtasks does not mean the parent task is complete.
The parent task describes what the user gets.
Subtasks describe how to build it.
Always re-read the parent task description before marking it complete. Verify the stated deliverable exists and works.
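Assuming the TASKS.md format shown above, the rule is mechanical enough to sketch. The parent_done helper is hypothetical, not a ctx feature.

```python
import re

CHECKBOX = re.compile(r"\[( |x)\]")

def parent_done(task_block: str) -> bool:
    """A parent task is done only when every 'Done when' criterion is checked."""
    parts = task_block.split("**Done when**:", 1)
    if len(parts) < 2:
        return False  # no acceptance criteria, so completion cannot be verified
    boxes = CHECKBOX.findall(parts[1])
    return bool(boxes) and all(b == "x" for b in boxes)
```

Note that checked subtasks alone never flip the result; only the stated "done when" criteria do.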
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/prompting-guide/#why-do-these-approaches-work","level":2,"title":"Why Do These Approaches Work?","text":"
The patterns in this guide aren't invented here: They are practitioner translations of well-established, peer-reviewed research, most of which predate the current AI (hype) wave.
The underlying ideas come from decades of work in machine learning, cognitive science, and numerical optimization. For a concrete case study showing how these principles play out when an agent decides whether to follow instructions (attention competition, optimization toward least-resistance paths, and observable compliance as a design goal) see The Dog Ate My Homework.
Phased work (\"Explore → Plan → Implement\") applies chain-of-thought reasoning: Decomposing a problem into sequential steps before acting. Forcing intermediate reasoning steps measurably improves output quality in language models, just as it does in human problem-solving. Wei et al., Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022).
Root-cause prompts (\"Why doesn't X work?\") use step-back abstraction: Retreating to a higher-level question before diving into specifics. This mirrors how experienced engineers debug: they ask \"what should happen?\" before asking \"what went wrong?\" Zheng et al., Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models (2023).
Exploring alternatives (\"Propose 2-3 approaches\") leverages self-consistency: Generating multiple independent reasoning paths and selecting the most coherent result. The idea traces back to ensemble methods in ML: A committee of diverse solutions outperforms any single one. Wang et al., Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022).
Impact analysis (\"What would break if we...\") is a form of tree-structured exploration: Branching into multiple consequence paths before committing. This is the same principle behind game-tree search (minimax, MCTS) that has powered decision-making systems since the 1950s. Yao et al., Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023).
Motivation prompting (\"Build X because Y\") works through goal conditioning: Providing the objective function alongside the task. In optimization terms, you are giving the gradient direction, not just the loss. The model can make locally coherent decisions that serve the global objective because it knows what \"better\" means.
Scope constraints (\"Only change files in X\") apply constrained optimization: Bounding the search space to prevent divergence. This is the same principle behind regularization in ML: Without boundaries, powerful optimizers find solutions that technically satisfy the objective but are practically useless.
CLI commands as prompts (\"Run ctx status\") interleave reasoning with acting: The model thinks, acts on external tools, observes results, then thinks again. Grounding reasoning in real tool output reduces hallucination because the model can't ignore evidence it just retrieved. Yao et al., ReAct: Synergizing Reasoning and Acting in Language Models (2022).
Task decomposition (\"Prompts by Task Type\") applies least-to-most prompting: Breaking a complex problem into subproblems and solving them sequentially, each building on the last. This is the research version of \"plan, then implement one slice.\" Zhou et al., Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2022).
Explicit planning (\"Explore → Plan → Implement\") is directly supported by plan-and-solve prompting, which addresses missing-step failures in zero-shot reasoning by extracting a plan before executing. The phased structure prevents the model from jumping to code before understanding the problem. Wang et al., Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models (2023).
Session reflection (\"What did we learn?\", /ctx-reflect) is a form of verbal reinforcement learning: Improving future performance by persisting linguistic feedback as memory rather than updating weights. This is exactly what LEARNINGS.md and DECISIONS.md provide: a durable feedback signal across sessions. Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning (2023).
These aren't prompting \"hacks\" that you will find in the \"1000 AI Prompts for the Curious\" listicles: They are applications of foundational principles:
Decomposition,
Abstraction,
Ensemble Reasoning,
Search,
and Constrained Optimization.
They work because language models are, at their core, optimization systems navigating probabilistic landscapes.
The Attention Budget: Why your AI forgets what you just told it, and how token budgets shape context strategy
The Dog Ate My Homework: A case study in making agents follow instructions: attention timing, delegation decay, and observable compliance as a design goal
Found a prompt that works well? Open an issue or PR with:
The prompt text;
What behavior it triggers;
When to use it;
Why it works (optional but helpful).
Dive Deeper:
Recipes: targeted how-to guides for specific tasks
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
","path":["Home","Prompting Guide"],"tags":[]},{"location":"home/repeated-mistakes/","level":1,"title":"My AI Keeps Making the Same Mistakes","text":"","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#the-problem","level":2,"title":"The Problem","text":"
You found a bug last Tuesday. You debugged it, understood the root cause, and moved on. Today, a new session hits the exact same bug. The AI rediscovers it from scratch, burning twenty minutes on something you already solved.
Worse: you spent an hour last week evaluating two database migration strategies, picked one, documented why in a comment somewhere, and now the AI is cheerfully suggesting the approach you rejected. Again.
This is not a model problem. It is a memory problem. Without persistent context, every session starts with amnesia.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#how-ctx-stops-the-loop","level":2,"title":"How ctx Stops the Loop","text":"
ctx gives your AI three files that directly prevent repeated mistakes, each targeting a different failure mode.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#decisionsmd-stop-relitigating-settled-choices","level":3,"title":"DECISIONS.md: Stop Relitigating Settled Choices","text":"
When you make an architectural decision, record it with rationale and rejected alternatives. The AI reads this at session start and treats it as settled.
## [2026-02-12] Use JWT for Authentication\n\n**Status**: Accepted\n\n**Context**: Need stateless auth for the API layer.\n\n**Decision**: JWT with short-lived access tokens and refresh rotation.\n\n**Rationale**: Stateless, scales horizontally, team has prior experience.\n\n**Alternatives Considered**:\n- Session-based auth: Rejected. Requires sticky sessions or shared store.\n- API keys only: Rejected. No user identity, no expiry rotation.\n
Next session, when the AI considers auth, it reads this entry and builds on the decision instead of re-debating it. If someone asks \"why not sessions?\", the rationale is already there.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#learningsmd-capture-gotchas-once","level":3,"title":"LEARNINGS.md: Capture Gotchas Once","text":"
Learnings are the bugs, quirks, and non-obvious behaviors that cost you time the first time around. Write them down so they cost you zero time the second time.
## Build\n\n### CGO Required for SQLite on Alpine\n\n**Discovered**: 2026-01-20\n\n**Context**: Docker build failed silently with \"no such table\" at runtime.\n\n**Lesson**: The go-sqlite3 driver requires CGO_ENABLED=1 and gcc\ninstalled in the build stage. Alpine needs apk add build-base.\n\n**Application**: Always use the golang:alpine image with build-base\nfor SQLite builds. Never set CGO_ENABLED=0.\n
Without this entry, the next session that touches the Dockerfile will hit the same wall. With it, the AI knows before it starts.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#constitutionmd-draw-hard-lines","level":3,"title":"CONSTITUTION.md: Draw Hard Lines","text":"
Some mistakes are not about forgetting - they are about boundaries the AI should never cross. CONSTITUTION.md sets inviolable rules.
* [ ] Never commit secrets, tokens, API keys, or credentials\n* [ ] Never disable security linters without a documented exception\n* [ ] All database migrations must be reversible\n
The AI reads these as absolute constraints. It does not weigh them against convenience. It refuses tasks that would violate them.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#the-accumulation-effect","level":2,"title":"The Accumulation Effect","text":"
Each of these files grows over time. Session one captures two decisions. Session five adds a tricky learning about timezone handling. Session twelve records a convention about error message formatting.
By session twenty, your AI has a knowledge base that no single person carries in their head. New team members - human or AI - inherit it instantly.
The key insight: you are not just coding. You are building a knowledge layer that makes every future session faster.
ctx files version with your code in git. They survive branch switches, team changes, and model upgrades. The context outlives any single session.
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#getting-started","level":2,"title":"Getting Started","text":"
Capture your first decision or learning right now:
ctx add decision \"Use PostgreSQL\" \\\n --context \"Need a relational database for the project\" \\\n --rationale \"Team expertise, JSONB support, mature ecosystem\"\n\nctx add learning \"Vitest mock hoisting\" \\\n --context \"Tests failing intermittently\" \\\n --lesson \"vi.mock() must be at file top level\" \\\n --application \"Use vi.doMock() for dynamic mocks\"\n
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"home/repeated-mistakes/#further-reading","level":2,"title":"Further Reading","text":"
Knowledge Capture: the full workflow for persisting decisions, learnings, and conventions
Context Files Reference: structure and format for every file in .context/
About ctx: the bigger picture - why persistent context changes how you work with AI
","path":["Home","My AI Keeps Making the Same Mistakes"],"tags":[]},{"location":"operations/","level":1,"title":"Operations","text":"
Guides for installing, upgrading, integrating, and running ctx.
Run an unattended AI agent that works through tasks overnight, with ctx providing persistent memory between iterations.
","path":["Operations"],"tags":[]},{"location":"operations/autonomous-loop/","level":1,"title":"Autonomous Loops","text":"","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#autonomous-ai-development","level":2,"title":"Autonomous AI Development","text":"
Iterate until done.
An autonomous loop is an iterative AI development workflow where an agent works on tasks until completion, without constant human intervention.
ctx provides the memory that makes this possible:
ctx provides the memory: persistent context that survives across iterations
The loop provides the automation: continuous execution until done
Together, they enable fully autonomous AI development where the agent remembers everything across iterations.
Origin
This pattern is inspired by Geoffrey Huntley's Ralph Wiggum technique.
We use generic terminology here so the concepts remain clear regardless of trends.
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#how-it-works","level":2,"title":"How It Works","text":"
graph TD\n A[Start Loop] --> B[Load .context/loop.md]\n B --> C[AI reads .context/]\n C --> D[AI picks task from TASKS.md]\n D --> E[AI completes task]\n E --> F[AI updates context files]\n F --> G[AI commits changes]\n G --> H{Check signals}\n H -->|SYSTEM_CONVERGED| I[Done - all tasks complete]\n H -->|SYSTEM_BLOCKED| J[Done - needs human input]\n H -->|Continue| B
Loop reads .context/loop.md and invokes AI
AI loads context from .context/
AI picks one task and completes it
AI updates context files (mark task done, add learnings)
AI commits changes
Loop checks for completion signals
Repeat until converged or blocked
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#quick-start-shell-while-loop-recommended","level":2,"title":"Quick Start: Shell While Loop (Recommended)","text":"
The best way to run an autonomous loop is a plain shell script that invokes your AI tool in a fresh process on each iteration. This is \"pure Ralph\":
The only state that carries between iterations is what lives in .context/ and the git history. No context window bleed, no accumulated tokens, no hidden state.
Create a loop.sh:
#!/bin/bash\n# loop.sh: an autonomous iteration loop\n\nPROMPT_FILE=\"${1:-.context/loop.md}\"\nMAX_ITERATIONS=\"${2:-10}\"\nOUTPUT_FILE=\"/tmp/loop_output.txt\"\n\nfor i in $(seq 1 $MAX_ITERATIONS); do\n echo \"=== Iteration $i ===\"\n\n # Invoke AI with prompt\n cat \"$PROMPT_FILE\" | claude --print > \"$OUTPUT_FILE\" 2>&1\n\n # Display output\n cat \"$OUTPUT_FILE\"\n\n # Check for completion signals\n if grep -q \"SYSTEM_CONVERGED\" \"$OUTPUT_FILE\"; then\n echo \"Loop complete: All tasks done\"\n break\n fi\n\n if grep -q \"SYSTEM_BLOCKED\" \"$OUTPUT_FILE\"; then\n echo \"Loop blocked: Needs human input\"\n break\n fi\n\n sleep 2\ndone\n
Make it executable and run:
chmod +x loop.sh\n./loop.sh\n
You can also generate this script with ctx loop (see CLI Reference).
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#why-do-we-use-a-shell-loop","level":3,"title":"Why Do We Use a Shell Loop?","text":"
Each iteration starts a fresh AI process with zero context window history. The agent knows only what it reads from .context/ files: Exactly the information you chose to persist.
This is the core loop principle: memory is explicit, not accidental.
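A toy demonstration of that principle (plain shell, not ctx itself): state written to a file survives into a fresh process, while in-process state dies with it.

```shell
# Each 'bash -c' starts a fresh process, like each loop iteration.
echo "task done" > /tmp/loop_memory.md             # explicit memory: persisted to a file
bash -c 'EPHEMERAL="only in this process"'         # implicit memory: lost on exit
bash -c 'grep -c "task done" /tmp/loop_memory.md'  # a fresh process still sees the file
```

Only what was deliberately written to disk carries over, which is exactly how `.context/` works between iterations.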
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#alternative-claude-codes-built-in-loop","level":2,"title":"Alternative: Claude Code's Built-in Loop","text":"
This is convenient for quick iterations, but be aware of important caveats:
This Loop Is Not Pure
Claude Code's /loop runs all iterations within the same session. This means:
State leaks between iterations: The context window accumulates output from every previous iteration. The agent \"remembers\" things it saw earlier (even if they were never persisted to .context/).
Token budget degrades: Each iteration adds to the context window, leaving less room for actual work in later iterations.
Not ergonomic for long runs: Users report that the built-in loop is less predictable for 10+ iteration runs compared to a shell loop.
For short explorations (2-5 iterations) or interactive use, /loop works fine. For overnight unattended runs or anything where iteration independence matters, use the shell while loop instead.
The prompt file instructs the AI on how to work autonomously. Here's a template:
# Autonomous Development Prompt\n\nYou are working on this project autonomously. Follow these steps:\n\n## 1. Load Context\n\nRead these files in order:\n\n1. `.context/CONSTITUTION.md`: NEVER violate these rules\n2. `.context/TASKS.md`: Find work to do\n3. `.context/CONVENTIONS.md`: Follow these patterns\n4. `.context/DECISIONS.md`: Understand past choices\n\n## 2. Pick One Task\n\nFrom `.context/TASKS.md`, select ONE task that is:\n\n- Not blocked\n- Highest priority available\n- Within your capabilities\n\n## 3. Complete the Task\n\n- Write code following conventions\n- Run tests if applicable\n- Keep changes focused and minimal\n\n## 4. Update Context\n\nAfter completing work:\n\n- Mark task complete in `TASKS.md`\n- Add any learnings to `LEARNINGS.md`\n- Add any decisions to `DECISIONS.md`\n\n## 5. Commit Changes\n\nCreate a focused commit with clear message.\n\n## 6. Signal Status\n\nEnd your response with exactly ONE of:\n\n- `SYSTEM_CONVERGED`: All tasks in TASKS.md are complete\n- `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n- (no signal): More work remains, continue to next iteration\n\n## Rules\n\n- ONE task per iteration\n- NEVER skip tests\n- NEVER violate CONSTITUTION.md\n- Commit after each task\n
Signal Meaning When to Use SYSTEM_CONVERGED All tasks complete No pending tasks in TASKS.md SYSTEM_BLOCKED Cannot proceed Needs clarification, access, or decision BOOTSTRAP_COMPLETE Initial setup done Project scaffolding finished","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#example-usage","level":3,"title":"Example Usage","text":"
converged state
I've completed all tasks in TASKS.md:\n- [x] Set up project structure\n- [x] Implement core API\n- [x] Add authentication\n- [x] Write tests\n\nNo pending tasks remain.\n\nSYSTEM_CONVERGED\n
blocked state
I cannot proceed with the \"Deploy to production\" task because:\n- Missing AWS credentials\n- Need confirmation on region selection\n\nPlease provide credentials and confirm deployment region.\n\nSYSTEM_BLOCKED\n
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#why-ctx-and-loops-work-well-together","level":2,"title":"Why ctx and Loops Work Well Together","text":"Without ctx With ctx Each iteration starts fresh Each iteration has full history Decisions get re-made Decisions persist in DECISIONS.md Learnings are lost Learnings accumulate in LEARNINGS.md Tasks can be forgotten Tasks tracked in TASKS.md","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#automatic-context-updates","level":3,"title":"Automatic Context Updates","text":"
During the loop, the AI should update context files and end every response with a status signal:
End EVERY response with one of:\n- SYSTEM_CONVERGED (if all tasks done)\n- SYSTEM_BLOCKED (if stuck)\n
","path":["Operations","Autonomous Loops"],"tags":[]},{"location":"operations/autonomous-loop/#context-not-persisting","level":3,"title":"Context Not Persisting","text":"
Cause: AI not updating context files
Fix: Add explicit instructions to .context/loop.md:
After completing a task, you MUST:\n1. Run: ctx task complete \"<task>\"\n2. Add learnings: ctx add learning \"...\"\n
Cause: Task not marked complete before next iteration
Fix: Ensure commit happens after context update:
Order of operations:\n1. Complete coding work\n2. Update context files (*`ctx task complete`, `ctx add`*)\n3. Commit **ALL** changes including `.context/`\n4. Then signal status\n
# From the ctx repository\nclaude /plugin install ./internal/assets/claude\n\n# Or from the marketplace\nclaude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
Ensure the Plugin Is Enabled
Installing a plugin registers it, but local installs may not auto-enable it globally. Verify ~/.claude/settings.json contains:
Without this, the plugin's hooks and skills won't appear in other projects. Running ctx init auto-enables the plugin; use --no-plugin-enable to skip this step.
This gives you:
Component Purpose .context/ All context files CLAUDE.md Bootstrap instructions Plugin hooks Lifecycle automation Plugin skills Agent Skills","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#how-it-works","level":3,"title":"How It Works","text":"
graph TD\n A[Session Start] --> B[Claude reads CLAUDE.md]\n B --> C[PreToolUse hook runs]\n C --> D[ctx agent loads context]\n D --> E[Work happens]\n E --> F[Session End]
Session start: Claude reads CLAUDE.md, which tells it to check .context/
First tool use: PreToolUse hook runs ctx agent and emits the context packet (subsequent invocations within the cooldown window are silent)
Next session: Claude reads context files and continues with context
The ctx plugin provides lifecycle hooks implemented as Go subcommands (ctx system *):
Hook Event Purpose ctx system context-load-gate PreToolUse (.*) Auto-inject context on first tool use ctx system block-non-path-ctx PreToolUse (Bash) Block ./ctx or go run: force $PATH install ctx system qa-reminder PreToolUse (Bash) Remind agent to lint/test before committing ctx system specs-nudge PreToolUse (EnterPlanMode) Nudge agent to use project specs when planning ctx system check-context-size UserPromptSubmit Nudge context assessment as sessions grow ctx system check-ceremonies UserPromptSubmit Nudge /ctx-remember and /ctx-wrap-up adoption ctx system check-persistence UserPromptSubmit Remind to persist learnings/decisions ctx system check-journal UserPromptSubmit Remind to export/enrich journal entries ctx system check-reminders UserPromptSubmit Relay pending reminders at session start ctx system check-version UserPromptSubmit Warn when binary/plugin versions diverge ctx system check-resources UserPromptSubmit Warn when memory/swap/disk/load hit DANGER level ctx system check-knowledge UserPromptSubmit Nudge when knowledge files grow large ctx system check-map-staleness UserPromptSubmit Nudge when ARCHITECTURE.md is stale ctx system heartbeat UserPromptSubmit Session-alive signal with prompt count metadata ctx system post-commit PostToolUse (Bash) Nudge context capture and QA after git commits
A catch-all PreToolUse hook also runs ctx agent on every tool use (with cooldown) to autoload context.
The --session $PPID flag isolates the cooldown per session: $PPID resolves to the Claude Code process PID, so concurrent sessions don't interfere. The default cooldown is 10 minutes; use --cooldown 0 to disable it.
When developing ctx locally (adding skills, hooks, or changing plugin behavior), Claude Code caches the plugin by version. You must bump the version in both files and update the marketplace for changes to take effect:
Start a new Claude Code session: skill changes aren't reflected in existing sessions.
Both Version Files Must Match
If you only bump plugin.json but not marketplace.json (or vice versa), Claude Code may not detect the update. Always bump both together.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#troubleshooting","level":3,"title":"Troubleshooting","text":"Issue Solution Context not loading Check ctx is in PATH: which ctx Hook errors Verify plugin is installed: claude /plugin list New skill not visible Bump version in both plugin.json files, update marketplace","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#manual-context-load","level":3,"title":"Manual Context Load","text":"
If hooks aren't working, manually load context:
# Get context packet\nctx agent --budget 4000\n\n# Or paste into conversation\ncat .context/TASKS.md\n
The ctx plugin ships Agent Skills following the agentskills.io specification.
These are invoked in Claude Code with /skill-name.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#session-lifecycle-skills","level":4,"title":"Session Lifecycle Skills","text":"Skill Description /ctx-remember Recall project context at session start (ceremony) /ctx-wrap-up End-of-session context persistence (ceremony) /ctx-status Show context summary (tasks, decisions, learnings) /ctx-agent Get AI-optimized context packet /ctx-next Suggest 1-3 concrete next actions from context /ctx-commit Commit with integrated context capture /ctx-reflect Review session and suggest what to persist /ctx-remind Manage session-scoped reminders /ctx-pause Pause context hooks for this session /ctx-resume Resume context hooks after a pause","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#context-persistence-skills","level":4,"title":"Context Persistence Skills","text":"Skill Description /ctx-add-task Add a task to TASKS.md /ctx-add-learning Add a learning to LEARNINGS.md /ctx-add-decision Add a decision with context/rationale/consequence /ctx-add-convention Add a coding convention to CONVENTIONS.md /ctx-archive Archive completed tasks","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#scratchpad-skills","level":4,"title":"Scratchpad Skills","text":"Skill Description /ctx-pad Manage encrypted scratchpad entries","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#session-history-skills","level":4,"title":"Session History Skills","text":"Skill Description /ctx-history Browse AI session history /ctx-journal-enrich Enrich a journal entry with frontmatter/tags /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#blogging-skills","level":4,"title":"Blogging Skills","text":"
Blogging is a Better Way of Creating Release Notes
The blogging workflow can also double as generating release notes:
AI reads your git commit history and creates a \"narrative\", which is essentially what a release note is.
Skill Description /ctx-blog Generate blog post from recent activity /ctx-blog-changelog Generate blog post from commit range with theme","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#auditing-health-skills","level":4,"title":"Auditing & Health Skills","text":"Skill Description /ctx-doctor Troubleshoot ctx behavior with structural health checks /ctx-drift Detect and fix context drift (structural + semantic) /ctx-consolidate Merge redundant learnings or decisions into denser entries /ctx-alignment-audit Audit doc claims against playbook instructions /ctx-prompt-audit Analyze session logs for vague prompts /check-links Audit docs for dead internal and external links","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#planning-execution-skills","level":4,"title":"Planning & Execution Skills","text":"Skill Description /ctx-loop Generate a Ralph Loop iteration script /ctx-implement Execute a plan step-by-step with checks /ctx-import-plans Import Claude Code plan files into project specs /ctx-worktree Manage git worktrees for parallel agents /ctx-architecture Build and maintain architecture maps","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#usage-examples","level":4,"title":"Usage Examples","text":"
// Split to multiple lines for readability\n{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and \n .context/CONVENTIONS.md before responding. \n Follow rules in .context/CONSTITUTION.md.\"\n}\n
The --write flag creates .github/copilot-instructions.md, which Copilot reads automatically at the start of every session. This file contains your project's constitution rules, current tasks, conventions, and architecture: giving Copilot persistent context without manual copy-paste.
Re-run ctx setup copilot --write after updating your .context/ files to regenerate the instructions.
The ctx VS Code extension adds a @ctx chat participant to GitHub Copilot Chat, giving you direct access to all context commands from within the editor.
Typing @ctx without a command shows help with all available commands. The extension also supports natural language: asking @ctx about \"status\" or \"drift\" routes to the correct command automatically.
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#configuration_2","level":4,"title":"Configuration","text":"Setting Default Description ctx.executablePathctx Path to the ctx binary. Set this if ctx is not in your PATH.","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#follow-up-suggestions","level":4,"title":"Follow-Up Suggestions","text":"
After each command, the extension suggests relevant next steps. For example, after /init it suggests /status and /hook; after /drift it suggests /sync.
ctx init creates a .context/sessions/ directory for storing session data from non-Claude tools. The Markdown session parser scans this directory during ctx journal, enabling session history for Copilot and other tools.
These patterns work without the extension, using Copilot's built-in file awareness:
Pattern 1: Keep context files open
Open .context/CONVENTIONS.md in a split pane. Copilot will reference it.
Pattern 2: Reference in comments
// See .context/CONVENTIONS.md for naming patterns\n// Following decision in .context/DECISIONS.md: Use PostgreSQL\n\nfunction getUserById(id: string) {\n // Copilot now has context\n}\n
Pattern 3: Paste context into Copilot Chat
ctx agent --budget 2000\n
Paste output into Copilot Chat for context-aware responses.
// Split to multiple lines for readability\n{\n \"ai.customInstructions\": \"Always read .context/CONSTITUTION.md first. \n Check .context/TASKS.md for current work. \n Follow patterns in .context/CONVENTIONS.md.\"\n}\n
You are working on a project with persistent context in .context/\n\nBefore responding:\n1. Read .context/CONSTITUTION.md - NEVER violate these rules\n2. Check .context/TASKS.md for current work\n3. Follow .context/CONVENTIONS.md patterns\n4. Reference .context/DECISIONS.md for architectural choices\n\nWhen you learn something new, note it for .context/LEARNINGS.md\nWhen you make a decision, document it for .context/DECISIONS.md\n
<context-update type=\"task\">Implement rate limiting</context-update>\n<context-update type=\"convention\">Use kebab-case for files</context-update>\n<context-update type=\"complete\">rate limiting</context-update>\n
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/integrations/#structured-format-learnings-decisions","level":3,"title":"Structured Format (learnings, decisions)","text":"
Learnings and decisions support structured attributes for better documentation:
Learning with full structure:
<context-update type=\"learning\"\n context=\"Debugging Claude Code hooks\"\n lesson=\"Hooks receive JSON via stdin, not environment variables\"\n application=\"Parse JSON stdin with the host language (Go, Python, etc.): no jq needed\"\n>Hook Input Format</context-update>\n
Decision with full structure:
<context-update type=\"decision\"\n context=\"Need a caching layer for API responses\"\n rationale=\"Redis is fast, well-supported, and team has experience\"\n consequence=\"Must provision Redis infrastructure; team training on Redis patterns\"\n>Use Redis for caching</context-update>\n
Learnings require: context, lesson, application attributes. Decisions require: context, rationale, consequence attributes. Updates missing required attributes are rejected with an error.
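The required-attribute rule can be sketched as a quick shell check (a hypothetical validator for illustration; ctx's real parser may differ):

```shell
update='<context-update type="learning" context="Debugging" lesson="Use stdin" application="Parse JSON">Hook Input</context-update>'

# A learning must carry context, lesson, and application attributes
for attr in context lesson application; do
  echo "$update" | grep -q " $attr=\"" || { echo "rejected: missing $attr"; exit 1; }
done
echo "accepted"
```

Dropping any one of the three attributes makes the check fail, mirroring the rejection behavior described above.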
Skills That Fight the Platform: Common pitfalls in skill design that work against the host tool
The Anatomy of a Skill That Works: What makes a skill reliable: the E/A/R framework and quality gates
","path":["Operations","AI Tools"],"tags":[]},{"location":"operations/migration/","level":1,"title":"Integration","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#adopting-ctx-in-existing-projects","level":2,"title":"Adopting ctx in Existing Projects","text":"
Claude Code User?
You probably want the plugin instead of this page.
Install ctx from the marketplace (/plugin → search \"ctx\" → Install) and you're done: hooks, skills, and updates are handled for you.
See Getting Started for the full walkthrough.
This guide covers adopting ctx in existing projects regardless of which tools your team uses.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#quick-paths","level":2,"title":"Quick Paths","text":"You have... Command What happens Nothing (greenfield) ctx init Creates .context/, CLAUDE.md, permissions Existing CLAUDE.mdctx init --merge Backs up your file, inserts ctx block after the H1 Existing CLAUDE.md + ctx markers ctx init --force Replaces the ctx block, leaves your content intact .cursorrules / .aider.conf.ymlctx initctx ignores those files: they coexist cleanly Team repo, first adopter ctx init --merge && git add .context/ CLAUDE.md Initialize and commit for the team","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#existing-claudemd","level":2,"title":"Existing CLAUDE.md","text":"
This is the most common scenario:
You have a CLAUDE.md with project-specific instructions and don't want to lose them.
You Own CLAUDE.md
After initialization, CLAUDE.md is yours: edit it freely.
Add project instructions, remove sections you don't need, reorganize as you see fit.
The only part ctx manages is the block between the <!-- ctx:context --> and <!-- ctx:end --> markers; everything outside those markers is yours to change at any time.
If you remove the markers, nothing breaks: ctx simply treats the file as having no ctx content and will offer to merge again on the next ctx init.
When ctx init detects an existing CLAUDE.md, it checks for ctx markers (<!-- ctx:context --> ... <!-- ctx:end -->):
State Default behavior With --merge With --force No CLAUDE.md Creates from template Creates from template Creates from template Exists, no ctx markers Prompts to merge Auto-merges (no prompt) Auto-merges (no prompt) Exists, has ctx markers Skips (already set up) Skips Replaces the ctx block only","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#the-merge-flag","level":3,"title":"The --merge Flag","text":"
--merge auto-merges without prompting. The merge process:
Backs up your existing CLAUDE.md to CLAUDE.md.<timestamp>.bak;
Finds the H1 heading (e.g., # My Project) in your file;
Inserts the ctx block immediately after it;
Preserves everything else untouched.
Your content before and after the ctx block remains exactly as it was.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#before-after-example","level":3,"title":"Before / After Example","text":"
Before: your existing CLAUDE.md:
# My Project\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
After ctx init --merge:
# My Project\n\n<!-- ctx:context -->\n<!-- DO NOT REMOVE: This marker indicates ctx-managed content -->\n\n## IMPORTANT: You Have Persistent Memory\n\nThis project uses Context (`ctx`) for context persistence across sessions.\n...\n\n<!-- ctx:end -->\n\n## Build Commands\n\n- `npm run build`: production build\n- `npm test`: run tests\n\n## Code Style\n\n- Use TypeScript strict mode\n- Prefer named exports\n
Your build commands and code style sections are untouched. The ctx block sits between markers and can be updated independently.
If your CLAUDE.md already has ctx markers (from a previous ctx init), the default behavior is to skip it. Use --force to replace the ctx block with the latest template, which is useful after upgrading ctx:
ctx init --force\n
This only replaces content between <!-- ctx:context --> and <!-- ctx:end -->. Your own content outside the markers is preserved. A timestamped backup is created before any changes.
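The marker mechanics can be illustrated with plain sed (a simplified sketch of the idea; ctx's actual implementation may differ):

```shell
cat > /tmp/claude_demo.md <<'EOF'
# My Project
<!-- ctx:context -->
old ctx block
<!-- ctx:end -->
## Build Commands
EOF

# Delete everything between the markers (keeping the markers themselves),
# then append the fresh block after the opening marker.
sed -i '/<!-- ctx:context -->/,/<!-- ctx:end -->/{//!d}' /tmp/claude_demo.md
sed -i '/<!-- ctx:context -->/a updated ctx block' /tmp/claude_demo.md

grep -q "updated ctx block" /tmp/claude_demo.md \
  && grep -q "## Build Commands" /tmp/claude_demo.md \
  && echo "only the ctx block changed"
```

Everything outside the marker pair, like the Build Commands section, is never touched.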
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#undoing-a-merge","level":3,"title":"Undoing a Merge","text":"
To undo a merge, restore the timestamped backup: ctx init --merge writes your original file to CLAUDE.md.<timestamp>.bak before changing anything, so copying it back over CLAUDE.md reverts the merge. Beyond CLAUDE.md, ctx doesn't touch tool-specific config files. It creates its own files (.context/, CLAUDE.md) and coexists with whatever you already have.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-does-ctx-create","level":3,"title":"What Does ctx Create?","text":"ctx creates ctx does NOT touch .context/ directory .cursorrulesCLAUDE.md (or merges into) .aider.conf.yml.claude/settings.local.json (seeded by ctx init; the plugin manages hooks and skills) .github/copilot-instructions.md.windsurfrules Any other tool-specific config
Claude Code hooks and skills are provided by the ctx plugin, installed from the Claude Code marketplace (/plugin → search \"ctx\" → Install).
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#running-ctx-alongside-other-tools","level":3,"title":"Running ctx Alongside Other Tools","text":"
The .context/ directory is the source of truth. Tool-specific configs point to it:
Cursor: Reference .context/ files in your system prompt (see Cursor setup)
Aider: Add .context/ files to the read: list in .aider.conf.yml (see Aider setup)
Copilot: Keep .context/ files open or reference them in comments (see Copilot setup)
You can generate a tool-specific configuration with ctx setup <tool>: for example, ctx setup copilot --write.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#migrating-content-into-context","level":3,"title":"Migrating Content Into .context/","text":"
If you have project knowledge scattered across .cursorrules or custom prompt files, consider migrating it:
Rules / invariants → .context/CONSTITUTION.md
Code patterns → .context/CONVENTIONS.md
Architecture notes → .context/ARCHITECTURE.md
Known issues / tips → .context/LEARNINGS.md
You don't need to delete the originals: ctx and tool-specific files can coexist. But centralizing in .context/ means every tool gets the same context.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#team-adoption","level":2,"title":"Team Adoption","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#context-is-designed-to-be-committed","level":3,"title":".context/ Is Designed to Be Committed","text":"
The context files (tasks, decisions, learnings, conventions, architecture) are meant to live in version control. However, some subdirectories are personal or sensitive and should not be committed.
ctx init automatically adds these .gitignore entries:
# Journals contain full session transcripts: personal, potentially large\n.context/journal/\n.context/journal-site/\n.context/journal-obsidian/\n\n# Legacy encryption key path (copy to ~/.ctx/.ctx.key if needed)\n.context/.ctx.key\n\n# Runtime state and logs (ephemeral, machine-specific):\n.context/state/\n.context/logs/\n\n# Claude Code local settings (machine-specific)\n.claude/settings.local.json\n
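You can confirm the entries took effect with git check-ignore (a quick check in a scratch repo; assumes git is installed):

```shell
# Reproduce the relevant .gitignore entries in a throwaway repository
cd "$(mktemp -d)"
git init -q .
printf '.context/journal/\n.context/state/\n' > .gitignore
mkdir -p .context/journal
touch .context/journal/session.md

# Exit status 0 means the path is ignored
git check-ignore -q .context/journal/session.md && echo "ignored as expected"
```

If check-ignore exits nonzero, the pattern is not matching and the journal would end up committed.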
With those in place, committing is straightforward:
# One person initializes\nctx init --merge\n\n# Commit context files (journals and keys are already gitignored)\ngit add .context/ CLAUDE.md\ngit commit -m \"Add ctx context management\"\ngit push\n
Teammates pull and immediately have context. No per-developer setup needed.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#what-about-claude","level":3,"title":"What About .claude/?","text":"
The .claude/ directory contains permissions that ctx init seeds. Hooks and skills are provided by the ctx plugin (not per-project files).
Context files are plain Markdown. Resolve conflicts the same way you would for any other documentation file:
# After a conflicting pull\ngit diff .context/TASKS.md # See both sides\n# Edit to keep both sets of tasks, then:\ngit add .context/TASKS.md\ngit commit\n
Common conflict scenarios:
TASKS.md: Two people added tasks; keep both.
DECISIONS.md: The same decision was recorded differently; unify the entry.
CLAUDE.md instructions work immediately for Claude Code users;
Other tool users can adopt at their own pace using ctx setup <tool>;
Context files benefit everyone who reads them, even without tool integration.
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verifying-it-worked","level":2,"title":"Verifying It Worked","text":"","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#check-status","level":3,"title":"Check Status","text":"
ctx status\n
You should see your context files listed with token counts and no warnings.
Start a new AI session and ask: \"Do you remember?\"
The AI should cite specific context:
Current tasks from .context/TASKS.md;
Recent decisions or learnings;
Session history (if you've had prior sessions);
If it responds with a generic "I don't have memory" disclaimer, check that ctx is in your PATH (which ctx) and that hooks are configured (see Troubleshooting).
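A quick way to check both conditions at once before digging into hook configuration (plain POSIX shell; nothing here is ctx-specific):

```shell
# Diagnose a "no memory" reply: is the binary reachable, and does
# this project actually have context files?
CTX_BIN="$(command -v ctx || echo "NOT in PATH")"
echo "ctx binary: $CTX_BIN"
[ -d .context ] && echo ".context/: present" || echo ".context/: missing"
```

If either check fails, fix that first; hook configuration only matters once the binary and the .context/ directory are in place.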
","path":["Operations","Integration"],"tags":[]},{"location":"operations/migration/#verify-the-merge","level":3,"title":"Verify the Merge","text":"
If you used --merge, check that your original content is intact:
# Your original content should still be there\ncat CLAUDE.md\n\n# The ctx block should be between markers\ngrep -c \"ctx:context\" CLAUDE.md # Should print 1\ngrep -c \"ctx:end\" CLAUDE.md # Should print 1\n
","path":["Operations","Integration"],"tags":[]},{"location":"operations/release/","level":1,"title":"Cutting a Release","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#prerequisites","level":2,"title":"Prerequisites","text":"
Before you can cut a release you need:
Push access to origin (GitHub)
GPG signing configured (make gpg-test)
Go installed (version in go.mod)
Zensical installed (make site-setup)
A clean working tree (git status shows nothing to commit)
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#step-by-step","level":2,"title":"Step-by-Step","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#1-update-the-version-file","level":3,"title":"1. Update the VERSION File","text":"
echo \"0.9.0\" > VERSION\ngit add VERSION\ngit commit -m \"chore: bump version to 0.9.0\"\n
The VERSION file uses bare semver (0.9.0), no v prefix. The release script adds the v prefix for git tags.
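The mapping from file to tag is mechanical; a minimal sketch of what the release script does with it (the actual script performs more validation):

```shell
# VERSION holds bare semver; the git tag gets the v prefix.
VERSION="0.9.0"      # in the repo: VERSION=$(cat VERSION)
TAG="v${VERSION}"
echo "$TAG"          # → v0.9.0
```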
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#2-generate-release-notes","level":3,"title":"2. Generate Release Notes","text":"
In Claude Code:
/_ctx-release-notes\n
This analyzes commits since the last tag and writes dist/RELEASE_NOTES.md. The release script refuses to proceed without this file.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#3-verify-docs-and-commit-any-remaining-changes","level":3,"title":"3. Verify Docs and Commit Any Remaining Changes","text":"
/ctx-check-links # audit docs for dead links\nmake audit # full check: fmt, vet, lint, style, test\ngit status # must be clean\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#4-run-the-release","level":3,"title":"4. Run the Release","text":"
make release\n
Or, if you are in a Claude Code session:
/_ctx-release\n
The release script does everything in order:
| Step | What happens |
| --- | --- |
| 1 | Reads VERSION, verifies release notes exist |
| 2 | Verifies working tree is clean |
| 3 | Updates version in 4 config files (plugin.json, marketplace.json, VS Code package.json + lock) |
| 4 | Updates download URLs in 3 doc files (index.md, getting-started.md, integrations.md) |
| 5 | Adds new row to versions.md |
| 6 | Rebuilds the documentation site (make site) |
| 7 | Commits all version and docs updates |
| 8 | Runs make test and make smoke |
| 9 | Builds binaries for all 6 platforms via hack/build-all.sh |
| 10 | Creates a signed git tag (v0.9.0) |
| 11 | Pushes the tag to origin |
| 12 | Updates and pushes the latest tag |
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#5-github-ci-takes-over","level":3,"title":"5. GitHub CI Takes Over","text":"
Pushing a v* tag triggers .github/workflows/release.yml:
Checks out the tagged commit
Runs the full test suite
Builds binaries for all platforms
Creates a GitHub Release with auto-generated notes
Uploads binaries and SHA256 checksums
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#6-verify","level":3,"title":"6. Verify","text":"
GitHub Releases shows the new version
All 6 binaries are attached (linux/darwin x amd64/arm64, windows x amd64)
SHA256 files are attached
Release notes look correct
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#what-gets-updated-automatically","level":2,"title":"What Gets Updated Automatically","text":"
The release script updates 8 files so you do not have to:
| File | What changes |
| --- | --- |
| internal/assets/claude/.claude-plugin/plugin.json | Plugin version |
| .claude-plugin/marketplace.json | Marketplace version (2 fields) |
| editors/vscode/package.json | VS Code extension version |
| editors/vscode/package-lock.json | VS Code lock version (2 fields) |
| docs/index.md | Download URLs |
| docs/home/getting-started.md | Download URLs |
| docs/operations/integrations.md | VSIX filename version |
| docs/reference/versions.md | New version row + latest pointer |
The Go binary version is injected at build time via -ldflags from the VERSION file. No source file needs editing.
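For illustration, linker-based version injection in Go looks roughly like this; the symbol path (main.version) is an assumption here, not necessarily the one ctx's Makefile uses:

```shell
# Sketch of -ldflags version injection; main.version is illustrative.
VERSION="0.9.0"                         # real build: VERSION=$(cat VERSION)
LDFLAGS="-X main.version=${VERSION}"
echo go build -ldflags \"$LDFLAGS\" -o ctx .
```

Because the value is injected at link time, bumping VERSION and rebuilding is enough; no Go source file carries a hardcoded version string.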
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#build-targets-reference","level":2,"title":"Build Targets Reference","text":"
| Target | What it does |
| --- | --- |
| make release | Full release (script + tag + push) |
| make build | Build binary for current platform |
| make build-all | Build all 6 platform binaries |
| make test | Unit tests |
| make smoke | Integration smoke tests |
| make audit | Full check (fmt + vet + lint + drift + docs + test) |
| make site | Rebuild documentation site |
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#troubleshooting","level":2,"title":"Troubleshooting","text":"","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#release-notes-not-found","level":3,"title":"\"Release notes not found\"","text":"
ERROR: dist/RELEASE_NOTES.md not found.\n
Run /_ctx-release-notes in Claude Code first, or write dist/RELEASE_NOTES.md manually.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#working-tree-is-not-clean","level":3,"title":"\"Working tree is not clean\"","text":"
ERROR: Working tree is not clean.\n
Commit or stash all changes before running make release.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#tag-already-exists","level":3,"title":"\"Tag already exists\"","text":"
ERROR: Tag v0.9.0 already exists.\n
You cannot release the same version twice. Either bump VERSION to a new version, or delete the old tag if the release was incomplete:
git tag -d v0.9.0\ngit push origin :refs/tags/v0.9.0\n
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/release/#ci-build-fails-after-tag-push","level":3,"title":"CI build fails after tag push","text":"
The tag is already published. Fix the issue, bump to a patch version (e.g. 0.9.1), and release again. Do not force-push tags that others may have already fetched.
","path":["Operations","Cutting a Release"],"tags":[]},{"location":"operations/upgrading/","level":1,"title":"Upgrade","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade","level":2,"title":"Upgrade","text":"
New versions of ctx may ship updated permissions, CLAUDE.md directives, or plugin hooks and skills.
Claude Code User?
The marketplace can update skills, hooks, and prompts independently: /plugin → select ctx → Update now (or enable auto-update).
The ctx binary is separate: rebuild from source or download a new release when one is available, then run ctx init --force --merge. Knowledge files are preserved automatically.
# Plugin users (Claude Code)\n# /plugin → select ctx → Update now\n# Then update the binary and reinitialize:\nctx init --force --merge\n\n# From-source / manual users\n# install new ctx binary, then:\nctx init --force --merge\n# /plugin → select ctx → Update now (if using Claude Code)\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-changes-between-versions","level":2,"title":"What Changes Between Versions","text":"
ctx init generates two categories of files:
| Category | Examples | Changes between versions? |
| --- | --- | --- |
| Infrastructure | .claude/settings.local.json (permissions), ctx-managed sections in CLAUDE.md, ctx plugin (hooks + skills) | Yes |
| Knowledge | .context/TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md, ARCHITECTURE.md, GLOSSARY.md, CONSTITUTION.md, AGENT_PLAYBOOK.md | No: this is your data |
Infrastructure is regenerated by ctx init and plugin updates. Knowledge files are yours and should never be overwritten.
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#upgrade-steps","level":2,"title":"Upgrade Steps","text":"","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#1-install-the-new-version","level":3,"title":"1. Install the New Version","text":"
Build from source or download the binary:
cd /path/to/ctx-source\ngit pull\nmake build\nsudo make install\nctx --version # verify\n
--force regenerates infrastructure files (permissions, ctx-managed sections in CLAUDE.md).
--merge preserves your content outside ctx markers.
Knowledge files (.context/TASKS.md, DECISIONS.md, etc.) are preserved automatically: ctx init only overwrites infrastructure, never your data.
Encryption key: The encryption key lives at ~/.ctx/.ctx.key (outside the project). Reinit does not affect it. If you have a legacy key at .context/.ctx.key or ~/.local/ctx/keys/, copy it manually (see Syncing Scratchpad Notes).
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#3-update-the-ctx-plugin","level":3,"title":"3. Update the ctx Plugin","text":"
If you use Claude Code, update the plugin to get new hooks and skills:
Open /plugin in Claude Code.
Select ctx.
Click Update now.
Or enable auto-update so the plugin stays current without manual steps.
If you made manual backups, remove them once satisfied:
rm -rf .context.bak .claude.bak CLAUDE.md.bak\n
","path":["Operations","Upgrade"],"tags":[]},{"location":"operations/upgrading/#what-if-i-skip-the-upgrade","level":2,"title":"What If I Skip the Upgrade?","text":"
The old binary still works with your existing .context/ files. But you may miss:
New plugin hooks that enforce better practices or catch mistakes;
Updated skill prompts that produce better results;
New .gitignore entries for directories added in newer versions;
Bug fixes in the CLI itself.
The plugin and the binary can be updated independently. You can update the plugin (for new hooks/skills) even if you stay on an older binary, and vice versa.
Context files are plain Markdown: They never break between versions.
Workflow recipes combining ctx commands and skills to solve specific problems.
","path":["Recipes"],"tags":[]},{"location":"recipes/#getting-started","level":2,"title":"Getting Started","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#guide-your-agent","level":3,"title":"Guide Your Agent","text":"
How commands, skills, and conversational patterns work together. Train your agent to be proactive through ask, guide, reinforce.
","path":["Recipes"],"tags":[]},{"location":"recipes/#setup-across-ai-tools","level":3,"title":"Setup Across AI Tools","text":"
Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf. Includes shell completion, watch mode for non-native tools, and verification.
","path":["Recipes"],"tags":[]},{"location":"recipes/#keeping-context-in-a-separate-repo","level":3,"title":"Keeping Context in a Separate Repo","text":"
Store context files outside the project tree: in a private repo, shared directory, or anywhere else. Useful for open source projects with private context or multi-repo setups.
The two bookend rituals for every session: /ctx-remember at the start to load and confirm context, /ctx-wrap-up at the end to review the session and persist learnings, decisions, and tasks.
","path":["Recipes"],"tags":[]},{"location":"recipes/#browsing-and-enriching-past-sessions","level":3,"title":"Browsing and Enriching Past Sessions","text":"
Export your AI session history to a browsable journal site. Enrich entries with metadata and search across months of work.
Leave a message for your next session. Reminders surface automatically at session start and repeat until dismissed. Date-gate reminders to surface only after a specific date.
Silence all nudge hooks for a quick task that doesn't need ceremony overhead. Session-scoped: Other sessions are unaffected. Security hooks still fire.
","path":["Recipes"],"tags":[]},{"location":"recipes/#knowledge-tasks","level":2,"title":"Knowledge & Tasks","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#persisting-decisions-learnings-and-conventions","level":3,"title":"Persisting Decisions, Learnings, and Conventions","text":"
Record architectural decisions with rationale, capture gotchas and lessons learned, and codify conventions so they survive across sessions and team members.
","path":["Recipes"],"tags":[]},{"location":"recipes/#using-the-scratchpad","level":3,"title":"Using the Scratchpad","text":"
Use the encrypted scratchpad for quick notes, working memory, and sensitive values during AI sessions. Natural language in, encrypted storage out.
Uses: ctx pad, /ctx-pad, ctx pad show, ctx pad edit
","path":["Recipes"],"tags":[]},{"location":"recipes/#syncing-scratchpad-notes-across-machines","level":3,"title":"Syncing Scratchpad Notes Across Machines","text":"
Distribute your scratchpad encryption key, push and pull encrypted notes via git, and resolve merge conflicts when two machines edit simultaneously.
Uses: ctx init, ctx pad, ctx pad resolve, scp
","path":["Recipes"],"tags":[]},{"location":"recipes/#bridging-claude-code-auto-memory","level":3,"title":"Bridging Claude Code Auto Memory","text":"
Mirror Claude Code's auto memory (MEMORY.md) into .context/ for version control, portability, and drift detection. Import entries into structured context files with heuristic classification.
Choose the right output pattern for your Claude Code hooks: VERBATIM relay for user-facing reminders, hard gates for invariants, agent directives for nudges, and five more patterns across the spectrum.
Customize what hooks say without changing what they do. Override the QA gate for Python (pytest instead of make lint), silence noisy ceremony nudges, or tailor post-commit instructions for your stack.
Uses: ctx system message list, ctx system message show, ctx system message edit, ctx system message reset
Mermaid sequence diagrams for every system hook: entry conditions, state reads, output, throttling, and exit points. Includes throttling summary table and state file reference.
Uses: All ctx system hooks
","path":["Recipes"],"tags":[]},{"location":"recipes/#auditing-system-hooks","level":3,"title":"Auditing System Hooks","text":"
The 12 system hooks that run invisibly during every session: what each one does, why it exists, and how to verify they're actually firing. Covers webhook-based audit trails, log inspection, and detecting silent hook failures.
Get push notifications when loops complete, hooks fire, or agents hit milestones. Webhook URL is encrypted: never stored in plaintext. Works with IFTTT, Slack, Discord, ntfy.sh, or any HTTP endpoint.
","path":["Recipes"],"tags":[]},{"location":"recipes/#maintenance","level":2,"title":"Maintenance","text":"","path":["Recipes"],"tags":[]},{"location":"recipes/#detecting-and-fixing-drift","level":3,"title":"Detecting and Fixing Drift","text":"
Keep context files accurate by detecting structural drift (stale paths, missing files, stale file ages) and task staleness.
Diagnose hook failures, noisy nudges, stale context, and configuration issues. Start with ctx doctor for a structural health check, then use /ctx-doctor for agent-driven analysis of event patterns.
Keep .claude/settings.local.json clean: recommended safe defaults, what to never pre-approve, and a maintenance workflow for cleaning up session debris.
","path":["Recipes"],"tags":[]},{"location":"recipes/#importing-claude-code-plans","level":3,"title":"Importing Claude Code Plans","text":"
Import Claude Code plan files (~/.claude/plans/*.md) into specs/ as permanent project specs. Filter by date, select interactively, and optionally create tasks referencing each imported spec.
Uses: /ctx-import-plans, /ctx-add-task
","path":["Recipes"],"tags":[]},{"location":"recipes/#design-before-coding","level":3,"title":"Design Before Coding","text":"
Front-load design with a four-skill chain: brainstorm the approach, spec the design, task the work, implement step-by-step. Each step produces an artifact that feeds the next.
Encode repeating workflows into reusable skills the agent loads automatically. Covers the full cycle: identify a pattern, create the skill, test with realistic prompts, and iterate until it triggers correctly.
Uses: /ctx-skill-creator, ctx init
","path":["Recipes"],"tags":[]},{"location":"recipes/#running-an-unattended-ai-agent","level":3,"title":"Running an Unattended AI Agent","text":"
Set up a loop where an AI agent works through tasks overnight without you at the keyboard, using ctx for persistent memory between iterations.
This recipe shows how ctx supports long-running agent loops without losing context or intent.
","path":["Recipes"],"tags":[]},{"location":"recipes/#when-to-use-a-team-of-agents","level":3,"title":"When to Use a Team of Agents","text":"
Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
This recipe covers the file overlap test, when teams make things worse, and what ctx provides at each level.
Uses: /ctx-worktree, /ctx-next, ctx status
","path":["Recipes"],"tags":[]},{"location":"recipes/#parallel-agent-development-with-git-worktrees","level":3,"title":"Parallel Agent Development with Git Worktrees","text":"
Split a large backlog across 3-4 agents using git worktrees, each on its own branch and working directory. Group tasks by file overlap, work in parallel, merge back.
Map your project's internal and external dependency structure. Auto-detects Go, Node.js, Python, and Rust. Output as Mermaid, table, or JSON.
Uses: ctx dep, ctx drift
","path":["Recipes"],"tags":[]},{"location":"recipes/autonomous-loops/","level":1,"title":"Running an Unattended AI Agent","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-problem","level":2,"title":"The Problem","text":"
You have a project with a clear list of tasks, and you want an AI agent to work through them autonomously: overnight, unattended, without you sitting at the keyboard.
Each iteration needs to remember what the previous one did, mark tasks as completed, and know when to stop.
Without persistent memory, every iteration starts fresh and the loop collapses. With ctx, each iteration can pick up where the last one left off, but only if the agent persists its context as part of the work.
Unattended operation works because the agent treats context persistence as a first-class deliverable, not an afterthought.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. init context\n# Edit TASKS.md with phased work items\nctx loop --tool claude --max-iterations 10 # 2. generate loop.sh\n./loop.sh 2>&1 | tee /tmp/loop.log & # 3. run the loop\nctx watch --log /tmp/loop.log # 4. process context updates\n# Next morning:\nctx status && ctx load # 5. review the results\n
Read on for permissions, isolation, and completion signals.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| ctx init | Command | Initialize project context and prompt templates |
| ctx loop | Command | Generate the loop shell script |
| ctx watch | Command | Monitor AI output and persist context updates |
| ctx load | Command | Display assembled context (for debugging) |
| /ctx-loop | Skill | Generate loop script from inside Claude Code |
| /ctx-implement | Skill | Execute a plan step-by-step with verification |
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-1-initialize-for-unattended-operation","level":3,"title":"Step 1: Initialize for Unattended Operation","text":"
Start by creating a .context/ directory configured so the agent can work without human input.
ctx init\n
This creates .context/ with the template files (including a loop prompt at .context/loop.md), and seeds Claude Code permissions in .claude/settings.local.json. Install the ctx plugin for hooks and skills.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-2-populate-tasksmd-with-phased-work","level":3,"title":"Step 2: Populate TASKS.md with Phased Work","text":"
Open .context/TASKS.md and organize your work into phases. The agent works through these systematically, top to bottom, using priority tags to break ties.
# Tasks\n\n## Phase 1: Foundation\n\n- [ ] Set up project structure and build system `#priority:high`\n- [ ] Configure testing framework `#priority:high`\n- [ ] Create CI pipeline `#priority:medium`\n\n## Phase 2: Core Features\n\n- [ ] Implement user registration `#priority:high`\n- [ ] Add email verification `#priority:high`\n- [ ] Create password reset flow `#priority:medium`\n\n## Phase 3: Hardening\n\n- [ ] Add rate limiting to API endpoints `#priority:medium`\n- [ ] Improve error messages `#priority:low`\n- [ ] Write integration tests `#priority:medium`\n
Phased organization matters because it gives the agent natural boundaries. Phase 1 tasks should be completable without Phase 2 code existing yet.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-3-configure-the-loop-prompt","level":3,"title":"Step 3: Configure the Loop Prompt","text":"
The loop prompt at .context/loop.md instructs the agent to operate autonomously:
Read .context/CONSTITUTION.md first (hard rules, never violated)
Load context from .context/ files
Pick one task per iteration
Complete the task and update context files
Commit changes (including .context/)
Signal status with a completion signal
You can customize .context/loop.md for your project. The critical parts are the one-task-per-iteration discipline, proactive context persistence, and completion signals at the end:
## Signal Status\n\nEnd your response with exactly ONE of:\n\n* `SYSTEM_CONVERGED`: All tasks in `TASKS.md` are complete (*this is the\n signal the loop script detects by default*)\n* `SYSTEM_BLOCKED`: Cannot proceed, need human input (explain why)\n* (*no signal*): More work remains, continue to the next iteration\n\nNote: the loop script only checks for `SYSTEM_CONVERGED` by default.\n`SYSTEM_BLOCKED` is a convention for the human reviewing the log.\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-4-configure-permissions","level":3,"title":"Step 4: Configure Permissions","text":"
An unattended agent needs permission to use tools without prompting. By default, Claude Code asks for confirmation on file writes, bash commands, and other operations, which stops the loop and waits for a human who is not there.
There are two approaches.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-a-explicit-allowlist-recommended","level":4,"title":"Option A: Explicit Allowlist (Recommended)","text":"
Grant only the permissions the agent needs. In .claude/settings.local.json:
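An illustrative shape for that allowlist, matching the make/go/git/ctx toolchain this guide uses (the entries here are assumptions; tailor them to your project):

```json
{
  "permissions": {
    "allow": [
      "Edit",
      "Write",
      "Bash(make:*)",
      "Bash(go:*)",
      "Bash(git:*)",
      "Bash(ctx:*)"
    ]
  }
}
```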
Adjust the Bash patterns for your project's toolchain. The agent can run make, go, git, and ctx commands but cannot run arbitrary shell commands.
This is recommended even in sandboxed environments because it limits blast radius.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#option-b-skip-all-permission-checks","level":4,"title":"Option B: Skip All Permission Checks","text":"
Claude Code supports a --dangerously-skip-permissions flag that disables all permission prompts:
claude --dangerously-skip-permissions -p \"$(cat .context/loop.md)\"\n
This Flag Means What It Says
With --dangerously-skip-permissions, the agent can execute any shell command, write to any file, and make network requests without confirmation.
Only use this on a sandboxed machine: ideally a virtual machine with no access to host credentials, no SSH keys, and no access to production systems.
If you would not give an untrusted intern sudo on this machine, do not use this flag.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#enforce-isolation-at-the-os-level","level":4,"title":"Enforce Isolation at the OS Level","text":"
The only controls an agent cannot override are the ones enforced by the operating system, the container runtime, or the hypervisor.
Do Not Skip This Section
This is not optional hardening:
An unattended agent with unrestricted OS access is an unattended shell with unrestricted OS access.
The allowlist above is a strong first layer, but do not rely on a single runtime boundary.
For unattended runs, enforce isolation at the infrastructure level:
| Layer | What to enforce |
| --- | --- |
| User account | Run the agent as a dedicated unprivileged user with no sudo access and no membership in privileged groups (docker, wheel, adm). |
| Filesystem | Restrict the project directory via POSIX permissions or ACLs. The agent should have no access to other users' files or system directories. |
| Container | Run inside a Docker/Podman sandbox. Mount only the project directory. Drop capabilities (--cap-drop=ALL). Disable network if not needed (--network=none). Never mount the Docker socket and do not run privileged containers. Prefer rootless containers. |
| Virtual machine | Prefer a dedicated VM with no shared folders, no host passthrough, and no keys to other machines. |
| Network | If the agent does not need the internet, disable outbound access entirely. If it does, restrict to specific domains via firewall rules. |
| Resource limits | Apply CPU, memory, and disk limits (cgroups/container limits). A runaway loop should not fill disk or consume all RAM. |
| Self-modification | Make instruction files read-only. CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md should not be writable by the agent user. If using project-local hooks, protect those too. |
Use multiple layers together: OS-level isolation (the boundary the agent cannot cross), a permission allowlist (what Claude Code will do within that boundary), and CONSTITUTION.md (a soft nudge for the common case).
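Putting the container layer into a concrete invocation, as a sketch only: the image name and mount layout are assumptions, and exact flags vary by runtime.

```shell
# Locked-down container run for an unattended agent (sketch).
# agent-sandbox:latest is a placeholder image containing your toolchain.
docker run --rm \
  --cap-drop=ALL \
  --network=none \
  --memory=4g --cpus=2 --pids-limit=256 \
  --user "$(id -u):$(id -g)" \
  -v "$PWD:/work" -w /work \
  agent-sandbox:latest ./loop.sh
```

Rootless Podman accepts the same flags; relax --network only if the agent genuinely needs outbound access, and then restrict it with firewall rules as described above.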
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-5-generate-the-loop-script","level":3,"title":"Step 5: Generate the Loop Script","text":"
Use ctx loop to generate a loop.sh tailored to your AI tool:
# Generate for Claude Code with a 10-iteration cap\nctx loop --tool claude --max-iterations 10\n\n# Generate for Aider\nctx loop --tool aider --max-iterations 10\n\n# Custom prompt file and output filename\nctx loop --tool claude --prompt my-prompt.md --output my-loop.sh\n
The generated script reads .context/loop.md, runs the tool, checks for completion signals, and loops until done or the cap is reached.
You can also use the /ctx-loop skill from inside Claude Code.
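Conceptually, the generated script has roughly this shape (a simplified sketch, not the actual file ctx loop emits, which adds logging, error handling, and tool-specific flags):

```shell
#!/usr/bin/env bash
# Simplified sketch of the loop that `ctx loop` generates.
MAX_ITERATIONS=10
PROMPT_FILE=".context/loop.md"
COMPLETION="SYSTEM_CONVERGED"

run_agent() {
  # The generated script calls the tool directly; guarded here so the
  # sketch degrades gracefully if the tool is not installed.
  claude -p "$(cat "$PROMPT_FILE" 2>/dev/null)" 2>/dev/null || true
}

for i in $(seq 1 "$MAX_ITERATIONS"); do
  echo "=== iteration $i ==="
  OUTPUT="$(run_agent)"
  printf '%s\n' "$OUTPUT"
  if printf '%s' "$OUTPUT" | grep -q "$COMPLETION"; then
    echo "completion signal detected after $i iteration(s)"
    break
  fi
done
```

Each pass through the loop spawns a fresh AI process, which is exactly why the .context/ files and git history are the only state that survives between iterations.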
A Shell Loop Is the Best Practice
The shell loop approach spawns a fresh AI process each iteration, so the only state that carries between iterations is what lives in .context/ and git.
Claude Code's built-in /loop runs iterations within the same session, which can allow context window state to leak between iterations. This can be convenient for short runs, but it is less reliable for unattended loops.
See Shell Loop vs Built-in Loop for details.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-6-run-with-watch-mode","level":3,"title":"Step 6: Run with Watch Mode","text":"
Open two terminals. In the first, run the loop. In the second, run ctx watch to process context updates from the AI output.
# Terminal 1: Run the loop\n./loop.sh 2>&1 | tee /tmp/loop.log\n\n# Terminal 2: Watch for context updates\nctx watch --log /tmp/loop.log\n
The watch command parses XML context-update commands from the AI output and applies them:
<context-update type=\"complete\">user registration</context-update>\n<context-update type=\"learning\"\n context=\"Setting up user registration\"\n lesson=\"Email verification needs SMTP configured\"\n application=\"Add SMTP setup to deployment checklist\"\n>SMTP Requirement</context-update>\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-7-completion-signals-end-the-loop","level":3,"title":"Step 7: Completion Signals End the Loop","text":"
The generated script checks for one completion signal per run. By default this is SYSTEM_CONVERGED. You can change it with the --completion flag:
ctx loop --tool claude --completion BOOTSTRAP_COMPLETE --max-iterations 5\n
The following signals are conventions used in .context/loop.md:
| Signal | Convention | How the script handles it |
| --- | --- | --- |
| SYSTEM_CONVERGED | All tasks in TASKS.md are done | Detected by default (--completion default value) |
| SYSTEM_BLOCKED | Agent cannot proceed | Only detected if you set --completion to this |
| BOOTSTRAP_COMPLETE | Initial scaffolding done | Only detected if you set --completion to this |
The script uses grep -q on the agent's output, so any string works as a signal. If you need to detect multiple signals in one run, edit the generated loop.sh to add additional grep checks.
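A sketch of such an extra check inside loop.sh (the sample agent output string is invented for illustration):

```shell
# Detect a second signal alongside the default SYSTEM_CONVERGED check.
OUTPUT="Cannot configure SMTP without credentials. SYSTEM_BLOCKED"
if printf '%s' "$OUTPUT" | grep -q "SYSTEM_CONVERGED"; then
  echo "converged: stopping loop"
elif printf '%s' "$OUTPUT" | grep -q "SYSTEM_BLOCKED"; then
  echo "blocked: stopping for human review"
fi
```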
When you return in the morning, check the log and the context files:
tail -100 /tmp/loop.log\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#step-8-use-ctx-implement-for-plan-execution","level":3,"title":"Step 8: Use /ctx-implement for Plan Execution","text":"
Within each iteration, the agent can use /ctx-implement to execute multi-step plans with verification between steps. This is useful for complex tasks that touch multiple files.
The skill breaks a plan into atomic, verifiable steps:
Step 1/6: Create user model .................. OK\nStep 2/6: Add database migration ............. OK\nStep 3/6: Implement registration handler ..... OK\nStep 4/6: Write unit tests ................... OK\nStep 5/6: Run test suite ..................... FAIL\n -> Fixed: missing test dependency\n -> Re-verify ............................... OK\nStep 6/6: Update TASKS.md .................... OK\n
Each step is verified (build, test, syntax check) before moving to the next.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical overnight run:
ctx init\n# Edit TASKS.md and .context/loop.md\n\nctx loop --tool claude --max-iterations 20\n\n./loop.sh 2>&1 | tee /tmp/loop.log &\nctx watch --log /tmp/loop.log\n\n# Next morning:\nctx status\nctx load\n
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#why-autonomous-loops-work-proactive-context-persistence","level":2,"title":"Why Autonomous Loops Work: Proactive Context Persistence","text":"
The autonomous loop pattern works because the agent persists context as part of the job.
Without proactive persistence, the loop degrades into disconnected iterations that repeat work, contradict decisions, and lose track of progress. The agent cannot rely on a human to prompt it. It must treat context updates as part of every task, not as a separate step.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-agent-playbook-self-check-drives-autonomy","level":3,"title":"The Agent Playbook Self-Check Drives Autonomy","text":"
The Agent Playbook defines a self-check prompt the agent applies continuously:
\"If this session ended right now, would the next session know what happened?\"\n
In interactive sessions, a human might remind the agent to save context. When the agent runs unattended, nobody is there to remind it.
The self-check becomes the trigger: after meaningful work, the agent evaluates whether the context files reflect reality and updates them immediately if they do not.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#what-the-agent-does-proactively-between-iterations","level":3,"title":"What the Agent Does Proactively Between Iterations","text":"
At milestones within an iteration, the agent persists without waiting for instructions:
Milestone What the agent does proactively Completes a task Marks it [x] in TASKS.md with #done timestamp Discovers a gotcha Adds it to LEARNINGS.md Makes a design choice Records it in DECISIONS.md with rationale Identifies follow-up work Creates new tasks in TASKS.md with #added timestamp Hits an unexpected failure Documents the root cause before moving on","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#example-what-proactive-persistence-looks-like","level":3,"title":"Example: What Proactive Persistence Looks Like","text":"
Here is what the agent does during a single iteration beyond writing code:
Iteration 4:\n 1. Read TASKS.md -> pick \"Add email verification\" (#priority:high)\n 2. Add #started:2026-01-25-030012 to the task\n 3. Implement the feature (code, tests, docs if needed)\n 4. Tests pass -> mark task [x], add #done:2026-01-25-031544\n 5. Add learning: \"SMTP config must be set before verification handler registers. Order matters in init().\"\n 6. Add decision: \"Use token-based verification links (not codes) because links work better in automated tests.\"\n 7. Create follow-up task: \"Add rate limiting to verification endpoint\" #added:...\n 8. Commit all changes including `.context/`\n 9. No signal emitted -> loop continues to iteration 5\n
Steps 2, 4, 5, 6, and 7 are proactive context persistence:
The agent was not asked to do any of them.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#context-persistence-at-milestones","level":3,"title":"Context Persistence at Milestones","text":"
For long autonomous runs, the agent persists context at natural boundaries, often at phase transitions or after completing a cluster of related tasks. It updates TASKS.md, DECISIONS.md, and LEARNINGS.md as it goes.
If the loop crashes at 4 AM, the context files tell you exactly where to resume. You can also use ctx journal source to review the session transcripts.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#the-persistence-contract","level":3,"title":"The Persistence Contract","text":"
The autonomous loop has an implicit contract:
Every iteration reads context: TASKS.md, DECISIONS.md, LEARNINGS.md
Every iteration writes context: task updates, new learnings, decisions
Every commit includes .context/ so the next iteration sees changes
Context stays current: if the loop stopped right now, nothing important is lost
Break any part of this contract and the loop degrades.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#tips","level":2,"title":"Tips","text":"
Markdown Is Not Enforcement
Your real guardrails are permissions and isolation, not Markdown. CONSTITUTION.md can nudge the agent, but it is probabilistic.
The permission allowlist and OS isolation are deterministic:
For unattended runs, trust the sandbox and the allowlist, not the prose.
Start with a small iteration cap. Use --max-iterations 5 on your first run.
Keep tasks atomic. Each task should be completable in a single iteration.
Check signal discipline. If the loop runs forever, the agent is not emitting SYSTEM_CONVERGED or SYSTEM_BLOCKED. Make the signal requirement explicit in .context/loop.md.
Commit after context updates. Finish code, update .context/, commit including .context/, then signal.
Set up webhook notifications to get notified when the loop completes, hits max iterations, or when hooks fire nudges. The generated loop script includes ctx notify calls automatically.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#next-up","level":2,"title":"Next Up","text":"
When to Use a Team of Agents →: Decision framework for choosing between a single agent, parallel worktrees, and a full agent team.
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/autonomous-loops/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: structuring TASKS.md
","path":["Recipes","Agents and Automation","Running an Unattended AI Agent"],"tags":[]},{"location":"recipes/building-skills/","level":1,"title":"Building Project Skills","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-problem","level":2,"title":"The Problem","text":"
You have workflows your agent needs to repeat across sessions: a deploy checklist, a review protocol, a release process. Each time, you re-explain the steps. The agent gets it mostly right but forgets edge cases you corrected last time.
Skills solve this by encoding domain knowledge into a reusable document the agent loads automatically when triggered. A skill is not code - it is a structured prompt that captures what took you sessions to learn.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tldr","level":2,"title":"TL;DR","text":"
/ctx-skill-creator\n
The skill-creator walks you through: identify a repeating workflow, draft a skill, test with realistic prompts, iterate until it triggers correctly and produces good output.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-skill-creator Skill Interactive skill creation and improvement workflow ctx init Command Deploys template skills to .claude/skills/ on first setup","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-1-identify-a-repeating-pattern","level":3,"title":"Step 1: Identify a Repeating Pattern","text":"
Good skill candidates:
Checklists you repeat: deploy steps, release prep, code review
Decisions the agent gets wrong: if you keep correcting the same behavior, encode the correction
Multi-step workflows: anything with a sequence of commands and conditional branches
Domain knowledge: project-specific terminology, architecture constraints, or conventions the agent cannot infer from code alone
Not good candidates: one-off instructions, things the platform already handles (file editing, git operations), or tasks too narrow to reuse.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-2-create-the-skill","level":3,"title":"Step 2: Create the Skill","text":"
Invoke the skill-creator:
You: \"I want a skill for our deploy process\"\n\nAgent: [Asks about the workflow: what steps, what tools,\n what edge cases, what the output should look like]\n
Or capture a workflow you just did:
You: \"Turn what we just did into a skill\"\n\nAgent: [Extracts the steps from conversation history,\n confirms understanding, drafts the skill]\n
The skill-creator produces a SKILL.md file in .claude/skills/your-skill/.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-3-test-with-realistic-prompts","level":3,"title":"Step 3: Test with Realistic Prompts","text":"
The skill-creator proposes 2-3 test prompts - the kind of thing a real user would say. It runs each one and shows the result alongside a baseline (same prompt without the skill) so you can compare.
Agent: \"Here are test prompts I'd try:\n 1. 'Deploy to staging'\n 2. 'Ship the hotfix'\n 3. 'Run the release checklist'\n Want to adjust these?\"\n
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-4-iterate-on-the-description","level":3,"title":"Step 4: Iterate on the Description","text":"
The description field in frontmatter determines when a skill triggers. Claude tends to undertrigger - descriptions need to be specific and slightly \"pushy\":
# Weak - too vague, will undertrigger\ndescription: \"Use for deployments\"\n\n# Strong - covers situations and synonyms\ndescription: >-\n Use when deploying to staging or production, running the release\n checklist, or when the user says 'ship it', 'deploy this', or\n 'push to prod'. Also use after merging to main when a deploy\n is expected.\n
The skill-creator helps you tune this iteratively.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#step-5-deploy-as-template-optional","level":3,"title":"Step 5: Deploy as Template (Optional)","text":"
If the skill should be available to all projects (not just this one), place it in internal/assets/claude/skills/ so ctx init deploys it to new projects automatically.
Most project-specific skills stay in .claude/skills/ and travel with the repo.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#skill-anatomy","level":2,"title":"Skill Anatomy","text":"
my-skill/\n SKILL.md # Required: frontmatter + instructions (<500 lines)\n scripts/ # Optional: deterministic code the skill can execute\n references/ # Optional: detail loaded on demand (not always)\n assets/ # Optional: output templates, not loaded into context\n
Key sections in SKILL.md:
Section Purpose Required? Frontmatter Name, description (trigger) Yes When to Use Positive triggers Yes When NOT to Use Prevents false activations Yes Process Steps and commands Yes Examples Good/bad output pairs Recommended Quality Checklist Verify before reporting completion For complex skills","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#tips","level":2,"title":"Tips","text":"
Description is everything. A great skill with a vague description never fires. Spend time on trigger coverage - synonyms, concrete situations, edge cases.
Stay under 500 lines. If your skill is growing past this, move detail into references/ files and point to them from SKILL.md.
Do not duplicate the platform. If the agent already knows how to do something (edit files, run git commands), do not restate it. Tag paragraphs as Expert/Activation/Redundant and delete Redundant ones.
Explain why, not just what. \"Sort by date because users want recent results first\" beats \"ALWAYS sort by date.\" The agent generalizes from reasoning better than from rigid rules.
Test negative triggers. Make sure the skill does not fire on unrelated prompts. A skill that activates too broadly becomes noise.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Split work across multiple agents using git worktrees.
","path":["Recipes","Agents and Automation","Building Project Skills"],"tags":[]},{"location":"recipes/building-skills/#see-also","level":2,"title":"See Also","text":"
Skills Reference: full listing of all bundled and project-local skills
Guide Your Agent: how commands, skills, and conversational patterns work together
Design Before Coding: the four-skill chain for front-loading design work
Claude Code's .claude/settings.local.json controls what the agent can do without asking. Over time, this file accumulates one-off permissions from individual sessions: Exact commands with hardcoded paths, duplicate entries, and stale skill references.
A noisy \"allowlist\" makes it harder to spot dangerous permissions and increases the surface area for unintended behavior.
Since settings.local.json is .gitignored, it drifts independently of your codebase. There is no PR review, no CI check: just whatever you clicked \"Allow\" on.
This recipe shows what a well-maintained permission file looks like and how to keep it clean.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Populates default ctx permissions /ctx-drift Detects missing or stale permission entries /ctx-sanitize-permissions Audits for dangerous patterns (security-focused)","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#recommended-defaults","level":2,"title":"Recommended Defaults","text":"
After running ctx init, your settings.local.json will have the ctx defaults pre-populated. Here is an opinionated safe starting point for a Go project using ctx:
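A minimal sketch of what that starting point could contain, assembled only from patterns mentioned in this recipe (the exact defaults written by ctx init may differ):

```json
{
  "permissions": {
    "allow": [
      "Bash(ctx:*)",
      "Bash(make:*)",
      "Bash(go:*)",
      "Bash(git status)",
      "Bash(git log:*)",
      "Bash(git diff:*)",
      "Skill(ctx-*)"
    ],
    "deny": [
      "Bash(sudo:*)",
      "Bash(git push:*)",
      "Bash(rm -rf:*)",
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Read(**/.env*)"
    ]
  }
}
```

Note the asymmetry: wildcards for trusted project tooling (ctx, make, go), granular entries for git.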
The goal is intentional permissions: Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
Use wildcards for trusted binaries: If you trust the binary (your own project's CLI, make, go), a single wildcard like Bash(ctx:*) beats twenty subcommand entries. It reduces noise and means new subcommands work without re-prompting.
Keep git commands granular: Unlike ctx or make, git has both safe commands (git log, git status) and destructive ones (git reset --hard, git clean -f). Listing safe commands individually prevents accidentally pre-approving dangerous ones.
Pre-approve all ctx- skills: Skills shipped with ctx (Skill(ctx-*)) are safe to pre-approve. They are part of your project and you control their content. This prevents the agent from prompting on every skill invocation.
ctx init automatically populates permissions.deny with rules that block dangerous operations. Deny rules are evaluated before allow rules: A denied pattern always prompts the user, even if it also matches an allow entry.
The defaults block:
Pattern Why Bash(sudo *) Cannot enter password; will hang Bash(git push *) Must be explicit user action Bash(rm -rf /*) etc. Recursive delete of system/home directories Bash(curl *) / wget Arbitrary network requests Bash(chmod 777 *) World-writable permissions Read/Edit(**/.env*) Secrets and credentials Read(**/*.pem, *.key) Private keys
Read/Edit Deny Rules
Read() and Edit() deny rules have known upstream enforcement issues (claude-code #6631, #24846).

They are included as defense-in-depth and intent documentation.
Blocked by default deny rules: no action needed, ctx init handles these:
Pattern Risk Bash(git push:*) Must be explicit user action Bash(sudo:*) Privilege escalation Bash(rm -rf:*) Recursive delete with no confirmation Bash(curl:*) / Bash(wget:*) Arbitrary network requests
Requires manual discipline: Never add these to allow:
Pattern Risk Bash(git reset:*) Can discard uncommitted work Bash(git clean:*) Deletes untracked files Skill(ctx-sanitize-permissions) Edits this file: self-modification vector Skill(release) Runs the release pipeline: high impact","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#hooks-regex-safety-net","level":2,"title":"Hooks: Regex Safety Net","text":"
Deny rules handle prefix-based blocking natively. Hooks complement them by catching patterns that require regex matching: Things deny rules can't express.
The ctx plugin ships these blocking hooks:
Hook What it blocks ctx system block-non-path-ctx Running ctx from wrong path
Project-local hooks (not part of the plugin) catch regex edge cases:
Hook What it blocks block-dangerous-commands.sh Mid-command sudo/git push (after &&), copies to bin dirs, absolute-path ctx
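A hedged sketch of the regex check such a project-local hook might perform (the pattern and helper name are illustrative, not the shipped hook's exact rules; a plain prefix-based deny rule cannot see a `sudo` that appears after `&&`, but a regex can):

```shell
# Illustrative check for dangerous commands hidden mid-command.
check_command() {
  if printf '%s' "$1" | grep -Eq '(&&|;|\|)[[:space:]]*(sudo |git push)'; then
    echo "block"
    return 2   # in a Claude Code hook, a blocking exit stops the tool call
  fi
  echo "allow"
}

check_command 'go build && sudo cp ctx /usr/local/bin'   # prints: block
```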
Pre-Approved + Hook-Blocked = Silent Block
If you pre-approve a command that a hook blocks, the user never sees the confirmation dialog. The agent gets a block response and must handle it, which is confusing.
It's better not to pre-approve commands that hooks are designed to intercept.
If manual cleanup is too tedious, use a golden image to automate it:
Snapshot a curated permission set, then restore at session start to automatically drop session-accumulated permissions. See the Permission Snapshots recipe for the full workflow.
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/claude-code-permissions/#adapting-for-other-languages","level":2,"title":"Adapting for Other Languages","text":"
The recommended defaults above are Go-specific. For other stacks, swap the build/test tooling:
","path":["Recipes","Maintenance","Claude Code Permission Hygiene"],"tags":[]},{"location":"recipes/context-health/","level":1,"title":"Detecting and Fixing Drift","text":"","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-problem","level":2,"title":"The Problem","text":"
ctx files drift: you rename a package, delete a module, or finish a sprint, and suddenly ARCHITECTURE.md references paths that no longer exist, TASKS.md is 80 percent completed checkboxes, and CONVENTIONS.md describes patterns you stopped using two months ago.
Stale context is worse than no context:
An AI tool that trusts outdated references will hallucinate confidently.
This recipe shows how to detect drift, fix it, and keep your .context/ directory lean and accurate.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tldr","level":2,"title":"TL;DR","text":"
ctx drift # detect problems\nctx drift --fix # auto-fix the easy ones\nctx sync --dry-run && ctx sync # reconcile after refactors\nctx compact --archive # archive old completed tasks\nctx status # verify\n
Or just ask your agent: \"Is our context clean?\"
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx drift Command Detect stale paths, missing files, violations ctx drift --fix Command Auto-fix simple issues ctx sync Command Reconcile context with codebase structure ctx compact Command Archive completed tasks, clean up empty sections ctx status Command Quick health overview /ctx-drift Skill Structural plus semantic drift detection /ctx-architecture Skill Refresh ARCHITECTURE.md from actual codebase /ctx-status Skill In-session context summary /ctx-prompt-audit Skill Audit prompt quality and token efficiency","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#the-workflow","level":2,"title":"The Workflow","text":"
The best way to maintain context health is conversational: Ask your agent, guide it, and let it detect problems, explain them, and fix them with your approval. CLI commands exist for CI pipelines, scripting, and fine-grained control.
For day-to-day maintenance, talk to your agent.
Your Questions Reinforce the Pattern
Asking \"is our context clean?\" does two things:
It triggers a drift check right now
It reinforces the habit
This is reinforcement, not enforcement.
Do not wait for the agent to be proactive on its own:
Guide your agent, especially in early sessions.
Over time, you will ask less and the agent will start offering more.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-1-ask-your-agent","level":3,"title":"Step 1: Ask Your Agent","text":"
The simplest way to check context health:
Is our context clean?\nAnything stale?\nHow healthy are our context files?\n
Or invoke the skill directly:
/ctx-drift\n
The agent performs two layers of analysis:
Layer 1, structural checks (via ctx drift): Dead paths, missing files, completed task counts, constitution violations. Fast and programmatic.
Layer 2, semantic analysis (agent-driven): Does CONVENTIONS.md describe patterns the code no longer follows? Does DECISIONS.md contain entries whose rationale no longer applies? Are there learnings about bugs that are now fixed? This is where the agent adds value the CLI cannot: It reads both context files and source code and compares them.
The agent reports both layers together, explains each finding in plain language, and offers to fix what it can.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-2-maintenance-at-session-start","level":3,"title":"Step 2: Maintenance at Session Start","text":"
You do not need to ask explicitly.
Using Claude Code
ctx ships with Claude Code hooks that remind the agent at the right time to take initiative.
Checking context health at the session start, offering to persist learnings before you quit, and flagging drift when it matters. The agent stays proactive without you having to prompt it:
Agent: Good morning. I've loaded the context files. A few things\n before we start:\n\n - ARCHITECTURE.md references `pkg/auth/` which is now empty\n - DECISIONS.md hasn't been updated in 40 days\n - There are 18 completed tasks ready for archival\n\n Want me to run a quick maintenance pass, or should we jump\n straight into today's work?\n
☝️ This is what persistent, initiative-driven sessions feel like when context is treated as a system instead of a prompt.
If the agent does not offer this on its own, a gentle nudge is enough:
Anything stale before we start?\nHow's the context looking?\n
This turns maintenance from a scheduled chore into a conversation that happens when it matters.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-3-real-time-detection-during-work","level":3,"title":"Step 3: Real-Time Detection During Work","text":"
Agents can notice drift while working: When a mismatch is directly in the path of their current task. If an agent reads ARCHITECTURE.md to find where to add a handler and internal/handlers/ doesn't exist, it will notice because the stale reference blocks its work:
Agent: ARCHITECTURE.md references `internal/handlers/` but that directory\n doesn't exist. I'll look at the actual source tree to find where\n handlers live now.\n
This happens reliably when the drift intersects the task. What is less reliable is the agent generalizing from one mismatch to \"there might be more stale references; let me run drift detection.\" That leap requires the agent to know /ctx-drift exists and to decide the current task should pause for maintenance.
If you want that behavior, reinforce it:
Good catch. Yes, run /ctx-drift and clean up any other stale references.\n
Over time, agents that have seen this pattern will start offering proactively. But do not expect it from a cold start.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#step-4-archival-and-cleanup","level":3,"title":"Step 4: Archival and Cleanup","text":"
ctx drift detects when TASKS.md has more than 10 completed items and flags it as a staleness warning. Running ctx drift --fix archives completed tasks automatically.
You can also run /ctx-archive to compact on demand.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#knowledge-health-flow","level":3,"title":"Knowledge Health Flow","text":"
Over time, LEARNINGS.md and DECISIONS.md accumulate entries that overlap or partially repeat each other. The check-persistence hook detects when entry counts exceed a configurable threshold and surfaces a nudge:
\"LEARNINGS.md has 25+ entries. Consider running /ctx-consolidate to merge overlapping items.\"
The consolidation workflow:
Review: /ctx-consolidate groups entries by keyword similarity and presents candidate merges for your approval.
Merge: Approved groups are combined into single entries that preserve the key information from each original.
Archive: Originals move to .context/archive/, not deleted: the full history is preserved in git and the archive directory.
Verify: Run ctx drift after consolidation to confirm no cross-references were broken by the merge.
This replaces ad-hoc cleanup with a repeatable, nudge-driven cycle: detect accumulation, review candidates, merge with approval, archive originals.
See also: Knowledge Capture for the recording workflow that feeds into this maintenance cycle.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-doctor-the-superset-check","level":2,"title":"ctx doctor: The Superset Check","text":"
ctx doctor combines drift detection with hook auditing, configuration checks, event logging status, and token size reporting in a single command. If you want one command that covers structural health, hooks, and state:
ctx doctor # everything in one pass\nctx doctor --json # machine-readable for scripting\n
Use /ctx-doctor Too
For agent-driven diagnosis that adds semantic analysis on top of the structural checks, use /ctx-doctor.
See the Troubleshooting recipe for the full workflow.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#cli-reference","level":2,"title":"CLI Reference","text":"
The conversational approach above uses CLI commands under the hood. When you need direct control, use the commands directly.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift","level":3,"title":"ctx drift","text":"
Scan context files for structural problems:
ctx drift\n
Sample output:
Drift Report\n============\n\nWarnings (3):\n ARCHITECTURE.md:14 path \"internal/api/router.go\" does not exist\n ARCHITECTURE.md:28 path \"pkg/auth/\" directory is empty\n CONVENTIONS.md:9 path \"internal/handlers/\" not found\n\nViolations (1):\n TASKS.md 31 completed tasks (recommend archival)\n\nStaleness:\n DECISIONS.md last modified 45 days ago\n LEARNINGS.md last modified 32 days ago\n\nExit code: 1 (warnings found)\n
Level Meaning Action Warning Stale path references, missing files Fix or remove Violation Constitution rule heuristic failures, heavy clutter Fix soon Staleness Files not updated recently Review content
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-drift-fix","level":3,"title":"ctx drift --fix","text":"
Auto-fix mechanical issues:
ctx drift --fix\n
This handles removing dead path references, updating unambiguous renames, clearing empty sections. Issues requiring judgment are flagged but left for you.
Run ctx drift again afterward to confirm what remains.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-sync","level":3,"title":"ctx sync","text":"
After a refactor, reconcile context with the actual codebase structure:
ctx sync scans for structural changes, compares with ARCHITECTURE.md, checks for new dependencies worth documenting, and identifies context referring to code that no longer exists.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-compact","level":3,"title":"ctx compact","text":"
Consolidate completed tasks and clean up empty sections:
ctx compact # move completed tasks to Completed section,\n # remove empty sections\nctx compact --archive # also archive old tasks to .context/archive/\n
Tasks: moves completed items (with all subtasks done) into the Completed section of TASKS.md
All files: removes empty sections left behind
With --archive: writes tasks older than 7 days to .context/archive/tasks-YYYY-MM-DD.md
Without --archive, nothing is deleted: Tasks are reorganized in place.
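For example, today's archive filename can be derived with date (the tasks-YYYY-MM-DD.md pattern is from this recipe; the rest is illustrative):

```shell
# --archive writes to .context/archive/tasks-YYYY-MM-DD.md;
# this computes today's filename the same way.
archive_name="tasks-$(date +%Y-%m-%d).md"
echo ".context/archive/$archive_name"
```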
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-status","level":3,"title":"ctx status","text":"
Quick health overview:
ctx status --verbose\n
Shows file counts, token estimates, modification times, and drift warnings in a single glance.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#ctx-prompt-audit","level":3,"title":"/ctx-prompt-audit","text":"
Checks whether your context files are readable, compact, and token-efficient for the model.
/ctx-prompt-audit\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Conversational approach (recommended):
Is our context clean? -> agent runs structural plus semantic checks\nFix what you can -> agent auto-fixes and proposes edits\nArchive the done tasks -> agent runs ctx compact --archive\nHow's token usage? -> agent checks ctx status\n
CLI approach (for CI, scripts, or direct control):
ctx drift # 1. Detect problems\nctx drift --fix # 2. Auto-fix the easy ones\nctx sync --dry-run && ctx sync # 3. Reconcile after refactors\nctx compact --archive # 4. Archive old completed tasks\nctx status # 5. Verify\n
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#tips","level":2,"title":"Tips","text":"
Agents cross-reference context files with source code during normal work. When drift intersects their current task, they will notice: a renamed package, a deleted directory, a path that doesn't resolve. But they rarely generalize from one mismatch to a full audit on their own. Reinforce the pattern: when an agent mentions a stale reference, ask it to run /ctx-drift. Over time, it starts offering.
When an agent says \"this reference looks stale,\" it is usually right.
Semantic drift is more damaging than structural drift: ctx drift catches dead paths. But CONVENTIONS.md describing a pattern your code stopped following three weeks ago is worse. When you ask \"is our context clean?\", the agent can do both checks.
Use ctx status as a quick check: It shows file counts, token estimates, and drift warnings in a single glance. Good for a fast \"is everything ok?\" before diving into work.
Drift detection in CI: add ctx drift --json to your CI pipeline and fail on exit code 3 (violations). This catches constitution-level problems before they reach upstream.
Do not over-compact: Completed tasks have historical value. The --archive flag preserves them in .context/archive/ so you can search past work without cluttering active context.
Sync is cautious by default: Use --dry-run after large refactors, then apply.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#next-up","level":2,"title":"Next Up","text":"
Claude Code Permission Hygiene →: Recommended permission defaults and maintenance workflow for Claude Code.
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/context-health/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Tracking Work Across Sessions: task lifecycle and archival
Persisting Decisions, Learnings, and Conventions: keeping knowledge files current
The Complete Session: where maintenance fits in the daily workflow
CLI Reference: full flag documentation for all commands
Context Files: structure and purpose of each .context/ file
","path":["Recipes","Maintenance","Detecting and Fixing Drift"],"tags":[]},{"location":"recipes/customizing-hook-messages/","level":1,"title":"Customizing Hook Messages","text":"","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-problem","level":2,"title":"The Problem","text":"
ctx hooks speak ctx's language, not your project's. The QA gate says \"lint the ENTIRE project\" and \"make build,\" but your Python project uses pytest and ruff. The post-commit nudge suggests running lints, but your JavaScript project uses npm test. You could remove the hook entirely, but then you lose the logic (counting, state tracking, adaptive frequency) just to change the words.
How do you customize what hooks say without removing what they do?
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tldr","level":2,"title":"TL;DR","text":"
ctx system message list # see all hooks and their messages\nctx system message show qa-reminder gate # view the current template\nctx system message edit qa-reminder gate # copy default to .context/ for editing\nctx system message reset qa-reminder gate # revert to embedded default\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#commands-used","level":2,"title":"Commands Used","text":"Tool Type Purpose ctx system message list CLI command Show all hook messages with category and override status ctx system message show CLI command Print the effective message template ctx system message edit CLI command Copy embedded default to .context/ for editing ctx system message reset CLI command Delete user override, revert to default","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#how-it-works","level":2,"title":"How It Works","text":"
Hook messages use a 3-tier fallback:
User override: .context/hooks/messages/{hook}/{variant}.txt
Embedded default: compiled into the ctx binary
Hardcoded fallback: belt-and-suspenders safety net
The hook logic (when to fire, counting, state tracking, cooldowns) is unchanged. Only the content (what text gets emitted) comes from the template. You customize what the hook says without touching how it decides to speak.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#finding-the-original-templates","level":3,"title":"Finding the Original Templates","text":"
The default templates live in the ctx source tree at internal/assets/hooks/messages/.
You can also browse them on GitHub: internal/assets/hooks/messages/
Or use ctx system message show to print any template without digging through source code:
ctx system message show qa-reminder gate # QA gate instructions\nctx system message show check-persistence nudge # persistence nudge\nctx system message show post-commit nudge # post-commit reminder\n
The show output includes the template source and available variables -- everything you need to write a replacement.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#template-variables","level":3,"title":"Template Variables","text":"
Some messages use Go text/template variables for dynamic content:
No context files updated in {{.PromptsSinceNudge}}+ prompts.\nHave you discovered learnings, made decisions,\nestablished conventions, or completed tasks\nworth persisting?\n
The show and edit commands list available variables for each message. When writing a replacement, keep the same {{.VariableName}} placeholders to preserve dynamic content. Variables that you omit render as <no value>: no error, but the output may look odd.
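The `<no value>` behavior is standard Go text/template semantics and can be verified in isolation. A minimal sketch, not ctx code: the renderNudge helper and the SessionName placeholder are illustrative, and the data is a map, which is what the documented no-error behavior implies.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNudge parses and executes a message template against hook data,
// the way any text/template consumer would. Helper name is illustrative.
func renderNudge(tmpl string, data map[string]any) (string, error) {
	t, err := template.New("msg").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	data := map[string]any{"PromptsSinceNudge": 7}

	out, _ := renderNudge("No context files updated in {{.PromptsSinceNudge}}+ prompts.", data)
	fmt.Println(out) // No context files updated in 7+ prompts.

	// A placeholder the hook does not supply degrades to <no value>
	// instead of aborting the render.
	out, _ = renderNudge("Session: {{.SessionName}}", data)
	fmt.Println(out) // Session: <no value>
}
```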
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#intentional-silence","level":3,"title":"Intentional Silence","text":"
An empty template file (0 bytes or whitespace-only) means \"don't emit a message\". The hook still runs its logic but produces no output. This lets you silence specific messages without removing the hook from hooks.json.
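Taken together, the fallback chain and the silence rule amount to logic like the following. This is an illustrative sketch under assumed names (effectiveMessage and its parameters are not ctx's API), showing that a present-but-blank override suppresses output rather than falling through to the default:

```go
package main

import (
	"fmt"
	"strings"
)

// effectiveMessage resolves a hook message through the documented 3-tier
// fallback. A blank user override means intentional silence: the hook
// still runs, but emits nothing. (Sketch only, names are assumptions.)
func effectiveMessage(override *string, embedded, hardcoded string) (string, bool) {
	if override != nil {
		if strings.TrimSpace(*override) == "" {
			return "", false // empty override file: emit nothing
		}
		return *override, true // tier 1: user override
	}
	if embedded != "" {
		return embedded, true // tier 2: embedded default
	}
	return hardcoded, true // tier 3: hardcoded fallback
}

func main() {
	blank := "   \n"
	if _, emit := effectiveMessage(&blank, "default text", "fallback"); !emit {
		fmt.Println("hook ran, but stayed silent")
	}
}
```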
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-python-project-qa-gate","level":2,"title":"Example: Python Project QA Gate","text":"
The default QA gate says \"lint the ENTIRE project\" and references make lint. For a Python project, you want pytest and ruff:
# See the current default\nctx system message show qa-reminder gate\n\n# Copy it to .context/ for editing\nctx system message edit qa-reminder gate\n\n# Edit the override\n
Replace the content in .context/hooks/messages/qa-reminder/gate.txt:
HARD GATE! DO NOT COMMIT without completing ALL of these steps first:\n(1) Run the full test suite: pytest -x\n(2) Run the linter: ruff check .\n(3) Verify a clean working tree\nRun tests and linter BEFORE every git commit, no exceptions.\n
The hook still fires on every Edit call. The logic is identical. Only the instructions changed.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-silencing-ceremony-nudges","level":2,"title":"Example: Silencing Ceremony Nudges","text":"
The ceremony check nudges you to use /ctx-remember and /ctx-wrap-up. If your team has a different workflow and finds these noisy, copy each message to an override and empty the file:
ctx system message edit check-ceremonies both\nctx system message edit check-ceremonies remember\nctx system message edit check-ceremonies wrapup\n
The hooks still track ceremony usage internally, but they no longer emit any visible output.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#example-javascript-project-post-commit","level":2,"title":"Example: JavaScript Project Post-Commit","text":"
The default post-commit nudge mentions generic \"lints and tests.\" For a JavaScript project:
ctx system message edit post-commit nudge\n
Replace with:
Commit succeeded. 1. Offer context capture to the user: Decision (design\nchoice?), Learning (gotcha?), or Neither. 2. Ask the user: \"Want me to\nrun npm test and eslint before you push?\" Do NOT push. The user pushes\nmanually.\n
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#the-two-categories","level":2,"title":"The Two Categories","text":"
Not all messages are equal. The list command shows each message's category:
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#customizable-17-messages","level":3,"title":"Customizable (17 messages)","text":"
Messages that are opinions: project-specific wording that benefits from customization. These are the primary targets for override.
Templates that reference undefined variables render <no value>: no error, graceful degradation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#tips","level":2,"title":"Tips","text":"
Override files are version-controlled: they live in .context/ alongside your other context files. Team members get the same customized messages.
Start with show: always check the current default before editing. The embedded template is the baseline your override replaces.
Use reset to undo: if a customization causes confusion, reset reverts to the embedded default instantly.
Empty file = silence: you don't need to delete the hook. An empty override file silences the message while preserving the hook's logic.
JSON output for scripting: ctx system message list --json returns structured data for automation.
","path":["Recipes","Hooks and Notifications","Customizing Hook Messages"],"tags":[]},{"location":"recipes/customizing-hook-messages/#see-also","level":2,"title":"See Also","text":"
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: verifying hooks are running and auditing their output
Understanding how packages relate to each other is the first step in onboarding, refactoring, and architecture review. ctx dep generates dependency graphs from source code so you can see the structure at a glance instead of tracing imports by hand.
# Auto-detect ecosystem and output Mermaid (default)\nctx dep\n\n# Table format for a quick terminal overview\nctx dep --format table\n\n# JSON for programmatic consumption\nctx dep --format json\n
By default, only internal (first-party) dependencies are shown. Add --external to include third-party packages:
ctx dep --external\nctx dep --external --format table\n
This is useful when auditing transitive dependencies or checking which packages pull in heavy external libraries.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#when-to-use-it","level":2,"title":"When to Use It","text":"
Onboarding. Generate a Mermaid graph and drop it into the project wiki. New contributors see the architecture before reading code.
Refactoring. Before moving packages, check what depends on them. Combine with ctx drift to find stale references after the move.
Architecture review. Table format gives a quick overview; Mermaid format goes into design docs and PRs.
CI checks. Run ctx dep in CI to detect unexpected new dependencies between packages.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#combining-with-other-commands","level":2,"title":"Combining with Other Commands","text":"","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#refactoring-with-ctx-drift","level":3,"title":"Refactoring with ctx drift","text":"
# See the dependency structure before refactoring\nctx dep --format table\n\n# After moving packages, check for broken references\nctx drift\n
Use JSON output as input for context files or architecture documentation:
# Generate a dependency snapshot for the context directory\nctx dep --format json > .context/deps.json\n\n# Or pipe into other tools\nctx dep --format mermaid >> docs/architecture.md\n
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/dependency-graph/#monorepos-and-multi-ecosystem-projects","level":2,"title":"Monorepos and Multi-Ecosystem Projects","text":"
In a monorepo with multiple ecosystems, ctx dep picks the first manifest it finds (Go beats Node.js beats Python beats Rust). Use --type to target a specific ecosystem:
# In a repo with both go.mod and package.json\nctx dep --type node\nctx dep --type go\n
For separate subdirectories, run from each root:
cd services/api && ctx dep --format table\ncd frontend && ctx dep --type node --format mermaid\n
Start with table format. It is the fastest way to get a mental model of the dependency structure. Switch to Mermaid when you need a visual for documentation or a PR.
Pipe JSON to jq. Filter for specific packages, count edges, or extract subgraphs programmatically.
Skip --external unless you need it. Internal-only graphs are cleaner and load faster. Add external deps when you are specifically auditing third-party usage.
Force --type in CI. Auto-detection is convenient locally, but explicit types prevent surprises when the repo structure changes.
","path":["Generating Dependency Graphs"],"tags":[]},{"location":"recipes/design-before-coding/","level":1,"title":"Design Before Coding","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-problem","level":2,"title":"The Problem","text":"
You start coding a feature. Halfway through, you realize the approach doesn't handle a key edge case. You refactor. Then you discover the CLI interface doesn't fit the existing patterns. More refactoring.
The design work happened during implementation, mixed in with debugging and trial-and-error. The result works, but the spec was never written down, the trade-offs were never recorded, and the next session has no idea why things are shaped this way.
How do you front-load design so the implementation is straightforward?
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tldr","level":2,"title":"TL;DR","text":"
/ctx-brainstorm # explore the design space\n/ctx-spec # write the spec document\n/ctx-add-task # break it into tasks\n/ctx-implement # execute step-by-step\n
Four skills, used in sequence. Each produces an artifact that feeds the next.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-brainstorm Skill Structured design dialogue: explore approaches /ctx-spec Skill Scaffold and fill out a spec from the template /ctx-add-task Skill Add implementation tasks to TASKS.md /ctx-implement Skill Execute a plan step-by-step with verification /ctx-add-decision Skill Record design choices made during brainstorming","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-1-brainstorm-the-design","level":3,"title":"Step 1: Brainstorm the Design","text":"
Start with /ctx-brainstorm when you have a rough idea but haven't committed to an approach. The skill follows a gated process:
Understand context: what exists today, what's the gap
Clarify the idea: one question at a time, not a wall of questions
Lock understanding: confirm before exploring solutions
Explore 2-3 approaches: with trade-offs for each
Stress-test: poke holes in the chosen approach
Present the design: detailed enough to implement from
You: \"I want to add RSS feed generation to the blog.\"\n\nAgent: \"Before we design, let me understand: is this for the journal\n site, the docs site, or both?\"\n\nYou: \"Journal site only.\"\n\nAgent: [asks about update frequency, content scope, existing infra...]\n [presents 3 approaches with trade-offs]\n\nYou: \"Option 2 - generate at build time.\"\n\nAgent: \"Let me stress-test that. What happens when...\"\n [probes edge cases]\n \"Design looks solid. Want me to record the approach as a\n decision and write a spec?\"\n
The brainstorm produces a validated design in conversation. It doesn't produce a file - that's what the spec is for.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-2-write-the-spec","level":3,"title":"Step 2: Write the Spec","text":"
Once the design is clear, use /ctx-spec to produce the written artifact. The skill reads the project's spec template (specs/tpl/spec-template.md) and walks through each section:
You: /ctx-spec (rss feed generation)\n\nAgent: \"Creating specs/rss-feed.md from template.\"\n\n \"Problem: What user-visible problem does this solve? Why now?\"\n\nYou: \"Users can't subscribe to blog updates. They have to check\n the site manually.\"\n\nAgent: [writes the Problem section, moves to Approach]\n\n \"Approach: High-level, how does this work?\"\n\nYou: \"Generate feed.xml during ctx journal site. Atom format.\"\n\nAgent: [writes Approach, moves to Happy Path, then Edge Cases...]\n
The skill spends extra time on Edge Cases and Error Handling: These are where specs earn their value. It pushes for at least three edge cases and won't accept \"none\" without a challenge.
Sections that don't apply can be skipped. The result is a complete spec at specs/{feature-name}.md.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-3-break-into-tasks","level":3,"title":"Step 3: Break Into Tasks","text":"
After the spec is written, the skill offers to create tasks:
Agent: \"Spec written to specs/rss-feed.md. Want me to break this\n into tasks in TASKS.md?\"\n\nYou: \"Yes.\"\n\nAgent: [creates a Phase section referencing the spec, adds tasks]\n \"Added Phase 0.8: RSS/Atom Feed Generation with 4 tasks:\n - [ ] Implement feed.xml generation in ctx journal site\n - [ ] Add --feed flag and .ctxrc feed config\n - [ ] Write tests for feed generation\n - [ ] Add recipe and update docs\"\n
Each task is independently completable and references the spec via Spec: specs/rss-feed.md in the Phase header.
You can also invoke /ctx-add-task directly at any point to add individual tasks.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#step-4-implement-step-by-step","level":3,"title":"Step 4: Implement Step-by-Step","text":"
Pick up tasks with /ctx-implement. The skill loads the spec, breaks work into atomic steps, and checkpoints after every 3-5 steps with a build and test verification:
You: /ctx-implement (specs/rss-feed.md)\n\nAgent: \"Loaded spec. Breaking into steps:\n 1. Add FeedEntry type to journal package\n 2. Implement feed.xml template\n 3. Wire into site generation pipeline\n [builds and tests after step 3]\n 4. Add --feed flag to ctx journal site\n 5. Add .ctxrc feed configuration\n [builds and tests after step 5]\n ...\"\n
If a build or test fails, the agent stops, diagnoses, and fixes before continuing.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#when-to-skip-steps","level":2,"title":"When to Skip Steps","text":"
Not every feature needs all four steps. Use your judgment:
Situation Start at Vague idea, multiple valid approaches Step 1: Brainstorm Clear approach, need to document it Step 2: Spec Spec already exists, need to plan work Step 3: Tasks Tasks exist, ready to code Step 4: Implement
A brainstorm without a spec is fine for small decisions. A spec without a brainstorm is fine when the design is obvious. The full chain is for features complex enough to warrant front-loaded design.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need skill names. Natural language works:
You say What happens \"Let's think through this feature\" /ctx-brainstorm \"Spec this out\" /ctx-spec \"Write a design doc for...\" /ctx-spec \"Break this into tasks\" /ctx-add-task \"Implement the spec\" /ctx-implement \"Let's design before we build\" Starts at brainstorm","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#tips","level":2,"title":"Tips","text":"
Brainstorm first when uncertain. If you can articulate the approach in two sentences, skip to spec. If you can't, brainstorm.
Specs prevent scope creep. The Non-Goals section is as important as the approach. Writing down what you won't do keeps implementation focused.
Edge cases are the point. A spec that only describes the happy path isn't a spec - it's a wish. The /ctx-spec skill pushes for at least 3 edge cases because that's where designs break.
Record decisions during brainstorming. When you choose between approaches, the agent offers to persist the trade-off via /ctx-add-decision. Accept - future sessions need to know why, not just what.
Specs are living documents. Update them when implementation reveals new constraints. A spec that diverges from reality is worse than no spec.
The spec template is customizable. Edit specs/tpl/spec-template.md to match your project's needs. The /ctx-spec skill reads whatever template it finds there.
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/design-before-coding/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-spec: spec scaffolding from template
Skills Reference: /ctx-implement: step-by-step execution with verification
Tracking Work Across Sessions: task lifecycle and archival
Importing Claude Code Plans: turning ephemeral plans into permanent specs
Persisting Decisions, Learnings, and Conventions: capturing design trade-offs
","path":["Recipes","Knowledge and Tasks","Design Before Coding"],"tags":[]},{"location":"recipes/external-context/","level":1,"title":"Keeping Context in a Separate Repo","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-problem","level":2,"title":"The Problem","text":"
ctx files contain project-specific decisions, learnings, conventions, and tasks. By default, they live in .context/ inside the project tree, and that works well when the context can be public.
But sometimes you need the context outside the project:
Open-source projects with private context: Your architectural notes, internal task lists, and scratchpad entries shouldn't ship with the public repo.
Compliance or IP concerns: Context files reference sensitive design rationale that belongs in a separate access-controlled repository.
Personal preference: You want a single context repo that covers multiple projects, or you just prefer keeping notes separate from code.
ctx supports this through three configuration methods. This recipe shows how to set them up and how to tell your AI assistant where to find the context.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tldr","level":2,"title":"TL;DR","text":"
Once configured, all ctx commands use the external directory automatically.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context directory --context-dir Global flag Point ctx at a non-default directory --allow-outside-cwd Global flag Permit context outside the project root .ctxrc Config file Persist the context directory setting CTX_DIR Env variable Override context directory per-session /ctx-status Skill Verify context is loading correctly","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-1-create-the-private-context-repo","level":3,"title":"Step 1: Create the Private Context Repo","text":"
Create a separate repository for your context files. This can live anywhere: a private GitHub repo, a shared drive, a sibling directory:
# Create the context repo\nmkdir ~/repos/myproject-context\ncd ~/repos/myproject-context\ngit init\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-2-initialize-ctx-pointing-at-it","level":3,"title":"Step 2: Initialize ctx Pointing at It","text":"
From your project root, initialize ctx with --context-dir pointing to the external location. Because the directory is outside your project tree, you also need --allow-outside-cwd:
cd ~/repos/myproject\nctx --context-dir ~/repos/myproject-context \\\n --allow-outside-cwd \\\n init\n
This creates the full .context/-style file set inside ~/repos/myproject-context/ instead of ~/repos/myproject/.context/.
Boundary Validation
ctx validates that the .context directory is within the current working directory.
If your external directory is truly outside the project root:
Either every ctx command needs --allow-outside-cwd,
or you can persist the setting in .ctxrc (next step).
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-3-make-it-stick","level":3,"title":"Step 3: Make It Stick","text":"
Typing --context-dir and --allow-outside-cwd on every command is tedious. Pick one of these methods to make the configuration permanent.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-a-ctxrc-recommended","level":4,"title":"Option A: .ctxrc (Recommended)","text":"
Create a .ctxrc file in your project root:
# .ctxrc: committed to the project repo\ncontext_dir: ~/repos/myproject-context\nallow_outside_cwd: true\n
ctx reads .ctxrc automatically. Every command now uses the external directory without extra flags:
ctx status # reads from ~/repos/myproject-context\nctx add learning \"Redis MULTI doesn't roll back on error\"\n
Commit .ctxrc
.ctxrc belongs in the project repo. It contains no secrets: It's just a path and a boundary override.
.ctxrc lets teammates share the same configuration.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-b-ctx_dir-environment-variable","level":4,"title":"Option B: CTX_DIR Environment Variable","text":"
Good for CI pipelines, temporary overrides, or when you don't want to commit a .ctxrc:
# In your shell profile (~/.bashrc, ~/.zshrc)\nexport CTX_DIR=~/repos/myproject-context\n
Or for a single session:
CTX_DIR=~/repos/myproject-context ctx status\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#option-c-shell-alias","level":4,"title":"Option C: Shell Alias","text":"
If you prefer a shell alias over .ctxrc:
# ~/.bashrc or ~/.zshrc\nalias ctx='ctx --context-dir ~/repos/myproject-context --allow-outside-cwd'\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#priority-order","level":4,"title":"Priority Order","text":"
When multiple methods are set, ctx resolves the context directory in this order (highest priority first):
--context-dir flag
CTX_DIR environment variable
context_dir in .ctxrc
Default: .context/
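As a sketch, the resolution reads like a simple first-match chain. The resolveContextDir function and its parameter names are illustrative, not ctx's API; only the precedence order comes from the documentation above:

```go
package main

import "fmt"

// resolveContextDir applies the documented precedence: flag, then CTX_DIR,
// then .ctxrc, then the built-in default.
func resolveContextDir(flagVal, envVal, rcVal string) string {
	switch {
	case flagVal != "":
		return flagVal // 1. --context-dir flag
	case envVal != "":
		return envVal // 2. CTX_DIR environment variable
	case rcVal != "":
		return rcVal // 3. context_dir in .ctxrc
	default:
		return ".context" // 4. default location
	}
}

func main() {
	// .ctxrc is set, but CTX_DIR outranks it.
	fmt.Println(resolveContextDir("", "/ci/context", "~/repos/myproject-context"))
	// → /ci/context
}
```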
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-4-agent-auto-discovery-via-bootstrap","level":3,"title":"Step 4: Agent Auto-Discovery via Bootstrap","text":"
When context lives outside the project tree, your AI assistant needs to know where to find it. The ctx system bootstrap command resolves the configured context directory and communicates it to the agent automatically:
$ ctx system bootstrap\nctx bootstrap\n=============\n\ncontext_dir: /home/user/repos/myproject-context\n\nFiles:\n CONSTITUTION.md, TASKS.md, DECISIONS.md, ...\n
The CLAUDE.md template generated by ctx init already instructs the agent to run ctx system bootstrap at session start. Because .ctxrc is in the project root, the agent picks up the external path automatically through that ctx system bootstrap call.
Here is the relevant section from CLAUDE.md for reference:
<!-- CLAUDE.md -->\n1. **Run `ctx system bootstrap`**: CRITICAL, not optional.\n This tells you where the context directory is. If it fails or returns\n no context_dir, STOP and warn the user.\n
Moreover, every nudge (context checkpoint, persistence reminder, etc.) also includes a Context: /home/user/repos/myproject-context footer, so the agent remains anchored to the correct directory even in long sessions.
If you use CTX_DIR instead of .ctxrc, export it in your shell profile so the hook process inherits it:
export CTX_DIR=~/repos/myproject-context\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-5-share-with-teammates","level":3,"title":"Step 5: Share with Teammates","text":"
Teammates clone both repos and set up .ctxrc:
# Clone the project\ngit clone git@github.com:org/myproject.git\ncd myproject\n\n# Clone the private context repo\ngit clone git@github.com:org/myproject-context.git ~/repos/myproject-context\n
If .ctxrc is already committed to the project, they're done: ctx commands will find the external context automatically.
If teammates use different paths, each developer sets their own CTX_DIR:
export CTX_DIR=~/my-own-path/myproject-context\n
For encryption key distribution across the team, see the Syncing Scratchpad Notes recipe.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#step-6-day-to-day-sync","level":3,"title":"Step 6: Day-to-Day Sync","text":"
The external context repo has its own git history. Treat it like any other repo: Commit and push after sessions:
cd ~/repos/myproject-context\n\n# After a session\ngit add -A\ngit commit -m \"Session: refactored auth module, added rate-limit learning\"\ngit push\n
Your AI assistant can do this too. When ending a session:
You: \"Save what we learned and push the context repo.\"\n\nAgent: [runs ctx add learning, then commits and pushes the context repo]\n
You can also set up a post-session habit: project code gets committed to the project repo, context gets committed to the context repo.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the flags; simply ask your assistant:
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#set-up-your-system-using-natural-language","level":3,"title":"Set Up Your System Using Natural Language","text":"
You: \"Set up ctx to use ~/repos/myproject-context as the context directory.\"\n\nAgent: \"I'll create a .ctxrc in the project root pointing to that path.\n I'll also update CLAUDE.md so future sessions know where to find\n context. Want me to initialize the context files there too?\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#configure-separate-repo-for-context-folder-using-natural-language","level":3,"title":"Configure Separate Repo for .context Folder Using Natural Language","text":"
You: \"My context is in a separate repo. Can you load it?\"\n\nAgent: [reads .ctxrc, finds the path, loads context from the external dir]\n \"Loaded. You have 3 pending tasks, last session was about the auth\n refactor.\"\n
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#tips","level":2,"title":"Tips","text":"
Start simple. If you don't need external context yet, don't set it up. The default .context/ in-tree is the easiest path. Move to an external repo when you have a concrete reason.
One context repo per project. Sharing a single context directory across multiple projects creates confusion. Keep the mapping 1:1.
Use .ctxrc over env vars when the path is stable. It's committed, documented, and works for the whole team without per-developer shell setup.
Don't forget the boundary flag. The most common error is Error: context directory is outside the project root. Set allow_outside_cwd: true in .ctxrc or pass --allow-outside-cwd.
Commit both repos at session boundaries. Context without code history (or code without context history) loses half the value.
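Pulling these tips together, a committed .ctxrc for the external-repo setup might look like the sketch below. The allow_outside_cwd key comes from the error fix above; the context-directory key name is an assumption, so check the CLI Reference for the exact spelling:

```yaml
# .ctxrc (sketch): committed to the project repo, shared by the team
context_dir: ~/repos/myproject-context   # assumed key name
allow_outside_cwd: true                  # required when the path leaves the project root
```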
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#next-up","level":2,"title":"Next Up","text":"
The Complete Session →: Walk through a full ctx session from start to finish.
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/external-context/#see-also","level":2,"title":"See Also","text":"
Setting Up ctx Across AI Tools: initial setup recipe
Syncing Scratchpad Notes Across Machines: distribute encryption keys when context is shared
CLI Reference: all global flags including --context-dir and --allow-outside-cwd
","path":["Recipes","Getting Started","Keeping Context in a Separate Repo"],"tags":[]},{"location":"recipes/guide-your-agent/","level":1,"title":"Guide Your Agent","text":"
Commands vs. Skills
Commands (ctx status, ctx add task) run in your terminal.
Skills (/ctx-reflect, /ctx-next) run inside your AI coding assistant.
Recipes combine both.
Think of commands as structure and skills as behavior.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#proactive-behavior","level":2,"title":"Proactive Behavior","text":"
These recipes show explicit commands and skills, but agents trained on the ctx playbook are proactive: They offer to save learnings after debugging, record decisions after trade-offs, create follow-up tasks after completing work, and suggest what to work on next.
Your questions train the agent. Asking \"what have we learned?\" or \"is our context clean?\" does two things:
It triggers the workflow right now,
and it reinforces the pattern.
The more you guide, the more the agent internalizes the pattern and begins offering these behaviors on its own.
Each recipe includes a Conversational Approach section showing these natural-language patterns.
Tip
Don't wait passively for proactive behavior: especially in early sessions.
Ask, guide, reinforce. Over time, you ask less and the agent offers more.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#next-up","level":2,"title":"Next Up","text":"
Setup Across AI Tools →: Initialize ctx and configure hooks for Claude Code, Cursor, Aider, Copilot, or Windsurf.
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/guide-your-agent/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle from start to finish
Prompting Guide: general tips for working effectively with AI coding assistants
","path":["Recipes","Getting Started","Guide Your Agent"],"tags":[]},{"location":"recipes/hook-output-patterns/","level":1,"title":"Hook Output Patterns","text":"","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-problem","level":2,"title":"The Problem","text":"
Claude Code hooks can output text, JSON, or nothing at all. But the format of that output determines who sees it and who acts on it.
Choose the wrong pattern, and your carefully crafted warning gets silently absorbed by the agent, or your agent-directed nudge gets dumped on the user as noise.
This recipe catalogs the known hook output patterns and explains when to use each one.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#tldr","level":2,"title":"TL;DR","text":"
Eight patterns span the range from full control to full invisibility. The anchor points:
hard gate (exit 2),
VERBATIM relay (agent MUST show),
agent directive (context injection),
and silent side-effect (background work).
Most hooks belong in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#the-spectrum","level":2,"title":"The Spectrum","text":"
These patterns form a spectrum based on who decides what the user sees:
The spectrum runs from full hook control (hard gate) to full invisibility (silent side effect).
Most hooks belong somewhere in the middle.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-1-hard-gate","level":2,"title":"Pattern 1: Hard Gate","text":"
Block the tool call entirely. The agent cannot proceed: it must find another approach or tell the user.
echo '{\"decision\": \"block\", \"reason\": \"Use ctx from PATH, not ./ctx\"}'\n
When to use: Enforcing invariants that must never be violated: Constitution rules, security boundaries, destructive command prevention.
Hook type: PreToolUse only (Claude Code first-class mechanism).
Examples in ctx:
ctx system block-non-path-ctx: Enforces the PATH invocation rule
block-git-push.sh: Requires explicit user approval for pushes (project-local)
block-dangerous-commands.sh: Prevents sudo, copies to ~/.local/bin (project-local)
Trade-off: The agent gets a block response with a reason. Good reasons help the agent recover (\"use X instead\"); bad reasons leave it stuck.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-2-verbatim-relay","level":2,"title":"Pattern 2: VERBATIM Relay","text":"
Force the agent to show this to the user as-is. The explicit instruction overcomes the agent's tendency to silently absorb context.
echo \"IMPORTANT: Relay this warning to the user VERBATIM before answering their question.\"\necho \"\"\necho \"┌─ Journal Reminder ─────────────────────────────\"\necho \"│ You have 12 sessions not yet exported.\"\necho \"└────────────────────────────────────────────────\"\n
When to use: Actionable reminders the user needs to see regardless of what they asked: Stale backups, unimported sessions, resource warnings.
Hook type: UserPromptSubmit (runs before the agent sees the prompt).
Examples in ctx:
ctx system check-journal: Unexported sessions and unenriched entries
ctx system check-context-size: Context capacity warning
ctx system check-resources: Resource pressure (memory, swap, disk, load): DANGER only
ctx system check-freshness: Technology constant staleness warning
check-backup-age.sh: Stale backup warning (project-local)
Trade-off: Noisy if overused. Every VERBATIM relay adds a preamble before the agent's actual answer. Throttle with once-per-day markers or adaptive frequency.
Key detail: The phrase IMPORTANT: Relay this ... VERBATIM is what makes this work. Without it, agents tend to process the information internally and never surface it. The explicit instruction is the pattern: the box-drawing is just fancy formatting.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-3-agent-directive","level":2,"title":"Pattern 3: Agent Directive","text":"
Tell the agent to do something, not the user. The agent decides whether and how to involve the user.
echo \"┌─ Persistence Checkpoint (prompt #25) ───────────\"\necho \"│ No context files updated in 15+ prompts.\"\necho \"│ Have you discovered learnings, decisions,\"\necho \"│ or completed tasks worth persisting?\"\necho \"└──────────────────────────────────────────────────\"\n
When to use: Behavioral nudges. The hook detects a condition and asks the agent to consider an action. The user may never need to know.
Hook type: UserPromptSubmit.
Examples in ctx:
ctx system check-persistence: Nudges the agent to persist context
Trade-off: No guarantee the agent acts. The nudge is one signal among many in the context window. Strong phrasing helps (\"Have you...?\" is better than \"Consider...\"), but ultimately the agent decides.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-4-silent-context-injection","level":2,"title":"Pattern 4: Silent Context Injection","text":"
Load context with no visible output. The agent gets enriched without either party noticing.
ctx agent --budget 4000 >/dev/null || true\n
When to use: Background context loading that should be invisible. The agent benefits from the information, but neither it nor the user needs to know it happened.
Hook type: PreToolUse with .* matcher (runs on every tool call).
Examples in ctx:
The ctx agent PreToolUse hook: Injects project context silently
Trade-off: Adds latency to every tool call. Keep the injected content small and fast to generate.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-5-silent-side-effect","level":2,"title":"Pattern 5: Silent Side-Effect","text":"
Do work, produce no output: Housekeeping that needs no acknowledgment.
find \"$CTX_TMPDIR\" -type f -mtime +15 -delete\n
When to use: Cleanup, log rotation, temp file management. Anything where the action is the point and nobody needs to know it happened.
Hook type: Any hook where output is irrelevant.
Examples in ctx:
Log rotation, marker file cleanup, state directory maintenance
Trade-off: None, if the action is truly invisible. If it can fail in a way that matters, consider logging.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-6-conditional-relay","level":3,"title":"Pattern 6: Conditional Relay","text":"
Tell the agent to relay only if a condition holds in context.
echo \"If the user's question involves modifying .context/ files,\"\necho \"relay this warning VERBATIM:\"\necho \"\"\necho \"┌─ Context Integrity ─────────────────────────────\"\necho \"│ CONSTITUTION.md has not been verified in 7 days.\"\necho \"└────────────────────────────────────────────────\"\necho \"\"\necho \"Otherwise, proceed normally.\"\n
When to use: Warnings that only matter in certain contexts. Avoids noise when the user is doing unrelated work.
Trade-off: Depends on the agent's judgment about when the condition holds. More fragile than VERBATIM relay, but less noisy.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-7-suggested-action","level":3,"title":"Pattern 7: Suggested Action","text":"
Give the agent a specific command to propose to the user.
echo \"┌─ Stale Dependencies ──────────────────────────\"\necho \"│ go.sum is 30+ days newer than go.mod.\"\necho \"│ Suggested: run \\`go mod tidy\\`\"\necho \"│ Ask the user before proceeding.\"\necho \"└───────────────────────────────────────────────\"\n
When to use: The hook detects a fixable condition and knows the fix. Goes beyond a nudge: Gives the agent a concrete next step. The agent still asks for permission but knows exactly what to propose.
Trade-off: The suggestion might be wrong or outdated. The \"ask the user before proceeding\" part is critical.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#pattern-8-escalating-severity","level":3,"title":"Pattern 8: Escalating Severity","text":"
Different urgency tiers with different relay expectations.
# INFO: agent processes silently, mentions if relevant\necho \"INFO: Last test run was 3 days ago.\"\n\n# WARN: agent should mention to user at next natural pause\necho \"WARN: 12 uncommitted changes across 3 branches.\"\n\n# CRITICAL: agent must relay immediately, before any other work\necho \"CRITICAL: Relay VERBATIM before answering. Disk usage at 95%.\"\n
When to use: When you have multiple hooks producing output and need to avoid overwhelming the user. INFO gets absorbed, WARN gets mentioned, CRITICAL interrupts.
Examples in ctx:
ctx system check-resources: Uses two tiers (WARNING/DANGER) internally but only fires the VERBATIM relay at DANGER level: WARNING is silent. See ctx system for the user-facing command that shows both tiers.
Trade-off: Requires agent training or convention to recognize the tiers. Without a shared protocol, the prefixes are just text.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#choosing-a-pattern","level":2,"title":"Choosing a Pattern","text":"
Is the agent about to do something forbidden?\n └─ Yes → Hard gate\n\nDoes the user need to see this regardless of what they asked?\n └─ Yes → VERBATIM relay\n └─ Sometimes → Conditional relay\n\nShould the agent consider an action?\n └─ Yes, with a specific fix → Suggested action\n └─ Yes, open-ended → Agent directive\n\nIs this background context the agent should have?\n └─ Yes → Silent injection\n\nIs this housekeeping?\n └─ Yes → Silent side-effect\n
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#design-tips","level":2,"title":"Design Tips","text":"
Throttle aggressively: VERBATIM relays that fire every prompt will be ignored or resented. Use once-per-day markers (touch $REMINDED), adaptive frequency (every Nth prompt), or staleness checks (only fire if condition persists).
Include actionable commands: \"You have 12 unimported sessions\" is less useful than \"You have 12 unimported sessions. Run: ctx journal import --all.\" Give the user (or agent) the exact next step.
Use box-drawing for visual structure: The ┌─ ─┐ │ └─ ─┘ pattern makes hook output visually distinct from agent prose. It also signals \"this is machine-generated, not agent opinion.\"
Test the silence path: Most hook runs should produce no output (the condition isn't met). Make sure the common case is fast and silent.
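The first and last tips combine naturally: With a once-per-day marker, the common case is a fast, silent exit. A minimal sketch (the marker path is illustrative, and `date -r FILE` is the GNU form; BSD date differs):

```shell
# Fire a nudge at most once per day; every other run is silent.
fire_once_per_day() {
  marker="$1"; shift
  # Silence path: marker already touched today, so produce no output.
  if [ -f "$marker" ] && [ "$(date -r "$marker" +%F 2>/dev/null)" = "$(date +%F)" ]; then
    return 0
  fi
  "$@"               # the actual nudge (e.g. echoing the VERBATIM box)
  touch "$marker"    # start today's throttle window
}
```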
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#common-pitfalls","level":2,"title":"Common Pitfalls","text":"
Lessons from 19 days of hook debugging in ctx. Every one of these was encountered, debugged, and fixed in production.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#silent-misfire-wrong-key-name","level":3,"title":"Silent Misfire: Wrong Key Name","text":"
{ \"PreToolUseHooks\": [ ... ] }\n
The key is PreToolUse, not PreToolUseHooks. Claude Code ignores unrecognized keys silently: A misspelled key means the hook never runs, with no error reported. Always test with a debug echo first to confirm the hook fires before adding real logic.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#json-escaping-breaks-shell-commands","level":3,"title":"JSON Escaping Breaks Shell Commands","text":"
Go's json.Marshal escapes >, <, and & as Unicode sequences (\\u003e) by default. This breaks shell commands in generated config:
\"command\": \"ctx agent 2\\u003e/dev/null\"\n
Fix: use json.Encoder with SetEscapeHTML(false) when generating hook configuration.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#stdin-not-environment-variables","level":3,"title":"stdin, Not Environment Variables","text":"
Hook input arrives as JSON via stdin, not environment variables:
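A minimal sketch of reading that payload. The sed extraction is illustrative only; a real hook should use a proper JSON parser such as jq:

```shell
# Hooks receive JSON like {"session_id":"...","tool_name":"..."} on stdin.
read_session_id() {
  sed -n 's/.*"session_id" *: *"\([^"]*\)".*/\1/p'
}

# Inside a hook script:
#   session_id=$(read_session_id)   # consumes stdin
```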
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#regex-overfitting","level":3,"title":"Regex Overfitting","text":"
A regex meant to catch ctx as a binary will also match ctx as a directory component:
# Too broad: blocks: git -C /home/jose/WORKSPACE/ctx status\n(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*\n\n# Narrow to binary only:\n(/home/|/tmp/|/var/)[^ ]*/ctx( |$)\n
Test hook regexes against paths that contain the target string as a substring, not just as the final component.
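A quick harness makes that check concrete: Feed the pattern a command line where ctx is only a directory component and see whether it fires. The broad pattern above fails this check:

```shell
# ctx appears here only as a directory name, not as the binary being run.
cmd='git -C /home/jose/WORKSPACE/ctx status'
broad='(/home/|/tmp/|/var/)[^ ]*ctx[^ ]*'

if printf '%s\n' "$cmd" | grep -qE "$broad"; then
  echo "false positive: the broad pattern would block this harmless command"
fi
```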
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#repetition-fatigue","level":3,"title":"Repetition Fatigue","text":"
Injecting context on every tool call sounds safe. In practice, after seeing the same context injection fifteen times, the agent treats it as background noise: Conventions stated in the injected context get violated because salience has been destroyed by repetition.
Fix: cooldowns. ctx agent --session $PPID --cooldown 10m injects at most once per ten minutes per session using a tombstone file in /tmp/. This is not an optimization; it is a correction for a design flaw. Every injection consumes attention budget: 50 tool calls at 4,000 tokens each means 200,000 tokens of repeated context, most of it wasted.
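The tombstone mechanics can be sketched as follows (stat flags shown GNU-first with a BSD fallback; the actual ctx implementation may differ):

```shell
# Return success (inject) only if the tombstone is absent or older than
# the window; otherwise stay silent until the cooldown expires.
cooldown_ok() {
  tomb="$1"; window="$2"
  now=$(date +%s)
  if [ -f "$tomb" ]; then
    mtime=$(stat -c %Y "$tomb" 2>/dev/null || stat -f %m "$tomb")
    [ $(( now - mtime )) -ge "$window" ] || return 1   # still cooling down
  fi
  touch "$tomb"   # start a new window
}

# cooldown_ok "/tmp/ctx-cooldown-$PPID" 600 && ctx agent --budget 4000
```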
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#hardcoded-paths","level":3,"title":"Hardcoded Paths","text":"
A username rename (from parallels to jose) broke every hook at once. Use $CLAUDE_PROJECT_DIR instead of absolute paths:
If the platform provides a runtime variable for paths, always use it.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#next-up","level":2,"title":"Next Up","text":"
Webhook Notifications →: Get push notifications when loops complete, hooks fire, or agents hit milestones.
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-output-patterns/#see-also","level":2,"title":"See Also","text":"
Customizing Hook Messages: override what hooks say without changing what they do
Claude Code Permission Hygiene: how permissions and hooks work together
Defense in Depth: why hooks matter for agent security
","path":["Recipes","Hooks and Notifications","Hook Output Patterns"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/","level":1,"title":"Hook Sequence Diagrams","text":"","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#hook-lifecycle","level":2,"title":"Hook Lifecycle","text":"
Every ctx hook is a Go binary invoked by Claude Code at one of three lifecycle events: PreToolUse (before a tool runs, can block), PostToolUse (after a tool completes), or UserPromptSubmit (on every user prompt, before any tools run). Hooks receive JSON on stdin and emit JSON or plain text on stdout.
This page documents the execution flow of every hook as a sequence diagram.
Daily check for unimported sessions and unenriched journal entries.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-journal\n participant State as .context/state/\n participant Journal as Journal dir\n participant Claude as Claude projects dir\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Check dir exists\n Hook->>Claude: Check dir exists\n alt either dir missing\n Hook-->>CC: (silent exit)\n end\n Hook->>Journal: Get newest entry mtime\n Hook->>Claude: Count .jsonl files newer than journal\n Hook->>Journal: Count unenriched entries\n alt unimported == 0 and unenriched == 0\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, variant, {counts})\n Note over Hook: variant: both | unimported | unenriched\n Hook-->>CC: Nudge box (counts)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
Per-session check for MEMORY.md changes since last sync.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-memory-drift\n participant State as .context/state/\n participant Mem as memory.Discover\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check session tombstone\n alt already nudged this session\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: DiscoverMemoryPath(projectRoot)\n alt auto memory not active\n Hook-->>CC: (silent exit)\n end\n Hook->>Mem: HasDrift(contextDir, sourcePath)\n alt no drift\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, fallback)\n Hook-->>CC: Nudge box (drift reminder)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch session tombstone
Tracks context file modification and nudges when edits happen without persisting context. Adaptive threshold based on prompt count.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-persistence\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Read persistence state {Count, LastNudge, LastMtime}\n alt first prompt (no state)\n Hook->>State: Initialize state {Count:1, LastNudge:0, LastMtime:now}\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Increment Count\n Hook->>Ctx: Get current context mtime\n alt context modified since LastMtime\n Hook->>State: Reset LastNudge = Count, update LastMtime\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: sinceNudge = Count - LastNudge\n Hook->>Hook: PersistenceNudgeNeeded(Count, sinceNudge)?\n alt threshold not reached\n Hook->>State: Write state\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, nudge, vars)\n Hook-->>CC: Nudge box (prompt count, time since last persist)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Update LastNudge = Count, write state
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-reminders\n participant Store as Reminders store\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>Store: ReadReminders()\n alt load error\n Hook-->>CC: (silent exit)\n end\n Hook->>Hook: Filter by due date (After <= today)\n alt no due reminders\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, reminders, {list})\n Hook-->>CC: Nudge box (reminder list + dismiss hints)\n Hook->>Hook: NudgeAndRelay(message)
Silent per-prompt pulse. Tracks prompt count, context modification, and token usage. The agent never sees this hook's output.
sequenceDiagram\n participant CC as Claude Code\n participant Hook as heartbeat\n participant State as .context/state/\n participant Ctx as .context/ files\n participant Notify as Webhook + EventLog\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Increment heartbeat counter\n Hook->>Ctx: Get latest context file mtime\n Hook->>State: Compare with last recorded mtime\n Hook->>State: Update mtime record\n Hook->>State: Read session token info\n Hook->>Notify: Send heartbeat notification\n Hook->>Notify: Append to event log\n Hook->>State: Write heartbeat log entry\n Note over Hook: No stdout - agent never sees this
sequenceDiagram\n participant CC as Claude Code\n participant Hook as check-backup-age\n participant State as .context/state/\n participant FS as Filesystem\n participant Tpl as Message Template\n\n CC->>Hook: stdin {session_id}\n Hook->>Hook: Check initialized + HookPreamble\n alt not initialized or paused\n Hook-->>CC: (silent exit)\n end\n Hook->>State: Check daily throttle marker\n alt throttled\n Hook-->>CC: (silent exit)\n end\n Hook->>FS: Check SMB mount (if env var set)\n Hook->>FS: Check backup marker file age\n alt no warnings\n Hook-->>CC: (silent exit)\n end\n Hook->>Tpl: LoadMessage(hook, warning, {Warnings})\n Hook-->>CC: Nudge box (warnings)\n Hook->>Hook: NudgeAndRelay(message)\n Hook->>State: Touch throttle marker
","path":["Hook Sequence Diagrams"],"tags":[]},{"location":"recipes/hook-sequence-diagrams/#throttling-summary","level":2,"title":"Throttling Summary","text":"
| Hook | Lifecycle | Throttle Type | Scope |
| --- | --- | --- | --- |
| context-load-gate | PreToolUse | One-shot marker | Per session |
| block-non-path-ctx | PreToolUse | None | Every match |
| qa-reminder | PreToolUse | None | Every git command |
| specs-nudge | PreToolUse | None | Every prompt |
| post-commit | PostToolUse | None | Every git commit |
| check-task-completion | PostToolUse | Configurable interval | Per session |
| check-context-size | UserPromptSubmit | Adaptive counter | Per session |
| check-ceremonies | UserPromptSubmit | Daily marker | Once per day |
| check-freshness | UserPromptSubmit | Daily marker | Once per day |
| check-journal | UserPromptSubmit | Daily marker | Once per day |
| check-knowledge | UserPromptSubmit | Daily marker | Once per day |
| check-map-staleness | UserPromptSubmit | Daily marker | Once per day |
| check-memory-drift | UserPromptSubmit | Session tombstone | Once per session |
| check-persistence | UserPromptSubmit | Adaptive counter | Per session |
| check-reminders | UserPromptSubmit | None | Every prompt |
| check-resources | UserPromptSubmit | None | Every prompt |
| check-version | UserPromptSubmit | Daily marker | Once per day |
| heartbeat | UserPromptSubmit | None | Every prompt |
| block-dangerous-commands | PreToolUse * | None | Every match |
| check-backup-age | UserPromptSubmit * | Daily marker | Once per day |
* Project-local hook (settings.local.json), not shipped with ctx.
Claude Code plan files (~/.claude/plans/*.md) are ephemeral: They contain structured context, an approach, and file lists, but they're orphaned after the session ends. The filenames are UUIDs, so you can't tell what's in them without opening each one.
How do you turn a useful plan into a permanent project spec?
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tldr","level":2,"title":"TL;DR","text":"
You: /ctx-import-plans\nAgent: [lists plans with dates and titles]\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\nYou: \"import 1\"\nAgent: [copies to specs/add-authentication-middleware.md]\n
Plans are copied (not moved) to specs/, slugified by their H1 heading.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"
| Tool | Type | Purpose |
| --- | --- | --- |
| /ctx-import-plans | Skill | List, filter, and import plan files to specs |
| /ctx-add-task | Skill | Optionally add a task referencing the spec |
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-1-list-available-plans","level":3,"title":"Step 1: List Available Plans","text":"
Invoke the skill and it lists plans with modification dates and titles:
You: /ctx-import-plans\n\nAgent: Found 3 plan files:\n 1. 2026-02-28 Add authentication middleware\n 2. 2026-02-27 Refactor database connection pool\n 3. 2026-02-25 Import plans skill\n Which plans would you like to import?\n
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-2-filter-optional","level":3,"title":"Step 2: Filter (Optional)","text":"
You can narrow the list with arguments:
| Argument | Effect |
| --- | --- |
| --today | Only plans modified today |
| --since YYYY-MM-DD | Only plans modified on or after the date |
| --all | Import everything without prompting |
| (none) | Interactive selection |
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-3-select-and-import","level":3,"title":"Step 3: Select and Import","text":"
Pick one or more plans by number:
You: \"import 1 and 3\"\n\nAgent: Imported 2 plan(s):\n ~/.claude/plans/abc123.md -> specs/add-authentication-middleware.md\n ~/.claude/plans/ghi789.md -> specs/import-plans-skill.md\n Want me to add tasks referencing these specs?\n
The agent reads the H1 heading from each plan and slugifies it for the filename. If a plan has no H1 heading, the original filename (minus extension) is used as the slug.
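The slugification can be approximated like this (a hypothetical helper; the skill's exact rules may differ):

```shell
# Lowercase, collapse every non-alphanumeric run to a single hyphen,
# trim leading/trailing hyphens.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-*//; s/-*$//'
}

# slugify "Add authentication middleware"  -> add-authentication-middleware
```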
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#step-4-add-follow-up-tasks-optional","level":3,"title":"Step 4: Add Follow-Up Tasks (Optional)","text":"
If you say yes, the agent creates tasks in TASKS.md that reference the imported specs:
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't need to remember the exact skill name:
| You say | What happens |
| --- | --- |
| "import my plans" | /ctx-import-plans (interactive) |
| "save today's plans as specs" | /ctx-import-plans --today |
| "import all plans from this week" | /ctx-import-plans --since ... |
| "turn that plan into a spec" | /ctx-import-plans (filtered) |
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#tips","level":2,"title":"Tips","text":"
Plans are copied, not moved: The originals stay in ~/.claude/plans/. Claude Code manages that directory; ctx doesn't delete from it.
Conflict handling: If specs/{slug}.md already exists, the agent asks whether to overwrite or pick a different name.
Specs are project memory: Once imported, specs are tracked in git and available to future sessions. Reference them from TASKS.md phase headers with Spec: specs/slug.md.
Pair with /ctx-implement: After importing a plan as a spec, use /ctx-implement to execute it step-by-step with verification.
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/import-plans/#see-also","level":2,"title":"See Also","text":"
Skills Reference: /ctx-import-plans: full skill description
The Complete Session: where plan import fits in the session flow
Tracking Work Across Sessions: managing tasks that reference imported specs
","path":["Recipes","Knowledge and Tasks","Importing Claude Code Plans"],"tags":[]},{"location":"recipes/knowledge-capture/","level":1,"title":"Persisting Decisions, Learnings, and Conventions","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-problem","level":2,"title":"The Problem","text":"
You debug a subtle issue, discover the root cause, and move on.
Three weeks later, a different session hits the same issue. The knowledge existed briefly in one session's memory but was never written down.
Architectural decisions suffer the same fate: you weigh trade-offs, pick an approach, and six sessions later the AI suggests the alternative you already rejected.
How do you make sure important context survives across sessions?
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tldr","level":2,"title":"TL;DR","text":"
/ctx-reflect # surface items worth persisting\n/ctx-add-decision \"Title\" # record with context/rationale/consequence\n/ctx-add-learning \"Title\" # record with context/lesson/application\n
Or just tell your agent: \"What have we learned this session?\"
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx add decision Command Record an architectural decision ctx add learning Command Record a gotcha, tip, or lesson ctx add convention Command Record a coding pattern or standard ctx reindex Command Rebuild both quick-reference indices ctx decision reindex Command Rebuild the DECISIONS.md index ctx learning reindex Command Rebuild the LEARNINGS.md index /ctx-add-decision Skill AI-guided decision capture with validation /ctx-add-learning Skill AI-guided learning capture with validation /ctx-add-convention Skill AI-guided convention recording with placement /ctx-reflect Skill Surface items worth persisting at breakpoints","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#the-workflow","level":2,"title":"The Workflow","text":"
Decision, Learning, or Convention?
If you chose between alternatives, it is a decision.
If you discovered something surprising, it is a learning.
If you are codifying a repeated pattern, it is a convention.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-1-understand-what-to-persist","level":3,"title":"Step 1: Understand What to Persist","text":"
Three context files serve different purposes:
Decisions (DECISIONS.md) answer \"why is it this way?\" They record trade-offs between alternatives with structured fields: context, rationale, and consequences. Decisions prevent re-debating settled questions.
Learnings (LEARNINGS.md) answer \"what did we discover the hard way?\" They record gotchas and debugging insights specific to this project with structured fields: context, lesson, and application.
Learnings prevent repeating past mistakes.
Conventions (CONVENTIONS.md) answer \"how do we do things here?\" They record patterns and standards. No structured fields required: just a name, a rule, and an example. Conventions keep code consistent across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-2-record-decisions","level":3,"title":"Step 2: Record Decisions","text":"
Decisions have three structured fields:
context,
rationale,
and consequences.
The easiest way to record one is to tell your agent:
You: \"Save that as a decision: we picked file-based cooldown\n tokens over env vars.\"\n\nAgent: \"Recorded in DECISIONS.md:\n - Decision: Use file-based cooldown tokens instead of env vars\n - Context: Hook subprocesses cannot persist env vars to parent shell\n - Rationale: File tokens survive across processes. Simpler than\n IPC. Cleanup is automatic via TTL.\n - Consequence: Tombstone files accumulate in /tmp. Cannot share\n state across machines.\"\n
The agent was present for the trade-off discussion. It fills in the structured fields from what it observed, detail that is hard to reconstruct later in a terminal one-liner.
When multiple options were considered, the agent includes rejected alternatives in the rationale automatically.
The /ctx-add-decision skill also guides you through the fields interactively. For quick decisions, it supports a Y-statement:
\"In the context of [situation], facing [constraint], we decided for [choice] and against [alternatives], to achieve [benefit], accepting that [trade-off].\"
CLI Command for Scripting and Automation
When no agent is in the loop (CI pipelines, shell scripts, hooks), use the CLI directly with structured flags:
ctx add decision \"Use file-based cooldown tokens instead of env vars\" \\\n --context \"Hook subprocesses cannot persist env vars to parent shell\" \\\n --rationale \"File tokens survive across processes. Simpler than IPC. Cleanup is automatic via TTL.\" \\\n --consequence \"Tombstone files accumulate in /tmp. Cannot share state across machines.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-3-record-learnings","level":3,"title":"Step 3: Record Learnings","text":"
Learnings have three structured fields: context, lesson, and application.
Again, your agent is the best tool for this. It was there when you discovered the gotcha and can express it more completely than a backslash-escaped terminal command:
You: \"That subprocess thing was painful. Save it as a learning.\"\n\nAgent: \"Added to LEARNINGS.md:\n - Learning: Claude Code hooks run in a subprocess\n - Context: Set env var in PreToolUse hook, but it was not visible\n in the main session\n - Lesson: Hook scripts execute in a child process. Env changes do\n not propagate to parent.\n - Application: Use tombstone files for hook-to-session\n communication. Never rely on hook env vars.\"\n
You said one sentence. The agent produced a structured entry with all three fields filled from the session context.
That is the point: the agent has the full conversation and can be far more expressive than what you would type at a command prompt.
The /ctx-add-learning skill applies three quality filters:
Could someone Google this in 5 minutes?
Is it specific to this codebase?
Did it take real effort to discover?
All three must pass.
Learnings capture principles and heuristics, not code snippets.
CLI Command for Scripting and Automation
When no agent is in the loop:
ctx add learning \"Claude Code hooks run in a subprocess\" \\\n --context \"Set env var in PreToolUse hook, but it was not visible in the main session\" \\\n --lesson \"Hook scripts execute in a child process. Env changes do not propagate to parent.\" \\\n --application \"Use tombstone files for hook-to-session communication. Never rely on hook env vars.\"\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-4-record-conventions","level":3,"title":"Step 4: Record Conventions","text":"
Conventions are simpler: just a name, a rule, and optionally a section.
These are short enough that either approach works:
You: \"We've been using kebab-case for every CLI flag. Codify that.\"\n\nAgent: \"Added to CONVENTIONS.md under Naming:\n 'Use kebab-case for all CLI flag names.'\"\n
Or from the terminal:
ctx add convention \"Use kebab-case for all CLI flag names\" --section \"Naming\"\n
Conventions work best for rules that come up repeatedly. Codify a pattern the third time you see it, not the first.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-5-reindex-after-manual-edits","level":3,"title":"Step 5: Reindex After Manual Edits","text":"
DECISIONS.md and LEARNINGS.md maintain a quick-reference index at the top: a compact table of date and title for each entry. The index updates automatically via ctx add, but falls out of sync after hand edits.
ctx reindex\n
This single command regenerates both indices. You can also reindex individually with ctx decision reindex or ctx learning reindex.
Run reindex after any manual edit. The index lets AI tools scan all entries without reading the full file, which matters when token budgets are tight.
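As a mental model, index regeneration can be sketched in a few lines of Python. The entry heading format here (## YYYY-MM-DD: Title) is a hypothetical stand-in; the real DECISIONS.md and LEARNINGS.md layout may differ:

```python
import re

# Hypothetical entry heading format: "## 2024-06-01: Title".
ENTRY = re.compile(r"^## (\d{4}-\d{2}-\d{2}): (.+)$")

def build_index(markdown: str) -> str:
    # Collect (date, title) pairs from matching headings
    rows = [m.groups() for line in markdown.splitlines()
            if (m := ENTRY.match(line))]
    lines = ["| Date | Title |", "| --- | --- |"]
    lines += [f"| {date} | {title} |" for date, title in rows]
    return "\n".join(lines)

doc = "\n".join([
    "## 2024-06-01: Use file-based cooldown tokens",
    "Body text...",
    "## 2024-06-14: Hooks run in a subprocess",
    "Body text...",
])
print(build_index(doc))
```

The compact table is what makes the token savings possible: an AI tool can scan two columns instead of reading every entry body.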
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-6-use-ctx-reflect-to-surface-what-to-capture","level":3,"title":"Step 6: Use /ctx-reflect to Surface What to Capture","text":"
Keep It Conversational
/ctx-reflect is not the only way to trigger reflection.
Agents trained on the ctx playbook naturally surface persist-worthy items at breakpoints, even without invoking the skill explicitly.
A conversational prompt like \"anything worth saving?\" or \"let's wrap up\" can trigger the same review.
The skill provides a structured checklist, but the behavior is available through natural conversation.
At natural breakpoints (after completing a feature, fixing a bug, or before ending a session) use /ctx-reflect to identify items worth persisting.
/ctx-reflect\n
The skill walks through learnings, decisions, tasks, and session notes, skipping categories with nothing to report. The output includes specific commands for each suggested persist:
This session implemented file-based cooldown for `ctx agent` and\ndiscovered that hook subprocesses cannot set env vars in the parent.\n\nI'd suggest persisting:\n- **Learning**: Hook subprocesses cannot propagate env vars\n `ctx add learning \"...\" --context \"...\" --lesson \"...\" --application \"...\"`\n- **Decision**: File-based cooldown tokens over env vars\n `ctx add decision \"...\" --context \"...\" --rationale \"...\" --consequence \"...\"`\n\nWant me to persist any of these?\n
The skill always asks before persisting.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#step-7-the-conversational-approach","level":3,"title":"Step 7: The Conversational Approach","text":"
The commands shown in Steps 2 to 4 are precise, but you rarely need to type them yourself. When working with an agent that has loaded the ctx playbook, natural language triggers the same operations.
You ask, the agent acts:
You say What the agent does \"What have we learned this session?\" Reviews recent work, identifies learnings, offers to persist \"Save that as a decision\" Captures the trade-off you just discussed with structured fields \"Any gotchas worth remembering?\" Scans recent work for non-obvious lessons \"Record that convention\" Adds the pattern you just agreed on to CONVENTIONS.md \"Let's wrap up\" Runs through the reflect checklist conversationally
Example exchange:
You: \"What did we learn from that debugging session?\"\n\nAgent: \"We discovered three things worth persisting:\n 1. Hook subprocesses can't propagate env vars to the parent. We\n spent 20 minutes on this before switching to tombstone files.\n 2. Tombstone files need a TTL or they accumulate in /tmp.\n 3. The cooldown logic must check file mtime, not just existence.\n Want me to save these as learnings?\"\n\nYou: \"Yes, all of them.\"\n\nAgent: \"Done. Added all three to LEARNINGS.md with structured fields.\"\n
The agent can also be proactive without being asked. The playbook encourages agents to persist context at natural milestones.
You may see:
After fixing a tricky bug: \"I noticed this was a non-obvious issue. Want me to save it as a learning?\"
After choosing between approaches: \"We just made a trade-off between file-based tokens and IPC. Should I record this as a decision?\"
At session milestones: the agent checks what is worth persisting and offers a brief summary.
When agents run unattended (no human in the loop), they may persist directly:
\"I've added the subprocess env var gotcha to LEARNINGS.md so we don't hit it again.\"
\"Recorded the decision to use file-based tokens over env vars in DECISIONS.md.\"
\"Marked the cooldown task done and added a follow-up for TTL cleanup.\"
This is behavior by design.
The playbook's self-check prompt, "If this session ended right now, would the next session know what happened?", drives agents to persist early and often rather than waiting for explicit instructions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#command-line-approach-scripting-and-automation","level":3,"title":"Command-line Approach (Scripting and Automation)","text":"
# Decision: record the trade-off\nctx add decision \"Use PostgreSQL over SQLite\" \\\n --context \"Need concurrent multi-user access\" \\\n --rationale \"SQLite locks on writes; Postgres handles concurrency\" \\\n --consequence \"Requires a database server; team needs Postgres training\"\n\n# Learning: record the gotcha\nctx add learning \"SQL migrations must be idempotent\" \\\n --context \"Deploy failed when migration ran twice after rollback\" \\\n --lesson \"CREATE TABLE without IF NOT EXISTS fails on retry\" \\\n --application \"Always use IF NOT EXISTS guards in migrations\"\n\n# Convention: record the pattern\nctx add convention \"API handlers return structured errors\" --section \"API\"\n\n# Reindex after manual edits\nctx reindex\n\n# Reflect at breakpoints (from AI assistant)\n# /ctx-reflect\n
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#conversational-approach-interactive-sessions","level":3,"title":"Conversational Approach (Interactive Sessions)","text":"
You: \"We just picked Postgres over SQLite for the concurrency\n reasons. Save that?\"\n\nAgent: \"Recorded. Added to DECISIONS.md:\n - Decision: Use PostgreSQL over SQLite\n - Context: Need concurrent multi-user access\n - Rationale: SQLite locks on writes; Postgres handles concurrency\n - Consequence: Requires a database server; team needs Postgres\n training\"\n\nYou: \"Also that migration thing. The deploy blew up when the\n migration ran twice.\"\n\nAgent: \"Got it. Added to LEARNINGS.md:\n - Learning: SQL migrations must be idempotent\n - Lesson: CREATE TABLE without IF NOT EXISTS fails on retry\n - Application: Always use IF NOT EXISTS guards in migrations\"\n\nYou: \"Good. Anything else worth saving before we wrap up?\"\n\nAgent: \"We've been using structured error returns in every API\n handler this session. Want me to codify that as a convention?\"\n\nYou: \"Do it.\"\n\nAgent: \"Done. Added 'API handlers return structured errors' to\n CONVENTIONS.md under the API section.\"\n
Both approaches produce the same structured entries in the same context files.
The conversational approach is the natural fit for interactive sessions;
the CLI commands are better suited for scripts, hooks, and automation pipelines.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#tips","level":2,"title":"Tips","text":"
Record decisions at the moment of choice. The alternatives you considered and the reasons you rejected them fade quickly. Capture trade-offs while they are fresh.
Learnings should fail the Google test. If someone could find the answer in a 5-minute Google search, it does not belong in LEARNINGS.md.
Conventions earn their place through repetition. Add a convention the third time you see a pattern, not the first.
Use /ctx-reflect at natural breakpoints. The checklist catches items you might otherwise lose.
Keep the entries self-contained. Each entry should make sense on its own. A future session may load only one due to token budget constraints.
Reindex after every hand edit. It takes less than a second. A stale index causes AI tools to miss entries.
Prefer the structured fields. The verbosity forces clarity. A decision without a rationale is just a fact. A learning without an application is just a story.
Talk to your agent, do not type commands. In interactive sessions, the conversational approach is the recommended way to capture knowledge. Say \"save that as a learning\" or \"any decisions worth recording?\" and let the agent handle the structured fields. Reserve the CLI commands for scripting, automation, and CI/CD pipelines where there is no agent in the loop.
Trust the agent's proactive instincts. Agents trained on the ctx playbook will offer to persist context at milestones. A brief \"want me to save this?\" is cheaper than re-discovering the same lesson three sessions later.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#next-up","level":2,"title":"Next Up","text":"
Tracking Work Across Sessions →: Add, prioritize, complete, and archive tasks across sessions.
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/knowledge-capture/#see-also","level":2,"title":"See Also","text":"
Tracking Work Across Sessions: managing the tasks that decisions and learnings support
The Complete Session: full session lifecycle including reflection and context persistence
Detecting and Fixing Drift: keeping knowledge files accurate as the codebase evolves
CLI Reference: full documentation for ctx add, ctx decision, ctx learning
Context Files: format and conventions for DECISIONS.md, LEARNINGS.md, and CONVENTIONS.md
","path":["Recipes","Knowledge and Tasks","Persisting Decisions, Learnings, and Conventions"],"tags":[]},{"location":"recipes/memory-bridge/","level":1,"title":"Bridging Claude Code Auto Memory","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#the-problem","level":2,"title":"The Problem","text":"
Claude Code maintains per-project auto memory at ~/.claude/projects/<slug>/memory/MEMORY.md. This file is:
Outside the repo - not version-controlled, not portable
Machine-specific - tied to one ~/.claude/ directory
Invisible to ctx - context loading and hooks don't read it
Meanwhile, ctx maintains structured context files (DECISIONS.md, LEARNINGS.md, CONVENTIONS.md) that are git-tracked, portable, and token-budgeted - but Claude Code doesn't automatically write to them.
The two systems hold complementary knowledge with no bridge between them.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#tldr","level":2,"title":"TL;DR","text":"
ctx memory sync # Mirror MEMORY.md into .context/memory/mirror.md\nctx memory status # Check for drift\nctx memory diff # See what changed since last sync\n
The check-memory-drift hook nudges automatically when MEMORY.md changes - you don't need to remember to sync manually.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx memory sync CLI command Copy MEMORY.md to mirror, archive previous ctx memory status CLI command Show drift, timestamps, line counts ctx memory diff CLI command Show changes since last sync ctx memory import CLI command Classify and promote entries to .context/ files ctx memory publish CLI command Push curated .context/ content to MEMORY.md ctx memory unpublish CLI command Remove published block from MEMORY.md ctx system check-memory-drift Hook Nudge when MEMORY.md has changed (once/session)","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#how-it-works","level":2,"title":"How It Works","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#discovery","level":3,"title":"Discovery","text":"
Claude Code encodes project paths as directory names under ~/.claude/projects/. The encoding replaces each / with -, which also yields a leading - for absolute paths.
ctx memory uses this encoding to locate MEMORY.md automatically from your project root - no configuration needed.
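The lookup can be sketched as follows; the example path is hypothetical, and the encoding is the simple substitution described above:

```python
from pathlib import Path

def claude_project_slug(project_root: str) -> str:
    # Replacing every "/" with "-" also turns the leading slash of an
    # absolute path into the leading "-" prefix described above.
    return project_root.replace("/", "-")

slug = claude_project_slug("/Users/alice/work/my-project")
memory_md = Path.home() / ".claude" / "projects" / slug / "memory" / "MEMORY.md"
print(slug)  # -Users-alice-work-my-project
```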
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#mirroring","level":3,"title":"Mirroring","text":"
When you run ctx memory sync:
The previous mirror is archived to .context/memory/archive/mirror-<timestamp>.md
MEMORY.md is copied to .context/memory/mirror.md
Sync state is updated in .context/state/memory-import.json
The mirror is git-tracked, so it travels with the project. Archives provide a fallback for projects that don't use git.
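The three steps above can be sketched in Python. The file names follow the layout described in this recipe, while the state-file fields are assumptions:

```python
import json, shutil, tempfile, time
from pathlib import Path

def sync(memory_md: Path, context_dir: Path) -> None:
    mirror = context_dir / "memory" / "mirror.md"
    archive_dir = context_dir / "memory" / "archive"
    state = context_dir / "state" / "memory-import.json"

    # 1. Archive the previous mirror (first sync: nothing to archive)
    if mirror.exists():
        archive_dir.mkdir(parents=True, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(mirror, archive_dir / f"mirror-{stamp}.md")

    # 2. Copy MEMORY.md to the mirror
    mirror.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(memory_md, mirror)

    # 3. Update sync state (fields here are illustrative)
    state.parent.mkdir(parents=True, exist_ok=True)
    state.write_text(json.dumps({"last_sync": time.time()}))

root = Path(tempfile.mkdtemp())
(root / "MEMORY.md").write_text("remembered fact\n")
sync(root / "MEMORY.md", root / ".context")
print((root / ".context" / "memory" / "mirror.md").read_text())
```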
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#drift-detection","level":3,"title":"Drift Detection","text":"
The check-memory-drift hook compares MEMORY.md's modification time against the mirror. When drift is detected, the agent sees:
┌─ Memory Drift ────────────────────────────────────────────────\n│ MEMORY.md has changed since last sync.\n│ Run: ctx memory sync\n│ Context: .context\n└────────────────────────────────────────────────────────────────\n
The nudge fires once per session to avoid noise.
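The comparison itself is just a modification-time check; a minimal sketch, assuming the paths from the layout above:

```python
import os, tempfile
from pathlib import Path

def has_drift(memory_md: Path, mirror: Path) -> bool:
    if not memory_md.exists():
        return False   # auto memory not active: skip silently
    if not mirror.exists():
        return True    # never synced yet
    return memory_md.stat().st_mtime > mirror.stat().st_mtime

d = Path(tempfile.mkdtemp())
mem, mir = d / "MEMORY.md", d / "mirror.md"
mem.write_text("x")
mir.write_text("x")
os.utime(mir, (1_000, 1_000))   # mirror synced long ago
os.utime(mem, (2_000, 2_000))   # MEMORY.md changed afterwards
print(has_drift(mem, mir))      # True
```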
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#typical-workflow","level":2,"title":"Typical Workflow","text":"","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#at-session-start","level":3,"title":"At Session Start","text":"
If the hook fires a drift nudge, sync before diving into work:
ctx memory diff # Review what changed\nctx memory sync # Mirror the changes\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#periodic-check","level":3,"title":"Periodic Check","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#dry-run","level":3,"title":"Dry Run","text":"
Preview what sync would do without writing:
ctx memory sync --dry-run\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#storage-layout","level":2,"title":"Storage Layout","text":"
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#edge-cases","level":2,"title":"Edge Cases","text":"Scenario Behavior Auto memory not active sync exits 1 with message. status reports \"not active\". Hook skips silently. First sync (no mirror) Creates mirror without archiving. MEMORY.md is empty Syncs to empty mirror (valid). Not initialized Init guard rejects (same as all ctx commands).","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#importing-entries","level":2,"title":"Importing Entries","text":"
Once you've synced, you can classify and promote entries into structured .context/ files:
Keywords Target always use, prefer, never use, standard CONVENTIONS.md decided, chose, trade-off, approach DECISIONS.md gotcha, learned, watch out, bug, caveat LEARNINGS.md todo, need to, follow up TASKS.md Everything else Skipped
Entries that don't match any pattern are skipped - they stay in the mirror for manual review. Deduplication (hash-based) prevents re-importing the same entry on subsequent runs.
Review Before Importing
Use --dry-run first. The heuristic classifier is deliberately simple - it may misclassify ambiguous entries. Review the plan, then import.
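The classifier can be approximated with a keyword scan plus a content hash for deduplication. The exact matching rules inside ctx may differ, so treat this as a sketch of the idea:

```python
import hashlib

RULES = [  # keyword -> target file, paraphrased from the table above
    (("always use", "prefer", "never use", "standard"), "CONVENTIONS.md"),
    (("decided", "chose", "trade-off", "approach"), "DECISIONS.md"),
    (("gotcha", "learned", "watch out", "bug", "caveat"), "LEARNINGS.md"),
    (("todo", "need to", "follow up"), "TASKS.md"),
]

def classify(entry: str):
    text = entry.lower()
    for keywords, target in RULES:
        if any(k in text for k in keywords):
            return target
    return None  # no match: entry stays in the mirror for manual review

def entry_hash(entry: str) -> str:
    # Hash-based dedup: an already-imported entry maps to a known digest
    return hashlib.sha256(entry.strip().encode()).hexdigest()

print(classify("We decided to use file-based tokens"))  # DECISIONS.md
print(classify("Gotcha: hooks run in a subprocess"))    # LEARNINGS.md
print(classify("Random note"))                          # None
```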
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-workflow","level":3,"title":"Full Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Preview what would be imported\nctx memory import # 3. Promote entries to .context/ files\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#publishing-context-to-memorymd","level":2,"title":"Publishing Context to MEMORY.md","text":"
Push curated .context/ content back into MEMORY.md so Claude Code sees structured project context on session start - without needing hooks.
ctx memory publish --dry-run # Preview what would be published\nctx memory publish # Write to MEMORY.md\nctx memory publish --budget 40 # Tighter line budget\n
ctx memory publish replaces only the content between its markers; the rest of MEMORY.md is left untouched.
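Marker-delimited replacement is what keeps your own MEMORY.md notes safe. A sketch of the idea, using hypothetical marker strings (ctx's actual markers may differ):

```python
BEGIN = "<!-- ctx:begin -->"   # hypothetical marker, not ctx's actual one
END = "<!-- ctx:end -->"

def publish(memory: str, block: str) -> str:
    payload = f"{BEGIN}\n{block}\n{END}"
    if BEGIN in memory and END in memory:
        head, rest = memory.split(BEGIN, 1)
        _, tail = rest.split(END, 1)
        return head + payload + tail   # replace only inside the markers
    # First publish: append a fresh marked block
    return memory.rstrip("\n") + "\n\n" + payload + "\n"

before = "my notes\n<!-- ctx:begin -->\nold block\n<!-- ctx:end -->\nmore notes"
print(publish(before, "new block"))
```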
To remove the published block entirely:
ctx memory unpublish\n
Publish at Wrap-Up, Not on Commit
The best time to publish is during session wrap-up, after persisting decisions and learnings. Never auto-publish - give yourself a chance to review what's going into MEMORY.md.
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/memory-bridge/#full-bidirectional-workflow","level":3,"title":"Full Bidirectional Workflow","text":"
ctx memory sync # 1. Mirror MEMORY.md\nctx memory import --dry-run # 2. Check what Claude wrote\nctx memory import # 3. Promote entries to .context/\nctx memory publish --dry-run # 4. Check what would be published\nctx memory publish # 5. Push context to MEMORY.md\n
","path":["Recipes","Knowledge and Tasks","Bridging Claude Code Auto Memory"],"tags":[]},{"location":"recipes/multi-tool-setup/","level":1,"title":"Setup Across AI Tools","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-problem","level":2,"title":"The Problem","text":"
You have installed ctx and want to set it up with your AI coding assistant so that context persists across sessions. Different tools have different integration depths. For example:
Claude Code supports native hooks that load and save context automatically.
Cursor injects context via its system prompt.
Aider reads context files through its --read flag.
This recipe walks through the complete setup for each tool, from initialization through verification, so you end up with a working memory layer regardless of which AI tool you use.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tldr","level":2,"title":"TL;DR","text":"
Create a .ctxrc in your project root to configure token budgets, context directory, drift thresholds, and more.
Then start your AI tool and ask: \"Do you remember?\"
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx init Create .context/ directory, templates, and permissions ctx setup Generate integration configuration for a specific AI tool ctx agent Print a token-budgeted context packet for AI consumption ctx load Output assembled context in read order (for manual pasting) ctx watch Auto-apply context updates from AI output (non-native tools) ctx completion Generate shell autocompletion for bash, zsh, or fish ctx journal import Import sessions to editable journal Markdown","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-1-initialize-ctx","level":3,"title":"Step 1: Initialize ctx","text":"
Run ctx init in your project root. This creates the .context/ directory with all template files and seeds ctx permissions in settings.local.json.
cd your-project\nctx init\n
This produces the following structure:
.context/\n CONSTITUTION.md # Hard rules the AI must never violate\n TASKS.md # Current and planned work\n CONVENTIONS.md # Code patterns and standards\n ARCHITECTURE.md # System overview\n DECISIONS.md # Architectural decisions with rationale\n LEARNINGS.md # Lessons learned, gotchas, tips\n GLOSSARY.md # Domain terms and abbreviations\n AGENT_PLAYBOOK.md # How AI tools should use this system\n
Using a Different .context Directory
The .context/ directory doesn't have to live inside your project. You can point ctx to an external folder via .ctxrc, the CTX_DIR environment variable, or the --context-dir CLI flag.
This is useful for monorepos or shared context across repositories.
See Configuration for details and External Context for a full recipe.
For Claude Code, install the ctx plugin to get hooks and skills:
claude /plugin marketplace add ActiveMemory/ctx\nclaude /plugin install ctx@activememory-ctx\n
If you only need the core files (useful for lightweight setups), use the --minimal flag:
ctx init --minimal\n
This creates only TASKS.md, DECISIONS.md, and CONSTITUTION.md.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-2-generate-tool-specific-hooks","level":3,"title":"Step 2: Generate Tool-Specific Hooks","text":"
If you are using a tool other than Claude Code (which is configured automatically by ctx init), generate its integration configuration:
# For Cursor\nctx setup cursor\n\n# For Aider\nctx setup aider\n\n# For GitHub Copilot\nctx setup copilot\n\n# For Windsurf\nctx setup windsurf\n
Each command prints the configuration you need. How you apply it depends on the tool.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#claude-code","level":4,"title":"Claude Code","text":"
No action needed. Just install ctx from the Marketplace as ActiveMemory/ctx.
Claude Code is a First-Class Citizen
With the ctx plugin installed, Claude Code gets hooks and skills automatically. The PreToolUse hook runs ctx agent --budget 4000 on every tool call (with a 10-minute cooldown so it only fires once per window).
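The cooldown mechanic can be illustrated with a token file whose mtime marks the start of the window. The function name and token path are hypothetical; only the 10-minute window comes from the docs:

```python
import tempfile, time
from pathlib import Path

COOLDOWN_SECONDS = 600  # the 10-minute window mentioned above

def should_refresh(token: Path) -> bool:
    """Return True at most once per cooldown window (illustrative sketch)."""
    now = time.time()
    if token.exists() and now - token.stat().st_mtime < COOLDOWN_SECONDS:
        return False               # still inside the window: stay quiet
    token.parent.mkdir(parents=True, exist_ok=True)
    token.touch()                  # start a new window
    return True

token = Path(tempfile.mkdtemp()) / "ctx-cooldown"
print(should_refresh(token))  # True  (first call opens the window)
print(should_refresh(token))  # False (window still active)
```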
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#cursor","level":4,"title":"Cursor","text":"
Add the system prompt snippet to .cursor/settings.json:
{\n \"ai.systemPrompt\": \"Read .context/TASKS.md and .context/CONVENTIONS.md before responding. Follow rules in .context/CONSTITUTION.md.\"\n}\n
Context files appear in Cursor's file tree. You can also paste a context packet directly into chat with ctx agent --budget 4000 | pbcopy.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#aider","level":4,"title":"Aider","text":"
Create .aider.conf.yml so context files are loaded on every session.
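A minimal sketch of such a config; the exact file list is an assumption, so prefer the output of ctx setup aider:

```yaml
# Illustrative .aider.conf.yml; adjust the list to what `ctx setup aider` prints
read:
  - .context/CONSTITUTION.md
  - .context/TASKS.md
  - .context/CONVENTIONS.md
```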
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-3-set-up-shell-completion","level":3,"title":"Step 3: Set Up Shell Completion","text":"
Shell completion lets you tab-complete ctx subcommands and flags, which is especially useful while learning the CLI.
# Bash (add to ~/.bashrc)\nsource <(ctx completion bash)\n\n# Zsh (add to ~/.zshrc)\nsource <(ctx completion zsh)\n\n# Fish\nctx completion fish > ~/.config/fish/completions/ctx.fish\n
After sourcing, typing ctx a<TAB> completes to ctx agent, and ctx journal <TAB> shows list, show, and export.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-4-verify-the-setup-works","level":3,"title":"Step 4: Verify the Setup Works","text":"
Start a fresh session in your AI tool and ask:
\"Do you remember?\"
A correctly configured tool responds with specific context: current tasks from TASKS.md, recent decisions, and previous session topics. It should not say \"I don't have memory\" or \"Let me search for files.\"
This question checks the passive side of memory. A properly set-up agent is also proactive: it treats context maintenance as part of its job:
After a debugging session, it offers to save a learning.
After a trade-off discussion, it asks whether to record the decision.
After completing a task, it suggests follow-up items.
The \"do you remember?\" check verifies both halves: recall and responsibility.
For example, after resolving a tricky bug, a proactive agent might say:
That Redis timeout issue was subtle. Want me to save this as a *learning*\nso we don't hit it again?\n
If you see behavior like this, the setup is working end to end.
In Claude Code, you can also invoke the /ctx-status skill:
/ctx-status\n
This prints a summary of all context files, token counts, and recent activity, confirming that hooks are loading context.
If context is not loading, check the basics:
Symptom Fix ctx: command not found Ensure ctx is in your PATH: which ctx Hook errors Verify plugin is installed: claude /plugin list Context not refreshing Cooldown may be active; wait 10 minutes or set --cooldown 0","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-5-enable-watch-mode-for-non-native-tools","level":3,"title":"Step 5: Enable Watch Mode for Non-Native Tools","text":"
Tools like Aider, Copilot, and Windsurf do not support native hooks for saving context automatically. For these, run ctx watch alongside your AI tool.
Pipe the AI tool's output through ctx watch:
# Terminal 1: Run Aider with output logged\naider 2>&1 | tee /tmp/aider.log\n\n# Terminal 2: Watch the log for context updates\nctx watch --log /tmp/aider.log\n
Or for any generic tool:
your-ai-tool 2>&1 | tee /tmp/ai.log &\nctx watch --log /tmp/ai.log\n
When the AI emits structured update commands, ctx watch parses and applies them automatically:
<context-update type=\"learning\"\n context=\"Debugging rate limiter\"\n lesson=\"Redis MULTI/EXEC does not roll back on error\"\n application=\"Wrap rate-limit checks in Lua scripts instead\"\n>Redis Transaction Behavior</context-update>\n
To preview changes without modifying files:
ctx watch --dry-run --log /tmp/ai.log\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#step-6-import-session-transcripts-optional","level":3,"title":"Step 6: Import Session Transcripts (Optional)","text":"
If you want to browse past session transcripts, import them to the journal:
ctx journal import --all\n
This converts raw session data into editable Markdown files in .context/journal/. You can then enrich them with metadata using /ctx-journal-enrich-all inside your AI assistant.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Here is the condensed setup for all three tools:
# ## Common (run once per project) ##\ncd your-project\nctx init\nsource <(ctx completion zsh) # or bash/fish\n\n# ## Claude Code (automatic, just verify) ##\n# Start Claude Code, then ask: \"Do you remember?\"\n\n# ## Cursor ##\nctx setup cursor\n# Add the system prompt to .cursor/settings.json\n# Paste context: ctx agent --budget 4000 | pbcopy\n\n# ## Aider ##\nctx setup aider\n# Create .aider.conf.yml with read: paths\n# Run watch mode alongside: ctx watch --log /tmp/aider.log\n\n# ## Verify any Tool ##\n# Ask your AI: \"Do you remember?\"\n# Expect: specific tasks, decisions, recent context\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#tips","level":2,"title":"Tips","text":"
Start with ctx init (not --minimal) for your first project. The full template set gives the agent more to work with, and you can always delete files later.
For Claude Code, the token budget is configured in the plugin's hooks.json. To customize, adjust the --budget flag in the ctx agent hook command.
The --session $PPID flag isolates cooldowns per Claude Code process, so parallel sessions do not suppress each other.
Commit your .context/ directory to version control. Several ctx features (journals, changelogs, blog generation) rely on git history.
For Cursor and Copilot, keep CONVENTIONS.md visible. These tools treat open files as higher-priority context.
Run ctx drift periodically to catch stale references before they confuse the agent.
The agent playbook instructs the agent to persist context at natural milestones (completed tasks, decisions, gotchas). In practice, this works best when you reinforce the habit: a quick \"anything worth saving?\" after a debugging session goes a long way.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#companion-tools-highly-recommended","level":2,"title":"Companion Tools (Highly Recommended)","text":"
ctx skills can leverage external MCP servers for web search and code intelligence. ctx works without them, but they significantly improve agent behavior across sessions — the investment is small and the benefits compound. Skills like /ctx-code-review, /ctx-explain, and /ctx-refactor all become noticeably better with these tools connected.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gemini-search","level":3,"title":"Gemini Search","text":"
Provides grounded web search with citations. Used by skills and the agent playbook as the preferred search backend (faster and more accurate than built-in web search).
Setup: Add the Gemini Search MCP server to your Claude Code settings. See the Gemini Search MCP documentation for installation.
Verification:
# The agent checks this automatically during /ctx-remember\n# Manual test: ask the agent to search for something\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#gitnexus","level":3,"title":"GitNexus","text":"
Provides a code knowledge graph with symbol resolution, blast radius analysis, and domain clustering. Used by skills like /ctx-refactor (impact analysis) and /ctx-code-review (dependency awareness).
Setup: Add the GitNexus MCP server to your Claude Code settings, then index your project:
npx gitnexus analyze\n
Verification:
# The agent checks this automatically during /ctx-remember\n# If the index is stale, it will suggest rehydrating\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#suppressing-the-check","level":3,"title":"Suppressing the Check","text":"
If you don't use companion tools and want to skip the availability check at session start, add to .ctxrc:
companion_check: false\n
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#future-direction","level":3,"title":"Future Direction","text":"
The companion tool integration is evolving toward a pluggable model: bring your own search engine, bring your own code intelligence. The current integration is MCP-based and limited to Gemini Search and GitNexus. If you use a different search or code intelligence tool, skills will degrade gracefully to built-in capabilities.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#next-up","level":2,"title":"Next Up","text":"
Keeping Context in a Separate Repo →: Store context files outside the project tree for multi-repo or open source setups.
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multi-tool-setup/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle recipe
Multilingual Session Parsing: configure session header prefixes for other languages
CLI Reference: all commands and flags
Integrations: detailed per-tool integration docs
","path":["Recipes","Getting Started","Setup Across AI Tools"],"tags":[]},{"location":"recipes/multilingual-sessions/","level":1,"title":"Multilingual Session Parsing","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#the-problem","level":2,"title":"The Problem","text":"
Your team works across languages. Session files written by AI tools might use headers like # Oturum: 2026-01-15 - API Düzeltme (Turkish) or # セッション: 2026-01-15 - テスト (Japanese) instead of # Session: 2026-01-15 - Fix API.
By default, ctx only recognizes Session: as a session header prefix. Files with other prefixes are silently skipped during journal import and journal generation: They look like regular Markdown, not sessions. The fix is to list the prefixes you use under session_prefixes in .ctxrc:
session_prefixes:\n - \"Session:\" # English (include to keep default)\n - \"Oturum:\" # Turkish\n - \"セッション:\" # Japanese\n
Restart your session. All configured prefixes are now recognized.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#how-it-works","level":2,"title":"How It Works","text":"
The Markdown session parser detects session files by looking for an H1 header that starts with a known prefix followed by a date:
# Session: 2026-01-15 - Fix API Rate Limiting\n# Oturum: 2026-01-15 - API Düzeltme\n# セッション: 2026-01-15 - テスト\n
The list of recognized prefixes comes from session_prefixes in .ctxrc. When the key is absent or empty, ctx falls back to the built-in default: [\"Session:\"].
Date-only headers (# 2026-01-15 - Morning Work) are always recognized regardless of prefix configuration.
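As a rough sketch of the matching rule (illustrative only, not ctx's actual implementation; the prefix list here is hardcoded for the demo):

```shell
# Sketch: an H1 whose text starts with an optional known prefix, then an ISO date.
is_session_header() {
  printf '%s\n' "$1" | grep -Eq '^# ((Session|Oturum|セッション): )?[0-9]{4}-[0-9]{2}-[0-9]{2}'
}

is_session_header '# Session: 2026-01-15 - Fix API Rate Limiting' && echo "session"
is_session_header '# 2026-01-15 - Morning Work'                   && echo "session"
is_session_header '# Notes on Redis'                              || echo "not a session"
```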
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#configuration","level":2,"title":"Configuration","text":"","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#adding-a-language","level":3,"title":"Adding a language","text":"
Add the prefix with a trailing colon to your .ctxrc:
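For example, using the prefix entries shown earlier on this page (note the trailing colon on each):

```
session_prefixes:
  - "Session:"   # keep the English default
  - "Oturum:"    # Turkish
```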
When you override session_prefixes, the default is replaced, not extended. If you still want English headers recognized, include \"Session:\" in your list.
Commit .ctxrc to the repo so all team members share the same prefix list. This ensures ctx journal import and journal generation pick up sessions from all team members regardless of language.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#common-prefixes","level":3,"title":"Common prefixes","text":"Language Prefix English Session: Turkish Oturum: Spanish Sesión: French Session: German Sitzung: Japanese セッション: Korean 세션: Portuguese Sessão: Chinese 会话:","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#verifying","level":3,"title":"Verifying","text":"
After configuring, test with ctx journal source. Sessions with the new prefixes should appear in the output.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/multilingual-sessions/#what-this-does-not-do","level":2,"title":"What This Does NOT Do","text":"
Change the interface language: ctx output is always English. This setting only controls which session files ctx can parse.
Generate headers: ctx never writes session headers. The prefix list is recognition-only (input, not output).
Affect JSONL sessions: Claude Code JSONL transcripts don't use header prefixes. This only applies to Markdown session files in .context/sessions/.
See also: Setup Across AI Tools - complete multi-tool setup including Markdown session configuration.
See also: CLI Reference - full .ctxrc field reference including session_prefixes.
","path":["Recipes","Getting Started","Multilingual Session Parsing"],"tags":[]},{"location":"recipes/parallel-worktrees/","level":1,"title":"Parallel Agent Development with Git Worktrees","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-problem","level":2,"title":"The Problem","text":"
You have a large backlog (10, 20, 30 open tasks) and many of them are independent: docs work that doesn't touch Go code, a new package that doesn't overlap with existing ones, test coverage for a stable module.
Running one agent at a time means serial execution. You want 3-4 agents working in parallel, each on its own track, without stepping on each other's files.
Git worktrees solve this.
Each worktree is a separate working directory with its own branch, but they share the same .git object database. Combined with ctx's persistent context, each agent session picks up the full project state and works independently.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tldr","level":2,"title":"TL;DR","text":"
/ctx-worktree # 1. group tasks by file overlap\ngit worktree add ../myproject-docs -b work/docs # 2. create worktrees\ncd ../myproject-docs && claude # 3. launch agents (one per track)\n/ctx-worktree teardown docs # 4. merge back and clean up\n
TASKS.md will conflict on merge: Accept all [x] completions from both sides.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-worktree Skill Create, list, and tear down worktrees /ctx-next Skill Pick tasks from the backlog for each track git worktree Command Underlying git worktree management git merge Command Merge completed tracks back to main","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-1-assess-the-backlog","level":3,"title":"Step 1: Assess the Backlog","text":"
Start in your main checkout. Ask the agent to analyze your tasks and group them by blast radius: which files and directories each task touches.
/ctx-worktree\nLook at TASKS.md and group the pending tasks into 2-3 independent\ntracks based on which files they'd touch. Show me the grouping\nbefore creating anything.\n
The agent reads TASKS.md, estimates file overlap, and proposes groups:
Proposed worktree groups:\n\n work/docs # recipe updates, blog post (touches: docs/)\n work/crypto # scratchpad encryption infra (touches: internal/crypto/)\n work/tests # journal test coverage (touches: internal/cli/journal/)\n
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-2-create-the-worktrees","level":3,"title":"Step 2: Create the Worktrees","text":"
Once you approve the grouping, the agent creates worktrees as sibling directories:
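Under the hood this is plain git worktree add. A throwaway demo (the project and track names are illustrative):

```shell
# Create a scratch repo to play in
cd "$(mktemp -d)"
git init -q myproject && cd myproject
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One sibling worktree per track, each on its own work/* branch
git worktree add ../myproject-docs   -b work/docs
git worktree add ../myproject-crypto -b work/crypto

git worktree list   # main checkout plus the two new worktrees
```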
Each worktree is a full working copy on its own branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-3-launch-agents","level":3,"title":"Step 3: Launch Agents","text":"
Open a separate terminal (or editor window) for each worktree and start a Claude Code session:
Each agent sees the full project, including .context/, and can work independently.
Do Not Initialize Context in Worktrees
Do not run ctx init in worktrees: The .context directory is already tracked in git.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-4-work","level":3,"title":"Step 4: Work","text":"
Each agent works through its assigned tasks. They can read TASKS.md to know what's assigned to their track, use /ctx-next to pick the next item, and commit normally on their work/* branch.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-5-merge-back","level":3,"title":"Step 5: Merge Back","text":"
As each track finishes, return to the main checkout and merge:
/ctx-worktree teardown docs\n
The agent checks for uncommitted changes, merges work/docs into your current branch, removes the worktree, and deletes the branch.
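Roughly, that teardown amounts to the following git sequence (demoed here in a throwaway repo; names are illustrative):

```shell
# Set up a scratch repo with one worktree that has a commit on work/docs
cd "$(mktemp -d)"
git init -q myproject && cd myproject
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git worktree add ../myproject-docs -b work/docs
git -C ../myproject-docs -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "docs work"

# Manual equivalent of the teardown: merge, remove, delete branch
git merge -q work/docs                  # fast-forwards the main branch
git worktree remove ../myproject-docs   # refuses if there are uncommitted changes
git branch -d work/docs                 # -d is the safe delete: fails if unmerged
```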
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-6-handle-tasksmd-conflicts","level":3,"title":"Step 6: Handle TASKS.md Conflicts","text":"
TASKS.md will almost always conflict when merging: Multiple agents will mark different tasks as [x]. This is expected and easy to resolve:
Accept all completions from both sides. No task should go from [x] back to [ ]. The merge resolution is always additive.
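For instance, a conflicted hunk might look like this (task names are made up):

```
<<<<<<< HEAD
- [x] T3.1: Update recipe docs
- [ ] T4.2: Add journal test coverage
=======
- [ ] T3.1: Update recipe docs
- [x] T4.2: Add journal test coverage
>>>>>>> work/tests
```

The resolution keeps both [x] lines: T3.1 and T4.2 both stay completed.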
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#step-7-cleanup","level":3,"title":"Step 7: Cleanup","text":"
After all tracks are merged, verify everything is clean:
/ctx-worktree list\n
Should show only the main working tree. All work/* branches should be gone.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#conversational-approach","level":2,"title":"Conversational Approach","text":"
You don't have to use the skill directly for every step. These natural prompts work:
\"I have a big backlog. Can we split it across worktrees?\"
\"Which of these tasks can run in parallel without conflicts?\"
\"Merge the docs track back in.\"
\"Clean up all the worktrees, we're done.\"
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#what-works-differently-in-worktrees","level":2,"title":"What Works Differently in Worktrees","text":"
The encryption key lives at ~/.ctx/.ctx.key (user-level, outside the project). Because all worktrees on the same machine share this path, ctx pad and ctx notify work in worktrees automatically - no special setup needed.
One thing to watch:
Journal enrichment: ctx journal import and ctx journal enrich write files relative to the current working directory. Enrichments created in a worktree stay there and are discarded on teardown. Enrich journals on the main branch after merging: the JSONL session logs are always intact, and you don't lose any data.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#tips","level":2,"title":"Tips","text":"
3-4 worktrees max. Beyond that, merge complexity outweighs the parallelism benefit. The skill enforces this limit.
Group by package or directory, not by priority. Two high-priority tasks that touch the same files must be in the same track.
TASKS.md will conflict on merge. This is normal. Accept all [x] completions: The resolution is always additive.
Don't run ctx init in worktrees. The .context/ directory is tracked in git. Running init overwrites shared context files.
Name worktrees by concern, not by number. work/docs and work/crypto are more useful than work/track-1 and work/track-2.
Commit frequently in each worktree. Smaller commits make merge conflicts easier to resolve.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#next-up","level":2,"title":"Next Up","text":"
Back to the beginning: Guide Your Agent →
Or explore the full recipe list.
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/parallel-worktrees/#see-also","level":2,"title":"See Also","text":"
Running an Unattended AI Agent: for serial autonomous loops instead of parallel tracks
Tracking Work Across Sessions: managing the task backlog that feeds into parallelization
The Complete Session: the complete session workflow end-to-end, with examples
","path":["Recipes","Agents and Automation","Parallel Agent Development with Git Worktrees"],"tags":[]},{"location":"recipes/permission-snapshots/","level":1,"title":"Permission Snapshots","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#the-problem","level":2,"title":"The Problem","text":"
Claude Code's .claude/settings.local.json accumulates one-off permissions every time you click "Allow". After busy sessions the file is full of session-specific entries that expand the agent's surface area beyond what you intended.
Since settings.local.json is .gitignored, there is no PR review or CI check. The file drifts independently on every machine, and there is no built-in way to reset to a known-good state.
/ctx-sanitize-permissions # audit for dangerous patterns\nctx permission snapshot # save golden image\n# ... sessions accumulate cruft ...\nctx permission restore # reset to golden state\n
Save a curated settings.local.json as a golden image, then restore from it to drop session-accumulated permissions. The golden file (.claude/settings.golden.json) is committed to version control and shared with the team.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Command/Skill Role in this workflow ctx permission snapshot Save settings.local.json as golden image ctx permission restore Reset settings.local.json from golden image /ctx-sanitize-permissions Audit for dangerous patterns before snapshotting","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#step-by-step","level":2,"title":"Step by Step","text":"","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#1-curate-your-permissions","level":3,"title":"1. Curate Your Permissions","text":"
Start with a clean settings.local.json. Optionally run /ctx-sanitize-permissions to remove dangerous patterns first.
Review the file manually. Every entry should be there because you decided it belongs, not because you clicked \"Allow\" once during debugging.
See the Permission Hygiene recipe for recommended defaults.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#2-take-a-snapshot","level":3,"title":"2. Take a Snapshot","text":"
ctx permission snapshot\n# Saved golden image: .claude/settings.golden.json\n
This creates a byte-for-byte copy. No re-encoding, no indent changes.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#3-commit-the-golden-file","level":3,"title":"3. Commit the Golden File","text":"
git add .claude/settings.golden.json\ngit commit -m \"Add permission golden image\"\n
The golden file is not gitignored (unlike settings.local.json). This is intentional: it becomes a team-shared baseline.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#4-auto-restore-at-the-session-start","level":3,"title":"4. Auto-Restore at the Session Start","text":"
Add this instruction to your CLAUDE.md:
## On Session Start\n\nRun `ctx permission restore` to reset permissions to the golden image.\n
The agent will restore the golden image at the start of every session, automatically dropping any permissions accumulated during previous sessions.
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#5-update-when-intentional-changes-are-made","level":3,"title":"5. Update When Intentional Changes Are Made","text":"
When you add a new permanent permission (not a one-off debugging entry):
# Edit settings.local.json with the new permission\n# Then update the golden image:\nctx permission snapshot\ngit add .claude/settings.golden.json\ngit commit -m \"Update permission golden image: add cargo test\"\n
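To see what a session has accumulated before deciding, a plain file diff is enough. A throwaway demo (the permission entries here are fabricated stand-ins for real settings files):

```shell
# Fabricated golden and local files, standing in for .claude/settings.*.json
cd "$(mktemp -d)" && mkdir .claude
printf '{"permissions":{"allow":["Bash(go test:*)"]}}\n' \
  > .claude/settings.golden.json
printf '{"permissions":{"allow":["Bash(go test:*)","Bash(rm:*)"]}}\n' \
  > .claude/settings.local.json

# Anything only in settings.local.json is session-accumulated
diff .claude/settings.golden.json .claude/settings.local.json || true
```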
You don't need to remember exact commands. These natural-language prompts work with agents trained on the ctx playbook:
| What you say | What happens |\n| --- | --- |\n| "Save my current permissions as baseline" | Agent runs ctx permission snapshot |\n| "Reset permissions to the golden image" | Agent runs ctx permission restore |\n| "Clean up my permissions" | Agent runs /ctx-sanitize-permissions then snapshot |\n| "What permissions did I accumulate?" | Agent diffs local vs golden |","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/permission-snapshots/#next-up","level":2,"title":"Next Up","text":"
Turning Activity into Content →: Generate blog posts, changelogs, and journal sites from your project activity.
Permission Hygiene: recommended defaults and maintenance workflow
CLI Reference: ctx permission: full command documentation
","path":["Recipes","Maintenance","Permission Snapshots"],"tags":[]},{"location":"recipes/publishing/","level":1,"title":"Turning Activity into Content","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-problem","level":2,"title":"The Problem","text":"
Your .context/ directory is full of decisions, learnings, and session history.
Your git log tells the story of a project evolving.
But none of this is visible to anyone outside your terminal.
You want to turn this raw activity into:
a browsable journal site,
blog posts,
changelog posts.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tldr","level":2,"title":"TL;DR","text":"
ctx journal import --all # 1. import sessions to markdown\n\n/ctx-journal-enrich-all # 2. add metadata and tags\n\nctx journal site --serve # 3. build and serve the journal\n\n/ctx-blog about the caching layer # 4. draft a blog post\n/ctx-blog-changelog v0.1.0 \"v0.2\" # 5. write a changelog post\n
Read on for details on each stage.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal import Command Import session JSONL to editable markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) ctx site feed Command Generate Atom feed from finalized blog posts make journal Makefile Shortcut for import + site rebuild /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich (recommended) /ctx-journal-enrich Skill Add metadata, summaries, and tags to one entry /ctx-blog Skill Draft a blog post from recent project activity /ctx-blog-changelog Skill Write a themed post from a commit range","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-1-import-sessions-to-markdown","level":3,"title":"Step 1: Import Sessions to Markdown","text":"
Raw session data lives as JSONL files in Claude Code's internal storage. The first step is converting these into readable, editable markdown.
# Import all sessions from the current project\nctx journal import --all\n\n# Import from all projects (if you work across multiple repos)\nctx journal import --all --all-projects\n\n# Import a single session by ID or slug\nctx journal import abc123\nctx journal import gleaming-wobbling-sutherland\n
Imported files land in .context/journal/ as individual Markdown files with session metadata and the full conversation transcript.
--all is safe by default: Only new sessions are imported. Existing files are skipped. Use --regenerate to re-import existing files (YAML frontmatter is preserved). Use --regenerate --keep-frontmatter=false -y to regenerate everything including frontmatter.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-2-enrich-entries-with-metadata","level":3,"title":"Step 2: Enrich Entries with Metadata","text":"
Raw entries have timestamps and conversations but lack the structured metadata that makes a journal searchable. Use /ctx-journal-enrich-all to process your entire backlog at once:
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
For large backlogs (20+ entries), it can spawn subagents to process entries in parallel.
This metadata powers better navigation in the journal site:
titles replace slugs,
summaries appear in the index,
and search covers topics and technologies.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-3-generate-the-journal-site","level":3,"title":"Step 3: Generate the Journal Site","text":"
With entries exported and enriched, generate the static site:
# Generate site files\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally (opens at http://localhost:8000)\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default. It uses zensical for static site generation (pipx install zensical).
Or use the Makefile shortcut that combines export and rebuild:
make journal\n
This runs ctx journal import --all followed by ctx journal site --build, then reminds you to enrich before rebuilding. To serve the built site, use make journal-serve or ctx serve (serve-only, no regeneration).
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#alternative-export-to-obsidian-vault","level":3,"title":"Alternative: Export to Obsidian Vault","text":"
If you use Obsidian for knowledge management, generate a vault instead of (or alongside) the static site:
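Generating the vault is a single command (it appears in the command table above; run it from the project root):

```
ctx journal obsidian
```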
This produces an Obsidian-ready directory with wikilinks, MOC (Map of Content) pages for topics/files/types, and a \"Related Sessions\" footer on each entry for graph connectivity. Open the output directory in Obsidian as a vault.
The vault uses the same enriched source entries as the static site. Both outputs can coexist: The static site goes to .context/journal-site/, the vault to .context/journal-obsidian/.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-4-draft-blog-posts-from-activity","level":3,"title":"Step 4: Draft Blog Posts from Activity","text":"
When your project reaches a milestone worth sharing, use /ctx-blog to draft a post from recent activity. The skill gathers context from multiple sources: git log, DECISIONS.md, LEARNINGS.md, completed tasks, and journal entries.
/ctx-blog about the caching layer we just built\n/ctx-blog last week's refactoring work\n/ctx-blog lessons learned from the migration\n
The skill gathers recent commits, decisions, and learnings; identifies a narrative arc; drafts an outline for approval; writes the full post; and saves it to docs/blog/YYYY-MM-DD-slug.md.
Posts are written in first person with code snippets, commit references, and an honest discussion of what went wrong.
The Output is zensical-Flavored Markdown
The blog skills produce Markdown tuned for a zensical site: topics: frontmatter (zensical's tag field), a docs/blog/ output path, and a banner image reference.
The content is still standard Markdown and can be adapted to other static site generators, but the defaults assume a zensical project structure.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-5-write-changelog-posts-from-commit-ranges","level":3,"title":"Step 5: Write Changelog Posts from Commit Ranges","text":"
For release notes or \"what changed\" posts, /ctx-blog-changelog takes a starting commit and a theme, then analyzes everything that changed:
/ctx-blog-changelog 040ce99 \"building the journal system\"\n/ctx-blog-changelog HEAD~30 \"what's new in v0.2.0\"\n/ctx-blog-changelog v0.1.0 \"the road to v0.2.0\"\n
The skill diffs the commit range, identifies the most-changed files, and constructs a narrative organized by theme rather than chronology, including a key commits table and before/after comparisons.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#step-6-generate-the-blog-feed","level":3,"title":"Step 6: Generate the Blog Feed","text":"
After publishing blog posts, generate the Atom feed so readers and automation can discover new content:
ctx site feed\n
This scans docs/blog/ for finalized posts (reviewed_and_finalized: true), extracts title, date, author, topics, and summary, and writes a valid Atom 1.0 feed to site/feed.xml. The feed is also generated automatically as part of make site.
The feed is available at ctx.ist/feed.xml.
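As an illustration, a post the feed would pick up might carry frontmatter like this. The exact key names beyond reviewed_and_finalized and topics are assumptions based on the field list above, not confirmed ctx output:

```
---
title: "What's New in v0.2.0"
date: 2026-01-20
author: Jane Doe
topics: [release, changelog]
summary: Theme-organized notes on everything since v0.1.0.
reviewed_and_finalized: true
---
```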
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#the-conversational-approach","level":2,"title":"The Conversational Approach","text":"
You can also drive your publishing anytime with natural language:
\"write about what we did this week\"\n\"turn today's session into a blog post\"\n\"make a changelog post covering everything since the last release\"\n\"enrich the last few journal entries\"\n
The agent has full visibility into your .context/ state (tasks completed, decisions recorded, learnings captured), so its suggestions are grounded in what actually happened.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
The full pipeline from raw transcripts to published content:
# 1. Import all sessions\nctx journal import --all\n\n# 2. In Claude Code: enrich all entries with metadata\n/ctx-journal-enrich-all\n\n# 3. Build and serve the journal site\nmake journal\nmake journal-serve\n\n# 3b. Or generate an Obsidian vault\nctx journal obsidian\n\n# 4. In Claude Code: draft a blog post\n/ctx-blog about the features we shipped this week\n\n# 5. In Claude Code: write a changelog post\n/ctx-blog-changelog v0.1.0 \"what's new in v0.2.0\"\n
The journal pipeline is idempotent at every stage. You can rerun ctx journal import --all without losing enrichment. You can rebuild the site as many times as you want.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#tips","level":2,"title":"Tips","text":"
Import regularly. Run ctx journal import --all after each session to keep your journal current. Only new sessions are imported: Existing files are skipped by default.
Use batch enrichment. /ctx-journal-enrich-all filters noise (suggestion sessions, trivial sessions, multipart continuations) so you do not have to decide what is worth enriching.
Keep journal files in .gitignore. Session journals can contain sensitive data: file contents, commands, internal discussions, and error messages with stack traces. Add .context/journal/ and .context/journal-site/ to .gitignore.
Use /ctx-blog for narrative posts and /ctx-blog-changelog for release posts. One finds a story in recent activity, the other explains a commit range by theme.
Edit the drafts. These skills produce drafts, not final posts. Review the narrative, add your perspective, and remove anything that does not serve the reader.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#next-up","level":2,"title":"Next Up","text":"
Running an Unattended AI Agent →: Set up an AI agent that works through tasks overnight without you at the keyboard.
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/publishing/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx serve: serve-only (no regeneration)
Browsing and Enriching Past Sessions: journal browsing workflow
The Complete Session: capturing context during a session
","path":["Recipes","Maintenance","Turning Activity into Content"],"tags":[]},{"location":"recipes/scratchpad-sync/","level":1,"title":"Syncing Scratchpad Notes Across Machines","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-problem","level":2,"title":"The Problem","text":"
You work from multiple machines: a desktop and a laptop, or a local machine and a remote dev server.
The scratchpad entries are encrypted. The ciphertext (.context/scratchpad.enc) travels with git, but the encryption key lives outside the project at ~/.ctx/.ctx.key and is never committed. Without the key on each machine, you cannot read or write entries.
How do you distribute the key and keep the scratchpad in sync?
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tldr","level":2,"title":"TL;DR","text":"
ctx init # 1. generates key\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key # 2. copy key\nchmod 600 ~/.ctx/.ctx.key # 3. secure it\n# Normal git push/pull syncs the encrypted scratchpad.enc\n# On conflict: ctx pad resolve → rebuild → git add + commit\n
Finding Your Key File
The key is always at ~/.ctx/.ctx.key: one key per machine, at one fixed path.
Treat the Key Like a Password
The scratchpad key is the only thing protecting your encrypted entries.
Store a backup in a secure store such as a password manager, and treat it with the same care you would give passwords, certificates, or API tokens.
Anyone with the key can decrypt every scratchpad entry.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx init CLI command Initialize context (generates the key automatically) ctx pad add CLI command Add a scratchpad entry ctx pad rm CLI command Remove a scratchpad entry ctx pad edit CLI command Edit a scratchpad entry ctx pad resolve CLI command Show both sides of a merge conflict ctx pad merge CLI command Merge entries from other scratchpad files ctx pad import CLI command Bulk-import lines from a file ctx pad export CLI command Export blob entries to a directory scp Shell Copy the key file between machines git push / git pull Shell Sync the encrypted file via git/ctx-pad Skill Natural language interface to pad commands","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-1-initialize-on-machine-a","level":3,"title":"Step 1: Initialize on Machine A","text":"
Run ctx init on your first machine. The key is created automatically at ~/.ctx/.ctx.key:
ctx init\n# ...\n# Created ~/.ctx/.ctx.key (0600)\n# Created .context/scratchpad.enc\n
The key lives outside the project directory and is never committed. The .enc file is tracked in git.
Key Folder Change (v0.7.0+)
If you built ctx from source or upgraded past v0.6.0, the key location changed to ~/.ctx/.ctx.key. Check these legacy folders and copy your key manually:
# Old locations (pick whichever exists)\nls ~/.local/ctx/keys/ # pre-v0.7.0 user-level\nls .context/.ctx.key # pre-v0.6.0 project-local\n\n# Copy to the new location\nmkdir -p ~/.ctx && chmod 700 ~/.ctx\ncp <old-key-path> ~/.ctx/.ctx.key\nchmod 600 ~/.ctx/.ctx.key\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-2-copy-the-key-to-machine-b","level":3,"title":"Step 2: Copy the Key to Machine B","text":"
Use any secure transfer method. The key is always at ~/.ctx/.ctx.key:
# scp - create the target directory first\nssh user@machine-b \"mkdir -p ~/.ctx && chmod 700 ~/.ctx\"\nscp ~/.ctx/.ctx.key user@machine-b:~/.ctx/.ctx.key\n\n# Or use a password manager, USB drive, etc.\n
Set permissions on Machine B:
chmod 600 ~/.ctx/.ctx.key\n
Secure the Transfer
The key is a raw 256-bit AES key. Anyone with the key can decrypt the scratchpad. Use an encrypted channel (SSH, password manager, vault).
Never paste it in plaintext over email or chat.
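One way to confirm a copy arrived intact is to compare digests on both machines. A minimal sketch, demonstrated on a throwaway file (in practice, run the same checks against ~/.ctx/.ctx.key):

```shell
# Stand-in for the real key: 32 random bytes, i.e. a raw 256-bit key.
head -c 32 /dev/urandom > /tmp/demo.key
chmod 600 /tmp/demo.key

# Sanity-check the size: a raw 256-bit key is exactly 32 bytes.
wc -c < /tmp/demo.key

# Run on both machines after copying; the digests must match.
sha256sum /tmp/demo.key | cut -d' ' -f1
```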
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-3-normal-pushpull-workflow","level":3,"title":"Step 3: Normal Push/Pull Workflow","text":"
The encrypted file is committed, so standard git sync works:
# Machine A: add entries and push\nctx pad add \"staging API key: sk-test-abc123\"\ngit add .context/scratchpad.enc\ngit commit -m \"Update scratchpad\"\ngit push\n\n# Machine B: pull and read\ngit pull\nctx pad\n# 1. staging API key: sk-test-abc123\n
Both machines have the same key, so both can decrypt the same .enc file.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-4-read-and-write-from-either-machine","level":3,"title":"Step 4: Read and Write from Either Machine","text":"
Once the key is distributed, all ctx pad commands work identically on both machines. Entries added on Machine A are visible on Machine B after a git pull, and vice versa.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#step-5-handle-merge-conflicts","level":3,"title":"Step 5: Handle Merge Conflicts","text":"
If both machines add entries between syncs, pulling will create a merge conflict on .context/scratchpad.enc. Git cannot merge binary (encrypted) content automatically.
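ctx does not require it, but you can make git's handling explicit by marking the file as binary in .gitattributes (a standard git mechanism, not a ctx feature); git will then never attempt a text merge or print ciphertext diffs:

```
# .gitattributes
.context/scratchpad.enc binary
```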
The fastest approach is ctx pad merge: It reads both conflict sides, deduplicates, and writes the union:
# Extract theirs to a temp file, then merge it in\ngit show :3:.context/scratchpad.enc > /tmp/theirs.enc\ngit checkout --ours .context/scratchpad.enc\nctx pad merge /tmp/theirs.enc\n\n# Done: Commit the resolved scratchpad:\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
Alternatively, use ctx pad resolve to inspect both sides manually:
ctx pad resolve\n# === Ours (this machine) ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n#\n# === Theirs (incoming) ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n
Then reconstruct the merged scratchpad:
# Start fresh with all entries from both sides\nctx pad add \"staging API key: sk-test-abc123\"\nctx pad add \"check DNS after deploy\"\nctx pad add \"new endpoint: api.example.com/v2\"\n\n# Mark the conflict resolved\ngit add .context/scratchpad.enc\ngit commit -m \"Resolve scratchpad merge conflict\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#merge-conflict-walkthrough","level":2,"title":"Merge Conflict Walkthrough","text":"
Here's a full scenario showing how conflicts arise and how to resolve them:
1. Both machines start in sync (1 entry):
Machine A: 1. staging API key: sk-test-abc123\nMachine B: 1. staging API key: sk-test-abc123\n
2. Both add entries independently:
Machine A adds: \"check DNS after deploy\"\nMachine B adds: \"new endpoint: api.example.com/v2\"\n
3. Machine A pushes first. Machine B pulls and gets a conflict:
git pull\n# CONFLICT (content): Merge conflict in .context/scratchpad.enc\n
4. Machine B runs ctx pad resolve:
ctx pad resolve\n# === Ours ===\n# 1. staging API key: sk-test-abc123\n# 2. new endpoint: api.example.com/v2\n#\n# === Theirs ===\n# 1. staging API key: sk-test-abc123\n# 2. check DNS after deploy\n
5. Rebuild with entries from both sides and commit:
# Clear and rebuild (or use the skill to guide you)\nctx pad add \"staging API key: sk-test-abc123\"\nctx pad add \"check DNS after deploy\"\nctx pad add \"new endpoint: api.example.com/v2\"\n\ngit add .context/scratchpad.enc\ngit commit -m \"Merge scratchpad: keep entries from both machines\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#conversational-approach","level":3,"title":"Conversational Approach","text":"
When working with an AI assistant, you can resolve conflicts naturally:
You: \"I have a scratchpad merge conflict. Can you resolve it?\"\n\nAgent: \"Let me extract theirs and merge it in.\"\n [runs git show :3:.context/scratchpad.enc > /tmp/theirs.enc]\n [runs git checkout --ours .context/scratchpad.enc]\n [runs ctx pad merge /tmp/theirs.enc]\n \"Merged 2 new entries (1 duplicate skipped). Want me to\n commit the resolution?\"\n
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#tips","level":2,"title":"Tips","text":"
Back up the key: If you lose it, you lose access to all encrypted entries. Store a copy in your password manager.
One key per project: Each ctx init generates a unique key. Don't reuse keys across projects.
Keys work in worktrees: Because the key lives at ~/.ctx/.ctx.key (outside the project), git worktrees on the same machine share the key automatically. No special setup needed.
Plaintext fallback for non-sensitive projects: If encryption adds friction and you have nothing sensitive, set scratchpad_encrypt: false in .ctxrc. Merge conflicts become trivial text merges.
Never commit the key: The key is stored outside the project at ~/.ctx/.ctx.key and should never be copied into the repository.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#next-up","level":2,"title":"Next Up","text":"
Hook Output Patterns →: Choose the right output pattern for your Claude Code hooks.
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-sync/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, when to use scratchpad vs context files
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
","path":["Recipes","Knowledge and Tasks","Syncing Scratchpad Notes Across Machines"],"tags":[]},{"location":"recipes/scratchpad-with-claude/","level":1,"title":"Using the Scratchpad","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-problem","level":2,"title":"The Problem","text":"
During a session you accumulate quick notes, reminders, intermediate values, and sometimes sensitive tokens. They don't fit TASKS.md (not work items) or DECISIONS.md (not decisions). They don't have the structured fields that LEARNINGS.md requires.
Without somewhere to put them, they get lost between sessions.
How do you capture working memory that persists across sessions without polluting your structured context files?
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tldr","level":2,"title":"TL;DR","text":"
ctx pad add \"check DNS propagation after deploy\"\nctx pad # list entries\nctx pad show 1 # print entry (pipe-friendly)\n
Entries are encrypted at rest and travel with git.
Use the /ctx-pad skill to manage entries from inside your AI session.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx pad CLI command List all scratchpad entries ctx pad show N CLI command Output raw text of entry N (pipe-friendly) ctx pad add CLI command Add a new entry ctx pad edit CLI command Replace, append to, or prepend to an entry ctx pad add --file CLI command Ingest a file as a blob entry ctx pad show N --out CLI command Extract a blob entry to a file ctx pad rm CLI command Remove an entry ctx pad mv CLI command Reorder entries ctx pad import CLI command Bulk-import lines from a file (or stdin) ctx pad export CLI command Export all blob entries to a directory /ctx-pad Skill Natural language interface to all pad commands","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-1-add-a-note-naturally","level":3,"title":"Step 1: Add a Note Naturally","text":"
You don't need to remember any syntax. Just tell your assistant what to jot down:
You: \"jot down: check DNS propagation after deploy\"\n\nAgent: \"Added to scratchpad:\n 1. check DNS propagation after deploy\"\n
Behind the scenes, the agent runs ctx pad add \"check DNS propagation after deploy\".
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-2-view-your-scratchpad","level":3,"title":"Step 2: View Your Scratchpad","text":"
You: \"show my scratchpad\"\n\nAgent: \"Your scratchpad has 3 entries:\n 1. check DNS propagation after deploy\n 2. staging API endpoint: api.staging.example.com/v2\n 3. retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-3-edit-an-entry","level":3,"title":"Step 3: Edit an Entry","text":"
You: \"update entry 2 to include the new endpoint\"\n\nAgent: \"Updated entry 2:\n was: staging API endpoint: api.staging.example.com/v2\n now: staging API endpoint: api.prod.example.com/v2\"\n
For partial edits, the agent can use --append or --prepend:
You: \"append the port number to entry 2\"\n\nAgent: \"Updated entry 2:\n staging API endpoint: api.prod.example.com/v2 :8443\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-4-compose-entries","level":3,"title":"Step 4: Compose Entries","text":"
Use ctx pad show N to pipe entry content into other operations:
You: \"append the content of entry 3 to entry 1\"\n\nAgent: [runs ctx pad edit 1 --append \"$(ctx pad show 3)\"]\n \"Done. Entry 1 is now:\n check DNS propagation after deploy retry limit should be 5, not 3\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-5-reorder-by-priority","level":3,"title":"Step 5: Reorder by Priority","text":"
You: \"move entry 4 to the top\"\n\nAgent: [runs ctx pad mv 4 1]\n \"Moved entry 4 to position 1. Scratchpad reordered.\"\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-6-clean-up","level":3,"title":"Step 6: Clean Up","text":"
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-7-store-a-file-as-a-blob","level":3,"title":"Step 7: Store a File as a Blob","text":"
The scratchpad can hold small files (up to 64 KB) as encrypted blob entries. The file is base64-encoded and stored alongside a label you provide:
# Ingest a file: the first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# List shows the label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-8-extract-a-blob","level":3,"title":"Step 8: Extract a Blob","text":"
Use show --out to write the decoded file back to disk:
# Write blob entry to a file\nctx pad show 2 --out ./recovered-deploy.yaml\n\n# Or print to stdout (for piping)\nctx pad show 2 | head -5\n
Blob entries are encrypted identically to text entries: They're just base64-encoded before encryption. The --out flag decodes and writes the raw bytes.
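The base64 step can be illustrated with plain shell. This is a conceptual sketch only (ctx does the encoding internally; the file names are hypothetical):

```shell
# A small file round-trips through base64 losslessly, which is how
# blob entries are represented before encryption.
printf 'replicas: 3\n' > /tmp/deploy.yaml
base64 < /tmp/deploy.yaml > /tmp/deploy.b64
base64 -d < /tmp/deploy.b64 > /tmp/recovered.yaml
cmp /tmp/deploy.yaml /tmp/recovered.yaml && echo "round-trip OK"
```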
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-9-bulk-import-notes","level":3,"title":"Step 9: Bulk Import Notes","text":"
When you have a file with many notes (one per line), import them in bulk instead of adding one at a time:
# Import from a file: Each non-empty line becomes an entry\nctx pad import notes.txt\n\n# Or pipe from stdin\ngrep TODO *.go | ctx pad import -\n
All entries are written in a single encrypt/write cycle, regardless of how many lines the file contains.
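To preview how many entries an import would create, count the non-empty lines first (a sketch with a hypothetical notes file):

```shell
# Three notes separated by a blank line.
printf 'check DNS after deploy\n\nrotate staging token\nupdate docs\n' > /tmp/notes.txt

# Non-empty lines become entries; the blank line is skipped.
grep -c . /tmp/notes.txt   # prints 3
```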
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#step-10-export-blobs-to-disk","level":3,"title":"Step 10: Export Blobs to Disk","text":"
Export all blob entries to a directory as individual files. Each blob's label becomes the filename:
# Export to a directory (created if needed)\nctx pad export ./ideas\n\n# Preview what would be exported\nctx pad export --dry-run ./ideas\n\n# Force overwrite existing files\nctx pad export --force ./backup\n
When a file already exists, a Unix timestamp is prepended to the filename to avoid collisions. Use --force to overwrite instead.
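The collision rule can be sketched as follows. The exact filename format ctx uses is an assumption here; the sketch only illustrates the idea of prepending epoch seconds when the target exists:

```shell
mkdir -p /tmp/ideas
touch /tmp/ideas/deploy-config.yaml      # simulate an existing export

name="deploy-config.yaml"
if [ -e "/tmp/ideas/$name" ]; then
    name="$(date +%s)-$name"             # prepend a Unix timestamp
fi
echo "$name"
```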
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#using-ctx-pad-in-a-session","level":2,"title":"Using /ctx-pad in a Session","text":"
Invoke the /ctx-pad skill first, then describe what you want in natural language. Without the skill prefix, the agent may route your request to TASKS.md or another context file instead of the scratchpad.
You: /ctx-pad jot down: check DNS after deploy\nYou: /ctx-pad show my scratchpad\nYou: /ctx-pad delete entry 3\n
Once the skill is active, it translates intent into commands:
You say (after /ctx-pad) What the agent does \"jot down: check DNS after deploy\" ctx pad add \"check DNS after deploy\" \"remember this: retry limit is 5\" ctx pad add \"retry limit is 5\" \"show my scratchpad\" / \"what's on my pad\" ctx pad \"show me entry 3\" ctx pad show 3 \"delete the third one\" / \"remove entry 3\" ctx pad rm 3 \"change entry 2 to ...\" ctx pad edit 2 \"new text\" \"append ' +important' to entry 3\" ctx pad edit 3 --append \" +important\" \"prepend 'URGENT:' to entry 1\" ctx pad edit 1 --prepend \"URGENT: \" \"prioritize entry 4\" / \"move to the top\" ctx pad mv 4 1 \"import my notes from notes.txt\" ctx pad import notes.txt \"export all blobs to ./ideas\" ctx pad export ./ideas
When in Doubt, Use the CLI Directly
The ctx pad commands work the same whether you run them yourself or let the skill invoke them.
If the agent misroutes a request, fall back to ctx pad add \"...\" in your terminal.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#when-to-use-scratchpad-vs-context-files","level":2,"title":"When to Use Scratchpad vs Context Files","text":"Situation Use Temporary reminders (\"check X after deploy\") Scratchpad Session-start reminders (\"remind me next session\") ctx remind Working values during debugging (ports, endpoints, counts) Scratchpad Sensitive tokens or API keys (short-term storage) Scratchpad Quick notes that don't fit anywhere else Scratchpad Work items with completion tracking TASKS.md Trade-offs between alternatives with rationale DECISIONS.md Reusable lessons with context/lesson/application LEARNINGS.md Codified patterns and standards CONVENTIONS.md
Decision Guide
If it has structured fields (context, rationale, lesson, application), it belongs in a context file like DECISIONS.md or LEARNINGS.md.
If it's a work item you'll mark done, it belongs in TASKS.md.
If you want a message relayed VERBATIM at the next session start, it belongs in ctx remind.
If it's a quick note, reminder, or working value (especially if it's sensitive or ephemeral) it belongs on the scratchpad.
Scratchpad Is Not a Junk Drawer
The scratchpad is for working memory, not long-term storage.
If a note is still relevant after several sessions, promote it:
A persistent reminder becomes a task, a recurring value becomes a convention, a hard-won insight becomes a learning.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#tips","level":2,"title":"Tips","text":"
Entries persist across sessions: The scratchpad is committed (encrypted) to git, so entries survive session boundaries. Pick up where you left off.
Entries are numbered and reorderable: Use ctx pad mv to put high-priority items at the top.
ctx pad show N enables unix piping: Output raw entry text with no numbering prefix. Compose with --append, --prepend, or other shell tools.
Never mention the key file contents to the AI: The agent knows how to use ctx pad commands but should never read or print the encryption key (~/.ctx/.ctx.key) directly.
Encryption is transparent: You interact with plaintext; the encryption/decryption happens automatically on every read/write.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#next-up","level":2,"title":"Next Up","text":"
Syncing Scratchpad Notes Across Machines →: Distribute encryption keys and scratchpad data across environments.
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/scratchpad-with-claude/#see-also","level":2,"title":"See Also","text":"
Scratchpad: feature overview, all commands, encryption details, plaintext override
Persisting Decisions, Learnings, and Conventions: for structured knowledge that outlives the scratchpad
The Complete Session: full session lifecycle showing how the scratchpad fits into the broader workflow
","path":["Recipes","Knowledge and Tasks","Using the Scratchpad"],"tags":[]},{"location":"recipes/session-archaeology/","level":1,"title":"Browsing and Enriching Past Sessions","text":"","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-problem","level":2,"title":"The Problem","text":"
After weeks of AI-assisted development you have dozens of sessions scattered across JSONL files in ~/.claude/projects/. Finding the session where you debugged the Redis connection pool, or remembering what you decided about the caching strategy three Tuesdays ago, often means grepping raw JSON.
There is no table of contents, no search, and no summaries.
This recipe shows how to turn that raw session history into a browsable, searchable, and enriched journal site you can navigate in your browser.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tldr","level":2,"title":"TL;DR","text":"
Export and Generate
ctx journal import --all\nctx journal site --serve\n
Enrich
/ctx-journal-enrich-all\n
Rebuild
ctx journal site --serve\n
Read on for what each stage does and why.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx journal source Command List parsed sessions with metadata ctx journal source --show Command Inspect a specific session in detail ctx journal import Command Import sessions to editable journal Markdown ctx journal site Command Generate a static site from journal entries ctx journal obsidian Command Generate an Obsidian vault from journal entries ctx serve Command Serve any zensical directory (default: journal) /ctx-history Skill Browse sessions inside your AI assistant /ctx-journal-enrich Skill Add frontmatter metadata to a single entry /ctx-journal-enrich-all Skill Full pipeline: import if needed, then batch-enrich","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-workflow","level":2,"title":"The Workflow","text":"
The session journal follows a four-stage pipeline.
Each stage is idempotent and safe to re-run:
By default, each stage skips entries that have already been processed.
import -> enrich -> rebuild (site or obsidian)\n
Stage Tool What it does Skips if Where Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) CLI or agent Enrich /ctx-journal-enrich-all Adds frontmatter, summaries, topic tags Frontmatter already present Agent only Rebuild ctx journal site --build Generates browsable static HTML N/A CLI only Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks N/A CLI only
Where Do You Run Each Stage?
Import (Steps 1 to 3) works equally well from the terminal or inside your AI assistant via /ctx-history. The CLI is fine here: the agent adds no special intelligence; it just runs the same command.
Enrich (Step 4) requires the agent: it reads conversation content and produces structured metadata.
Rebuild and serve (Step 5) is a terminal operation that starts a long-running server.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-1-list-your-sessions","level":3,"title":"Step 1: List Your Sessions","text":"
Start by listing the sessions that exist for the current project with ctx journal source.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-2-inspect-a-specific-session","level":3,"title":"Step 2: Inspect a Specific Session","text":"
Before exporting everything, inspect a single session to see its metadata and conversation summary:
ctx journal source --show --latest\n
Or look up a specific session by passing its slug, partial ID, or UUID to the same --show flag.
Add --full to see the complete message content instead of the summary view:
ctx journal source --show --latest --full\n
This is useful for checking what happened before deciding whether to export and enrich it.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-3-import-sessions-to-the-journal","level":3,"title":"Step 3: Import Sessions to the Journal","text":"
Import converts raw session data into editable Markdown files in .context/journal/:
# Import all sessions from the current project\nctx journal import --all\n\n# Import a single session\nctx journal import gleaming-wobbling-sutherland\n\n# Include sessions from all projects\nctx journal import --all --all-projects\n
--keep-frontmatter=false Discards Enrichments
--keep-frontmatter=false discards enriched YAML frontmatter during regeneration.
Back up your journal before using this flag.
Each imported file contains session metadata (date, time, duration, model, project, git branch), a tool usage summary, and the full conversation transcript.
Re-importing is safe. Running ctx journal import --all only imports new sessions: Existing files are never touched. Use --dry-run to preview what would be imported without writing anything.
To re-import existing files (e.g., after a format improvement), use --regenerate: Conversation content is regenerated while preserving any YAML frontmatter you or the enrichment skill has added. You'll be prompted before any files are overwritten.
--regenerate Replaces the Markdown Body
--regenerate preserves YAML frontmatter but replaces the entire Markdown body with freshly generated content from the source JSONL.
If you manually edited the conversation transcript (added notes, redacted sensitive content, restructured sections), those edits will be lost.
BACK UP YOUR JOURNAL FIRST.
To protect entries you've hand-edited, you can explicitly lock them:
ctx journal lock <pattern>\n
Locked entries are always skipped, regardless of flags.
If you prefer to add locked: true directly in frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json:
ctx journal sync\n
See ctx journal lock --help and ctx journal sync --help for details.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-4-enrich-with-metadata","level":3,"title":"Step 4: Enrich with Metadata","text":"
Raw imports have timestamps and transcripts but lack the semantic metadata that makes sessions searchable: topics, technology tags, outcome status, and summaries. The /ctx-journal-enrich* skills add this structured frontmatter.
Locked entries are skipped by enrichment skills, just as they are by import. Lock entries you want to protect before running batch enrichment.
Batch enrichment (recommended):
/ctx-journal-enrich-all\n
The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.
It shows you a grouped summary before applying changes so you can scan quickly rather than reviewing one by one.
For large backlogs (20+ entries), the skill can spawn subagents to process entries in parallel.
The skill also generates a summary and can extract decisions, learnings, and tasks mentioned during the session.
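The result can be pictured as frontmatter along these lines (an illustrative sketch: the field names follow this guide's descriptions, but exact keys and values are assumptions):

```yaml
# Hypothetical enriched frontmatter for a journal entry.
title: "Refactor middleware pipeline"
topics: [middleware, caching]
technologies: [go, redis]
outcome: completed          # outcome status; value is hypothetical
summary: "Refactored the middleware pipeline and added Redis caching."
locked: true                # optional: protects the entry from regeneration
```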
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#step-5-generate-and-serve-the-site","level":3,"title":"Step 5: Generate and Serve the Site","text":"
With imported and enriched journal files, generate the static site:
# Generate site structure only\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate, build, and serve locally\nctx journal site --serve\n
Then open http://localhost:8000 to browse.
The site includes a date-sorted index, individual session pages with full conversations, search (press /), dark mode, and enriched titles in the navigation when frontmatter exists.
You can also serve an already-generated site with ctx serve (serve-only, no regeneration).
The site generator requires zensical (pipx install zensical).
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#where-does-agent-add-value","level":2,"title":"Where Does Agent Add Value?","text":"
Export, list, and show are mechanical: The agent runs the same CLI commands you would, so you can stay in your terminal for those.
The agent earns its keep in enrich. It reads conversation content, understands what happened, and produces structured metadata. That is agent work, not CLI work.
You can also ask your agent to browse sessions conversationally instead of remembering flags:
What did we work on last week?\nShow me the session about Redis.\nImport everything to the journal.\n
This is convenient but not required: ctx journal source gives you the same inventory.
Where the agent genuinely helps is chaining the pipeline:
You: What happened last Tuesday?\nAgent: Last Tuesday you worked on two sessions:\n - bright-dancing-hopper (2h 10m): refactored the middleware\n pipeline and added Redis caching\n - quiet-flowing-dijkstra (18m): quick fix for a nil pointer\n in the config loader\n Want me to export and enrich them?\nYou: Yes, do it.\nAgent: Exports both, enriches, then proposes frontmatter.\n
The value is staying in one context while the agent runs import → enrich without you manually switching tools.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A typical pipeline from raw sessions to a browsable site:
# Terminal: import and generate\nctx journal import --all\nctx journal site --serve\n
# AI assistant: enrich\n/ctx-journal-enrich-all\n
# Terminal: rebuild with enrichments\nctx journal site --serve\n
If your project includes Makefile.ctx (deployed by ctx init), use make journal to combine import and rebuild stages. Then enrich inside Claude Code, then make journal again to pick up enrichments.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#session-retention-and-cleanup","level":2,"title":"Session Retention and Cleanup","text":"
Claude Code does not keep JSONL transcripts forever. Understanding its cleanup behavior helps you avoid losing session history.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#default-behavior","level":3,"title":"Default Behavior","text":"
Claude Code retains session transcripts for approximately 30 days. After that, JSONL files are automatically deleted during cleanup. Once deleted, ctx journal can no longer see those sessions: the data is gone.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#the-cleanupperioddays-setting","level":3,"title":"The cleanupPeriodDays Setting","text":"
Claude Code exposes a cleanupPeriodDays setting in its configuration (~/.claude/settings.json) that controls retention:
cleanupPeriodDays values: 30 (default) deletes transcripts older than 30 days; 60, 90, etc. extend the retention window; 0 disables writing new transcripts entirely (not "keep forever").
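For example, to extend retention to 90 days, set the value in ~/.claude/settings.json (shown as a minimal file for clarity; your real settings.json will contain other keys):

```json
{
  "cleanupPeriodDays": 90
}
```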
Setting cleanupPeriodDays to 0
Setting this to 0 does not mean \"never delete.\" It disables transcript creation altogether. No new JSONL files are written, which means ctx journal sees nothing new. This is rarely what you want.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#why-journal-import-matters","level":3,"title":"Why Journal Import Matters","text":"
The journal import pipeline (Steps 1-4 above) is your archival mechanism. Imported Markdown files in .context/journal/ persist independently of Claude Code's cleanup cycle. Even after the source JSONL files are deleted, your journal entries remain.
Recommendation: import regularly (weekly, or after any session worth revisiting). A quick ctx journal import --all takes seconds and ensures nothing falls through the 30-day window.
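If you would rather not rely on memory, a scheduled job is one option. A hypothetical crontab entry (the project path is a placeholder, and this assumes ctx is on cron's PATH):

```
# Every Monday at 09:00: import any new sessions from this project
0 9 * * 1 cd /path/to/project && ctx journal import --all
```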
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#quick-archival-checklist","level":3,"title":"Quick Archival Checklist","text":"
Run ctx journal import --all at least weekly
Enrich high-value sessions with /ctx-journal-enrich before the details fade from your own memory
Lock enriched entries (ctx journal lock <pattern>) to protect them from accidental regeneration
Rebuild the journal site periodically to keep it current
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#tips","level":2,"title":"Tips","text":"
Start with /ctx-history inside your AI assistant. If you want to quickly check what happened in a recent session without leaving your editor, /ctx-history lets you browse interactively without importing.
Large sessions may be split automatically. Sessions with 200+ messages can be split into multiple parts (session-abc123.md, session-abc123-p2.md, session-abc123-p3.md) with navigation links between them. The site generator can handle this.
Suggestion sessions can be separated. Claude Code can generate short suggestion sessions for autocomplete. These may appear under a separate section in the site index, so they do not clutter your main session list.
Your agent is a good session browser. You do not need to remember slugs, dates, or flags. Ask \"what did we do yesterday?\" or \"find the session about Redis\" and it can map the question to recall commands.
Journal Files Are Sensitive
Journal files MUST be .gitignored.
Session transcripts can contain sensitive data such as file contents, commands, error messages with stack traces, and potentially API keys.
Add .context/journal/, .context/journal-site/, and .context/journal-obsidian/ to your .gitignore.
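In .gitignore, that looks like:

```
# Session journal output may contain secrets; never commit it
.context/journal/
.context/journal-site/
.context/journal-obsidian/
```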
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#next-up","level":2,"title":"Next Up","text":"
Persisting Decisions, Learnings, and Conventions →: Record decisions, learnings, and conventions so they survive across sessions.
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-archaeology/#see-also","level":2,"title":"See Also","text":"
The Complete Session: where session saving fits in the daily workflow
Turning Activity into Content: generating blog posts from session history
Session Journal: full documentation of the journal system
CLI Reference: ctx journal: all journal subcommands and flags
CLI Reference: ctx serve: serve-only (no regeneration)
Context Files: the .context/ directory structure
","path":["Recipes","Sessions","Browsing and Enriching Past Sessions"],"tags":[]},{"location":"recipes/session-ceremonies/","level":1,"title":"Session Ceremonies","text":"","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#the-problem","level":2,"title":"The Problem","text":"
Sessions have two critical moments: the start and the end.
At the start, you need the agent to load context and confirm it knows what is going on.
At the end, you need to capture whatever the session produced before the conversation disappears.
Most ctx skills work conversationally: \"jot down: check DNS after deploy\" is as good as /ctx-pad add \"check DNS after deploy\". But session boundaries are different. They are well-defined moments with specific requirements, and partial execution is costly.
If the agent only half-loads context at the start, it works from stale assumptions. If it only half-persists at the end, learnings and decisions are lost.
This Is One of the Few Times Being Explicit Matters
Session ceremonies are the two bookend skills that mark these boundaries.
They are the exception to the conversational rule:
Invoke /ctx-remember and /ctx-wrap-up explicitly as slash commands.
Most ctx skills encourage natural language. These two are different:
Well-defined moments: Sessions have clear boundaries. A slash command marks the boundary unambiguously.
Ambiguity risk: \"Do you remember?\" could mean many things. /ctx-remember means exactly one thing: load context and present a structured readback.
Completeness: Conversational triggers risk partial execution. The agent might load some files but skip the session history, or persist one learning but forget to check for uncommitted changes. The slash command runs the full ceremony.
Muscle memory: Typing /ctx-remember at session start and /ctx-wrap-up at session end becomes a habit, like opening and closing braces.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose /ctx-remember Skill Load context and present structured readback /ctx-wrap-up Skill Gather session signal, propose and persist context /ctx-commit Skill Commit with context capture (offered by wrap-up) ctx agent CLI Load token-budgeted context packet ctx journal source CLI List recent sessions ctx add CLI Persist learnings, decisions, conventions, tasks","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#session-start-ctx-remember","level":2,"title":"Session Start: /ctx-remember","text":"
Invoke at the beginning of every session:
/ctx-remember\n
The skill silently:
Loads the context packet via ctx agent --budget 4000
Reads TASKS.md, DECISIONS.md, LEARNINGS.md
Checks recent sessions via ctx journal source --limit 3
Then presents a structured readback with four sections:
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 relevant decisions or learnings
Next step: suggestion or question about what to focus on
The readback should feel like recall, not a file system tour. If the agent says \"Let me check if there are files...\" instead of a confident summary, the skill is not working correctly.
What About 'do you remember?'
The conversational trigger still works. But /ctx-remember guarantees the full ceremony runs:
After persisting, the skill marks the session as wrapped up via ctx system mark-wrapped-up. This suppresses context checkpoint nudges for 2 hours so the wrap-up ceremony itself does not trigger noisy reminders.
If there are uncommitted changes, offers to run /ctx-commit. Does not auto-commit.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#when-to-skip","level":2,"title":"When to Skip","text":"
Not every session needs ceremonies.
Skip /ctx-remember when:
You are doing a quick one-off lookup (reading a file, checking a value)
Context was already loaded this session via /ctx-agent
You are continuing immediately after a previous session and context is still fresh
Skip /ctx-wrap-up when:
Nothing meaningful happened (only read files, answered a question)
You already persisted everything manually during the session
The session was trivial (typo fix, quick config change)
A good heuristic: if the session produced something a future session should know about, run /ctx-wrap-up. If not, just close.
# Session start\n/ctx-remember\n\n# ... do work ...\n\n# Session end\n/ctx-wrap-up\n
That is the complete ceremony. Two commands, bookending your session.
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-ceremonies/#relationship-to-other-skills","level":2,"title":"Relationship to Other Skills","text":"Skill When Purpose /ctx-remember Session start Load and confirm context /ctx-reflect Mid-session breakpoints Checkpoint at milestones /ctx-wrap-up Session end Full session review and persist /ctx-commit After completing work Commit with context capture
/ctx-reflect is for mid-session checkpoints. /ctx-wrap-up is for end-of-session: it is more thorough, covers the full session arc, and includes the commit offer. If you already ran /ctx-reflect recently, /ctx-wrap-up avoids proposing the same candidates again.
Make it a habit: The value of ceremonies compounds over sessions. Each /ctx-wrap-up makes the next /ctx-remember richer.
Trust the candidates: The agent scans the full conversation. It often catches learnings you forgot about.
Edit before approving: If a proposed candidate is close but not quite right, tell the agent what to change. Do not settle for a vague learning when a precise one is possible.
Do not force empty ceremonies: If /ctx-wrap-up finds nothing worth persisting, that is fine. A session that only read files and answered questions does not need artificial learnings.
The Complete Session: the full session workflow that ceremonies bookend
Persisting Decisions, Learnings, and Conventions: deep dive on what gets persisted during wrap-up
Detecting and Fixing Drift: keeping context files accurate between ceremonies
Pausing Context Hooks: skip ceremonies entirely for quick tasks that don't need them
","path":["Recipes","Sessions","Session Ceremonies"],"tags":[]},{"location":"recipes/session-changes/","level":1,"title":"Reviewing Session Changes","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-changed-while-you-were-away","level":2,"title":"What Changed While You Were Away?","text":"
Between sessions, teammates commit code, context files get updated, and decisions pile up. ctx change gives you a single-command summary of everything that moved since your last session.
# Auto-detects your last session and shows what changed\nctx change\n\n# Check what changed in the last 48 hours\nctx change --since 48h\n\n# Check since a specific date\nctx change --since 2026-03-10\n
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#how-reference-time-works","level":2,"title":"How Reference Time Works","text":"
ctx change needs a reference point to compare against. It tries these sources in order:
--since flag: explicit duration (24h, 72h) or date (2026-03-10, RFC3339 timestamp)
Session markers: ctx-loaded-* files in .context/state/; picks the second-most-recent (your previous session start)
Event log: last context-load-gate event from .context/state/events.jsonl
Fallback: 24 hours ago
The marker-based detection means ctx change usually just works without any flags: it knows when you last loaded context and shows everything after that.
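The resolution order can be sketched as follows. This is illustrative pseudologic based on the list above, not ctx's actual implementation:

```python
from datetime import datetime, timedelta

def resolve_reference_time(since=None, marker_times=None,
                           last_event_time=None, now=None):
    """Illustrative sketch of ctx change's reference-time resolution order."""
    now = now or datetime.now()
    if since is not None:                 # 1. explicit --since flag wins
        return since
    markers = sorted(marker_times or [])  # 2. session markers: pick the
    if len(markers) >= 2:                 #    second-most-recent (your
        return markers[-2]                #    previous session start)
    if last_event_time is not None:       # 3. last context-load-gate event
        return last_event_time
    return now - timedelta(hours=24)      # 4. fallback: 24 hours ago

# Two markers exist: the previous session start is chosen
t1 = datetime(2026, 3, 9, 9, 0)
t2 = datetime(2026, 3, 10, 9, 0)
print(resolve_reference_time(marker_times=[t1, t2]))  # → 2026-03-09 09:00:00
```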
","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#what-it-reports","level":2,"title":"What It Reports","text":"","path":["Reviewing Session Changes"],"tags":[]},{"location":"recipes/session-changes/#context-file-changes","level":3,"title":"Context file changes","text":"
Any .md file in .context/ modified after the reference time:
No changes? If nothing shows up, the reference time might be wrong. Use --since 48h to widen the window.
Works without git. Context file changes are detected by filesystem mtime, not git. Code changes require git.
Hook integration. The context-load-gate hook writes the session marker that ctx change uses for auto-detection. If you're not using the ctx plugin, markers won't exist and it falls back to the event log or 24h window.
\"What does a full ctx session look like from start to finish?\"
You have ctx installed and your .context/ directory initialized, but the individual commands and skills feel disconnected.
How do they fit together into a coherent workflow?
This recipe walks through a complete session, from opening your editor to persisting context before you close it, so you can see how each piece connects.
Load: /ctx-remember: load context, get structured readback.
Orient: /ctx-status: check file health and token usage.
Pick: /ctx-next: choose what to work on.
Work: implement, test, iterate.
Commit: /ctx-commit: commit and capture decisions/learnings.
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony.
Read on for the full walkthrough with examples.
What is a Readback?
A readback is a structured summary where the agent plays back what it knows:
last session,
active tasks,
recent decisions.
This way, you can confirm it loaded the right context.
The term \"readback\" comes from aviation, where pilots repeat instructions back to air traffic control to confirm they heard correctly.
Same idea in ctx: The agent tells you what it \"thinks\" is going on, and you correct anything that's off before the work begins.
Last session: topic, date, what was accomplished
Active work: pending and in-progress tasks
Recent context: 1-2 decisions or learnings that matter now
Next step: suggestion or question about what to focus on
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx status CLI command Quick health check on context files ctx agent CLI command Load token-budgeted context packet ctx journal source CLI command List previous sessions ctx journal source --show CLI command Inspect a specific session in detail /ctx-remember Skill Recall project context with structured readback /ctx-agent Skill Load full context packet inside the assistant /ctx-status Skill Show context summary with commentary /ctx-next Skill Suggest what to work on with rationale /ctx-commit Skill Commit code and prompt for context capture /ctx-reflect Skill Structured reflection checkpoint /ctx-history Skill Browse session history inside your AI assistant","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#the-workflow","level":2,"title":"The Workflow","text":"
The session lifecycle has seven steps. You will not always use every step (for example, a quick bugfix might skip reflection, and a research session might skip committing), but the full arc looks like this:
Load context > Orient > Pick a Task > Work > Commit > Reflect > Wrap up
Start every session by loading what you know. The fastest way is a single prompt:
Do you remember what we were working on?\n
This triggers the /ctx-remember skill. Behind the scenes, the assistant runs ctx agent --budget 4000, reads the files listed in the context packet (TASKS.md, DECISIONS.md, LEARNINGS.md, CONVENTIONS.md), checks ctx journal source --limit 3 for recent sessions, and then presents a structured readback.
The readback should feel like a recall, not a file system tour. If you see \"Let me check if there are files...\" instead of a confident summary, the context system is not loaded properly.
As an alternative, if you want raw data instead of a readback, run ctx status in your terminal or invoke /ctx-status for a summarized health check showing file counts, token usage, and recent activity.
After loading context, verify you understand the current state.
/ctx-status\n
The status output shows which context files are populated, how many tokens they consume, and which files were recently modified. Look for:
Empty core files: TASKS.md or CONVENTIONS.md with no content means the context is sparse
High token count (over 30k): the context is bloated and might need ctx compact
No recent activity: files may be stale and need updating
If the status looks healthy and the readback from Step 1 gave you enough context, skip ahead.
If something seems off (stale tasks, missing decisions...), spend a minute reading the relevant file before proceeding.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
With context loaded, choose a task. You can pick one yourself, or ask the assistant to recommend:
/ctx-next\n
The skill reads TASKS.md, checks recent sessions to avoid re-suggesting completed work, and presents 1-3 ranked recommendations with rationale.
It prioritizes in-progress tasks over new starts (finishing is better than starting), respects explicit priority tags, and favors momentum: continuing a thread from a recent session is cheaper than context-switching.
If you already know what you want to work on, state it directly:
Let's work on the session enrichment feature.\n
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-4-do-the-work","level":3,"title":"Step 4: Do the Work","text":"
This is the main body of the session: write code, fix bugs, refactor, research: whatever the task requires.
During this phase, a few ctx-specific patterns help:
Check decisions before choosing: when you face a design choice, check if a prior decision covers it.
Is this consistent with our decisions?\n
Constrain scope: keep the assistant focused on the task at hand.
Only change files in internal/cli/session/. Nothing else.\n
Use /ctx-implement for multistep plans: if the task has multiple steps, this skill executes them one at a time with build/test verification between each step.
Context monitoring runs automatically: the check-context-size hook monitors context capacity at adaptive intervals. Early in a session it stays silent. After 16+ prompts it starts monitoring, and past 30 prompts it checks frequently. If context capacity is running high, it will suggest saving unsaved work. No manual invocation is needed.
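The adaptive cadence might be sketched like this. The specific check intervals are invented placeholders; only the 16-prompt and 30-prompt thresholds come from the description above:

```python
def check_interval(prompt_count):
    """Sketch of the adaptive monitoring cadence (interval values are illustrative)."""
    if prompt_count < 16:
        return None   # early session: stay silent
    if prompt_count <= 30:
        return 8      # monitoring phase: check occasionally (placeholder value)
    return 3          # late session: check frequently (placeholder value)

print(check_interval(10), check_interval(20), check_interval(40))  # → None 8 3
```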
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-5-commit-with-context","level":3,"title":"Step 5: Commit with Context","text":"
When the work is ready, use the context-aware commit instead of raw git commit:
/ctx-commit\n
The Agent May Recommend Committing
You do not always need to invoke /ctx-commit explicitly.
After a commit, the agent may proactively offer to capture context:
\"We just made a trade-off there. Want me to record it as a decision?\"
This is normal: The Agent Playbook encourages persisting at milestones, and a commit is a natural milestone.
As an alternative, you can ask the assistant \"can we commit this?\" and it will pick up the /ctx-commit skill for you.
The skill runs a pre-commit build check (for Go projects, go build), reviews the staged changes, drafts a commit message focused on \"why\" rather than \"what\", and then commits.
After the commit succeeds, it prompts you:
**Any context to capture?**\n\n- **Decision**: Did you make a design choice or trade-off?\n- **Learning**: Did you hit a gotcha or discover something?\n- **Neither**: No context to capture; we are done.\n
If you made a decision, the skill records it with ctx add decision. If you learned something, it records it with ctx add learning including context, lesson, and application fields. This is the bridge between committing code and remembering why the code looks the way it does.
If source code changed in areas that affect documentation, the skill also offers to check for doc drift.
At natural breakpoints (after finishing a feature, resolving a complex bug, or before switching tasks) pause to reflect:
/ctx-reflect\n
Agents Reflect at Milestones
Agents often reflect without explicit invocation.
After completing a significant piece of work, the agent may naturally surface items worth persisting:
\"We discovered that $PPID resolves differently inside hooks. Should I save that as a learning?\"
This is the agent following the Work-Reflect-Persist cycle from the Agent Playbook.
You do not need to say /ctx-reflect for this to happen; the agent treats milestones as reflection triggers on its own.
The skill works through a checklist: learnings discovered, decisions made, tasks completed or created, and whether there are items worth persisting. It then presents a summary with specific items to persist, each with the exact command to run:
I would suggest persisting:\n\n- **Learning**: `$PPID` in PreToolUse hooks resolves to the Claude Code PID\n `ctx add learning --context \"...\" --lesson \"...\" --application \"...\"`\n- **Task**: mark \"Add cooldown to ctx agent\" as done\n- **Decision**: tombstone-based cooldown with 10m default\n `ctx add decision \"...\"`\n\nWant me to persist any of these?\n
The skill asks before persisting anything. You choose what to keep.
Not every commit needs reflection. A typo fix does not. But when you have been debugging for an hour and finally understand the root cause, that is worth a reflection checkpoint.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#step-7-persist-before-ending","level":3,"title":"Step 7: Persist Before Ending","text":"
Before ending the session, run the wrap-up ceremony to capture outstanding learnings, decisions, conventions, and tasks:
/ctx-wrap-up\n
Ceremony Skills
/ctx-remember at session start and /ctx-wrap-up at session end are ceremony skills: Invoke them as explicit slash commands for completeness, not conversationally.
See Session Ceremonies for details on why these two skills are the exception to the conversational rule.
The skill gathers signal from git diff, recent commits, and the conversation, then proposes structured candidates for your approval. After you select which to keep, it persists them via ctx add and offers /ctx-commit if uncommitted changes remain.
Session transcripts are automatically captured by Claude Code and can be browsed later with ctx journal source and ctx journal source --show.
Last session (Feb 7, caching-layer): we implemented the in-memory cache with\nTTL expiration and wrote tests for the eviction policy.\n\nYou have 3 pending tasks; the biggest one is adding Redis as a backend.\n\nRecent decision:\nwe chose LRU over LFU because the access pattern is mostly recency-based.\n\nWant to continue with the Redis backend, or tackle something else?\n
You: Let's work on the Redis caching backend.
Agent (picks up the task, begins implementation):
Got it. I see the task in TASKS.md: \"Add Redis backend for cache layer.\"\n\nI will start with the connection pool setup. Only touching\n`internal/cache/redis/` for now.\n
(The agent works... writes code, runs tests, iterates.)
Agent (after implementing the connection pool and choosing a retry strategy):
The connection pool is working. I went with exponential backoff with jitter\nfor retries instead of fixed intervals; it handles thundering herd better\nunder reconnection storms. That is a design choice worth recording. \n\nWant me to save it as a decision?\n
You: Yes, go ahead.
Agent:
Done. Added to DECISIONS.md: \"Exponential backoff with jitter for Redis\nreconnection, to avoid thundering herd on pool recovery.\"\n
(More work happens... the agent implements the cache read/write paths.)
You: What else did we learn during this session?
Agent:
A few things worth capturing:\n\n* **Learning**: Redis `SET` with `NX` and `PX` flags handles atomic\n set-if-not-exists with TTL in a single call. No need for a separate\n `EXPIRE` command. We initially did it in two calls and hit a race condition.\n* **Learning**: The `go-redis` library's `PoolSize` default is 10 per CPU.\n For our workload, explicit sizing at 20 was more predictable.\n* **Task update**: The connection pool subtask is done; the read/write path\n is implemented but needs integration tests.\n\nWant me to persist these?\n
You: Yes. And let's wrap up.
Agent (reflects and persists... all in one flow):
Persisted:\n\n* 2 learnings added to LEARNINGS.md\n* Marked \"connection pool setup\" done in TASKS.md\n* Added \"write integration tests for Redis cache\" as a new task\n\nContext is up to date. You are good to close.\n
Notice What Happened
In the above workflow, the user never typed /ctx-reflect or ctx add learning.
The agent moved through Load, Orient, Pick, Work, Commit, and Reflect driven by natural conversation.
\"Let's wrap up\" was enough to trigger the full reflect-and-persist flow.
The agent surfaced persist-worthy items at milestones (after a design choice, after discovering a gotcha) without waiting to be asked.
This is the intended experience.
The commands and skills still exist for when you want precise control, but the agent is a proactive partner in the lifecycle, not a passive executor of slash commands.
","path":["Recipes","Sessions","The Complete Session"],"tags":[]},{"location":"recipes/session-lifecycle/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
Quick-reference checklist for a complete session:
Load: /ctx-remember: load context and confirm readback
Orient: /ctx-status: check file health and token usage
Pick: /ctx-next: choose what to work on
Work: implement, test, iterate (scope with \"only change X\")
Commit: /ctx-commit: commit and capture decisions/learnings
Reflect: /ctx-reflect: identify what to persist (at milestones)
Wrap up: /ctx-wrap-up: end-of-session ceremony
Conversational equivalents: you can drive the same lifecycle with plain language:
Step Slash command Natural language Load /ctx-remember \"Do you remember?\" / \"What were we working on?\" Orient /ctx-status \"How's our context looking?\" Pick /ctx-next \"What should we work on?\" / \"Let's do the caching task\" Work -- \"Only change files in internal/cache/\" Commit /ctx-commit \"Commit this\" / \"Ship it\" Reflect /ctx-reflect \"What did we learn?\" / (agent offers at milestones) Wrap up /ctx-wrap-up (use the slash command for completeness)
The agent understands both columns.
In practice, most sessions use a mix:
Explicit Commands when you want precision;
Natural Language when you want flow and agentic autonomy.
The agent will also initiate steps on its own (particularly \"Reflect\") when it recognizes a milestone.
Short sessions (quick bugfix) might only use: Load, Work, Commit.
Long sessions should Reflect after each major milestone and persist learnings and decisions before ending.
Persist early if context is running low. A hook monitors context capacity and notifies you when it gets high, but do not wait for the notification. If you have been working for a while and have unpersisted learnings, persist proactively.
Browse previous sessions by topic. If you need context from a prior session, ctx journal source --show auth will match by keyword. You do not need to remember the exact date or slug.
Reflection is optional but valuable. You can skip /ctx-reflect for small changes, but always persist learnings and decisions before ending a session where you did meaningful work. These are what the next session loads.
Let the hook handle context loading. The PreToolUse hook runs ctx agent automatically with a cooldown, so context loads on first tool use without you asking. The /ctx-remember prompt at session start is for your benefit (to get a readback), not because the assistant needs it.
The agent is a proactive partner, not a passive tool. A ctx-aware agent follows the Agent Playbook: it watches for milestones (completed tasks, design decisions, discovered gotchas) and offers to persist them without being asked. If you finish a tricky debugging session, it may say \"That root cause is worth saving as a learning. Want me to record it?\" before you think to ask. This is by design.
Not every session needs the full ceremony. Quick investigations, one-off questions, small fixes unrelated to active project work: These tasks don't benefit from persistence nudges, ceremony reminders, or knowledge checks. Every hook still fires, consuming tokens and attention on work that won't produce learnings or decisions worth capturing.
","path":["Recipes","Sessions","Pausing Context Hooks"],"tags":[]},{"location":"recipes/session-pause/#tldr","level":2,"title":"TL;DR","text":"Command What it does ctx pause or /ctx-pause Silence all nudge hooks for this session ctx resume or /ctx-resume Restore normal hook behavior
Pause is session-scoped: It only affects the current session. Other sessions (same project, different terminal) are unaffected.
","path":["Recipes","Sessions","Pausing Context Hooks"],"tags":[]},{"location":"recipes/session-pause/#what-still-fires","level":2,"title":"What Still Fires","text":"
Security hooks always run, even when paused:
block-non-path-ctx: prevents ./ctx invocations
block-dangerous-commands: blocks sudo, force push, etc.
# 1. Session starts: Context loads normally.\n\n# 2. You realize this is a quick task\nctx pause\n\n# 3. Work without interruption: hooks are silent\n\n# 4. Session evolves into real work? Resume first\nctx resume\n\n# 5. Now wrap up normally\n# /ctx-wrap-up\n
Resume before wrapping up. If your quick task turns into real work, resume hooks before running /ctx-wrap-up. The wrap-up ceremony needs active hooks to capture learnings properly.
Initial context load is unaffected. The ~8k token startup injection (CLAUDE.md, playbook, constitution) happens before any command runs. Pause only affects hooks that fire during the session.
Use for quick investigations. Debugging a stack trace? Checking a git log? Answering a colleague's question? Pause, do the work, close the session. No ceremony needed.
Don't use for real work. If you're implementing features, fixing bugs, or making decisions: keep hooks active. The nudges exist to prevent context loss.
You're deep in a session and realize: \"I need to refactor the swagger definitions next time.\" You could add a task, but this isn't a work item: it's a note to future-you. You could jot it on the scratchpad, but scratchpad entries don't announce themselves.
How do you leave a message that your next session opens with?
Reminders surface automatically at session start: VERBATIM, every session, until you dismiss them.
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx remind CLI command Add a reminder (default action) ctx remind list CLI command Show all pending reminders ctx remind dismiss CLI command Remove a reminder by ID (or --all) /ctx-remind Skill Natural language interface to reminders","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-1-leave-a-reminder","level":3,"title":"Step 1: Leave a Reminder","text":"
Tell your agent what to remember, or run it directly:
You: \"remind me to refactor the swagger definitions\"\n\nAgent: [runs ctx remind \"refactor the swagger definitions\"]\n \"Reminder set:\n + [1] refactor the swagger definitions\"\n
Or from the terminal:
ctx remind \"refactor the swagger definitions\"\n
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-2-set-a-date-gate-optional","level":3,"title":"Step 2: Set a Date Gate (Optional)","text":"
If the reminder shouldn't fire until a specific date:
You: \"remind me to check the deploy logs after Tuesday\"\n\nAgent: [runs ctx remind \"check the deploy logs\" --after 2026-02-25]\n \"Reminder set:\n + [2] check the deploy logs (after 2026-02-25)\"\n
The reminder stays silent until that date, then fires every session.
The agent converts natural language dates (\"tomorrow\", \"next week\", \"after the release on Friday\") to YYYY-MM-DD. If it's ambiguous, it asks.
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#step-3-start-a-new-session","level":3,"title":"Step 3: Start a New Session","text":"
Next session, the reminder appears automatically before anything else:
[1] refactor the swagger definitions\n [3] review auth token expiry logic\n [4] check deploy logs (after 2026-02-25, not yet due)\n
Date-gated reminders that haven't reached their date show (not yet due).
","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#using-ctx-remind-in-a-session","level":2,"title":"Using /ctx-remind in a Session","text":"
Invoke the /ctx-remind skill, then describe what you want:
You: /ctx-remind remind me to update the API docs\nYou: /ctx-remind what reminders do I have?\nYou: /ctx-remind dismiss reminder 3\n
You say (after /ctx-remind) What the agent does \"remind me to update the API docs\" ctx remind \"update the API docs\" \"remind me next week to check staging\" ctx remind \"check staging\" --after 2026-03-02 \"what reminders do I have?\" ctx remind list \"dismiss reminder 3\" ctx remind dismiss 3 \"clear all reminders\" ctx remind dismiss --all","path":["Recipes","Sessions","Session Reminders"],"tags":[]},{"location":"recipes/session-reminders/#reminders-vs-scratchpad-vs-tasks","level":2,"title":"Reminders vs Scratchpad vs Tasks","text":"You want to... Use Leave a note that announces itself next session ctx remind Jot down a quick value or sensitive token ctx pad Track work with status and completion TASKS.md Record a decision or lesson for all sessions Context files
Decision guide:
If it should announce itself at session start → ctx remind
If it's a quiet note you'll check manually → ctx pad
If it's a work item you'll mark done → TASKS.md
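The decision guide can be sketched as commands. Only `ctx remind` is documented on this page; the `ctx pad` and `ctx add task` argument forms are assumptions taken from the command tables elsewhere in these docs, so verify them with `--help` before relying on them:

```shell
# Hedged sketch: the three destinations from the decision guide.
# `ctx pad` and `ctx add task` argument forms are assumptions.
if command -v ctx >/dev/null 2>&1; then
  ctx remind "refactor the swagger definitions"    # announces itself next session
  ctx pad "staging API key rotated on 02-25"       # quiet note, checked manually
  ctx add task "Refactor the swagger definitions"  # tracked work item in TASKS.md
fi
```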
Reminders Are Sticky Notes, Not Tasks
A reminder has no status, no priority, no lifecycle. It's a message to \"future you\" that fires until dismissed.
Reminders fire every session: Unlike nudges (which throttle to once per day), reminders repeat until you dismiss them. This is intentional: You asked to be reminded.
Date gating is session-scoped, not clock-scoped: --after 2026-02-25 means \"don't show until sessions on or after Feb 25.\" It does not mean \"alarm at midnight on Feb 25.\"
The agent handles date parsing: Say \"next week\" or \"after Friday\": The agent converts it to YYYY-MM-DD. The CLI only accepts the explicit date format.
Reminders are committed to git: They travel with the repo. If you switch machines, your reminders follow.
IDs are never reused: After dismissing reminder 3, the next reminder gets ID 4 (or higher). No confusion from recycled numbers.
Every session creates tombstone files in .context/state/ - small markers that suppress repeat hook nudges (\"already checked context size\", \"already sent persistence reminder\"). Over days and weeks, these accumulate into hundreds of files from long-dead sessions.
The files are harmless individually, but the clutter makes it harder to reason about state, and stale global tombstones can suppress nudges across sessions entirely.
ctx system prune --dry-run # preview what would be removed\nctx system prune # prune files older than 7 days\nctx system prune --days 1 # more aggressive: keep only today\n
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#commands-used","level":2,"title":"Commands Used","text":"Tool Type Purpose ctx system prune Command Remove old per-session state files ctx status Command Quick health overview including state dir","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#understanding-state-files","level":2,"title":"Understanding State Files","text":"
State files fall into two categories:
Session-scoped (contain a UUID in the filename): Created per-session to suppress repeat nudges. Safe to prune once the session ends. Examples:
Global (no UUID): Persist across sessions. ctx system prune preserves these automatically. Some are legitimate state (events.jsonl, memory-import.json); others may be stale tombstones that need manual review.
ctx system prune # older than 7 days\nctx system prune --days 3 # older than 3 days\nctx system prune --days 1 # older than 1 day (aggressive)\n
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/state-maintenance/#step-3-review-global-files","level":3,"title":"Step 3: Review Global Files","text":"
After pruning, check what prune preserved:
ls .context/state/ | grep -v '[0-9a-f]\\{8\\}-[0-9a-f]\\{4\\}'\n
Legitimate global files (keep):
events.jsonl - event log
memory-import.json - import tracking state
Stale global tombstones (safe to delete):
Files like backup-reminded, ceremony-reminded, version-checked with no session UUID are one-shot markers. If they are from a previous session, they are stale and can be removed manually.
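The UUID pattern is the quickest way to separate global files from session tombstones. A sketch using the same grep as Step 3, under the assumption (stated in this recipe) that session-scoped files embed a session UUID in the filename:

```shell
# List state files WITHOUT a session UUID: global files plus any stale
# one-shot markers. Path and UUID shape are taken from this recipe.
STATE_DIR=".context/state"
ls "$STATE_DIR" 2>/dev/null | grep -vE '[0-9a-f]{8}-[0-9a-f]{4}' || true
# Review the output, then remove known-stale markers by name, e.g.:
# rm "$STATE_DIR/ceremony-reminded" "$STATE_DIR/version-checked"
```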
Pruning active sessions is safe but noisy: If you prune a file belonging to a still-running session, the corresponding hook will re-fire its nudge on the next prompt. Minor UX annoyance, not data loss.
No context files are stored in state: The state directory contains only tombstones, counters, and diagnostic data. Nothing in .context/state/ affects your decisions, learnings, tasks, or conventions.
Test artifacts sneak in: Files like context-check-statstest or heartbeat-unknown are artifacts from development or testing. They lack UUIDs so prune preserves them. Delete manually.
Detecting and Fixing Drift: broader context maintenance including drift detection and archival
Troubleshooting: diagnostic workflow using ctx doctor and event logs
CLI Reference: system: full flag documentation for ctx system prune and related commands
","path":["State Directory Maintenance"],"tags":[]},{"location":"recipes/system-hooks-audit/","level":1,"title":"Auditing System Hooks","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-problem","level":2,"title":"The Problem","text":"
ctx runs 14 system hooks behind the scenes: nudging your agent to persist context, warning about resource pressure, gating commits on QA. But these hooks are invisible by design. You never see them fire. You never know if they stopped working.
How do you verify your hooks are actually running, audit what they do, and get alerted when they go silent?
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tldr","level":2,"title":"TL;DR","text":"
ctx system check-resources # run a hook manually\nls -la .context/logs/ # check hook execution logs\nctx notify setup # get notified when hooks fire\n
Or ask your agent: \"Are our hooks running?\"
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx system <hook> CLI command Run a system hook manually ctx system resources CLI command Show system resource status ctx system stats CLI command Stream or dump per-session token stats ctx notify setup CLI command Configure webhook for audit trail ctx notify test CLI command Verify webhook delivery .ctxrcnotify.events Configuration Subscribe to relay for full hook audit .context/logs/ Log files Local hook execution ledger","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-are-system-hooks","level":2,"title":"What Are System Hooks?","text":"
System hooks are plumbing commands that ctx registers with your AI tool (Claude Code, Cursor, etc.) via the plugin's hooks.json. They fire automatically at specific events during your AI session:
Event When Hooks UserPromptSubmit Before the agent sees your prompt 10 check hooks + heartbeat PreToolUse Before the agent uses a tool block-non-path-ctx, qa-reminder PostToolUse After a tool call succeeds post-commit
You never run these manually. Your AI tool runs them for you: That's the point.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-complete-hook-catalog","level":2,"title":"The Complete Hook Catalog","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#prompt-time-checks-userpromptsubmit","level":3,"title":"Prompt-Time Checks (UserPromptSubmit)","text":"
These fire before every prompt, but most are throttled to avoid noise.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-context-size-context-capacity-warning","level":4,"title":"check-context-size: Context Capacity Warning","text":"
What: Adaptive prompt counter. Silent for the first 15 prompts, then nudges with increasing frequency (every 5th, then every 3rd).
Why: Long sessions lose coherence. The nudge reminds both you and the agent to persist context before the window fills up.
Output: VERBATIM relay box with prompt count.
┌─ Context Checkpoint (prompt #20) ────────────────\n│ This session is getting deep. Consider wrapping up\n│ soon. If there are unsaved learnings, decisions, or\n│ conventions, now is a good time to persist them.\n│ ⏱ Context window: ~45k tokens (~22% of 200k)\n└──────────────────────────────────────────────────\n
Stats: Every prompt records token usage to .context/state/stats-{session}.jsonl. Monitor live with ctx system stats --follow or query with ctx system stats --json. Stats are recorded even during wrap-up suppression (event: suppressed).
Billing guard: When billing_token_warn is set in .ctxrc, a one-shot warning fires if session tokens exceed the threshold. This warning is independent of all other triggers - it fires even during wrap-up suppression.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-persistence-context-staleness-nudge","level":4,"title":"check-persistence: Context Staleness Nudge","text":"
What: Tracks when .context/*.md files were last modified. If too many prompts pass without a write, nudges the agent to persist.
Why: Sessions produce insights that evaporate if not recorded. This catches the \"we talked about it but never wrote it down\" failure mode.
Output: VERBATIM relay after 20+ prompts without a context file change.
┌─ Persistence Checkpoint (prompt #20) ───────────\n│ No context files updated in 20+ prompts.\n│ Have you discovered learnings, made decisions,\n│ established conventions, or completed tasks\n│ worth persisting?\n│\n│ Run /ctx-wrap-up to capture session context.\n└──────────────────────────────────────────────────\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-ceremonies-session-ritual-adoption","level":4,"title":"check-ceremonies: Session Ritual Adoption","text":"
What: Scans your last 3 journal entries for /ctx-remember and /ctx-wrap-up usage. Nudges once per day if missing.
Why: Session ceremonies are the highest-leverage habit in ctx. This hook bootstraps the habit until it becomes automatic.
Output: Tailored nudge depending on which ceremony is missing.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-journal-unimported-session-reminder","level":4,"title":"check-journal: Unimported Session Reminder","text":"
What: Detects unimported Claude Code sessions and unenriched journal entries. Fires once per day.
Why: Exported sessions become searchable history. Unenriched entries lack metadata for filtering. Both decay in value over time.
Output: VERBATIM relay with counts and exact commands.
┌─ Journal Reminder ─────────────────────────────\n│ You have 3 new session(s) not yet exported.\n│ 5 existing entries need enrichment.\n│\n│ Export and enrich:\n│ ctx journal import --all\n│ /ctx-journal-enrich-all\n└────────────────────────────────────────────────\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-resources-system-resource-pressure","level":4,"title":"check-resources: System Resource Pressure","text":"
What: Monitors memory, swap, disk, and CPU load. Only fires at DANGER severity (memory >= 90%, swap >= 75%, disk >= 95%, load >= 1.5x CPU count).
Why: Resource exhaustion mid-session can corrupt work. This provides early warning to persist and exit.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-knowledge-knowledge-file-growth","level":4,"title":"check-knowledge: Knowledge File Growth","text":"
What: Counts entries in LEARNINGS.md, DECISIONS.md, and lines in CONVENTIONS.md. Fires once per day when thresholds are exceeded.
Why: Large knowledge files dilute agent context. 35 learnings compete for attention; 15 focused ones get applied. Thresholds are configurable in .ctxrc.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-version-binaryplugin-version-drift","level":4,"title":"check-version: Binary/Plugin Version Drift","text":"
What: Compares the ctx binary version against the plugin version. Fires once per day. Also checks encryption key age for rotation nudge.
Why: Version drift means hooks reference features the binary doesn't have. The key rotation nudge prevents indefinite key reuse.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-reminders-pending-reminder-relay","level":4,"title":"check-reminders: Pending Reminder Relay","text":"
What: Reads .context/reminders.json and surfaces any due reminders via VERBATIM relay. No throttle: fires every session until dismissed.
Why: Reminders are sticky notes to future-you. Unlike nudges (which throttle to once per day), reminders repeat deliberately until the user dismisses them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-freshness-technology-constant-staleness","level":4,"title":"check-freshness: Technology Constant Staleness","text":"
What: Stats the files listed under freshness_files in .ctxrc and warns if any haven't been modified in over 6 months. Daily throttle. Silent when no files are configured (opt-in via .ctxrc).
Why: Model capabilities evolve - token budgets, attention limits, and context window sizes that were accurate 6 months ago may no longer reflect best practices. This hook reminds you to review and touch the file to confirm values are still current.
Config (.ctxrc):
freshness_files:\n - path: config/thresholds.yaml\n desc: Model token limits and batch sizes\n review_url: https://docs.example.com/limits # optional\n
Each entry has a path (relative to project root), desc (what constants live there), and optional review_url (where to check current values). When review_url is set, the nudge includes \"Review against: {url}\". When absent, just \"Touch the file to mark it as reviewed.\"
Output: VERBATIM relay listing stale files, silent otherwise.
┌─ Technology Constants Stale ──────────────────────\n│ config/thresholds.yaml (210 days ago)\n│ - Model token limits and batch sizes\n│ Review against: https://docs.example.com/limits\n│ Touch each file to mark it as reviewed.\n└───────────────────────────────────────────────────\n
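Marking a file as reviewed is just an mtime update, as the nudge says. A minimal sketch using the example path from the config above; `stat -c %Y` is GNU coreutils (on macOS the equivalent is `stat -f %m`):

```shell
# Report a tracked file's age in days, then touch it to reset the
# staleness clock. Path is the example from the .ctxrc snippet above.
f="config/thresholds.yaml"
if [ -e "$f" ]; then
  age_days=$(( ( $(date +%s) - $(stat -c %Y "$f") ) / 86400 ))
  echo "$f last modified ${age_days} days ago"
  touch "$f"   # mark as reviewed
fi
```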
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#check-map-staleness-architecture-map-drift","level":4,"title":"check-map-staleness: Architecture Map Drift","text":"
What: Checks whether map-tracking.json is older than 30 days and there are commits touching internal/ since the last map refresh. Daily throttle prevents repeated nudges.
Why: Architecture documentation drifts silently as code evolves. This hook detects structural changes that the map hasn't caught up with and suggests running /ctx-architecture to refresh.
Output: VERBATIM relay when stale and modules changed, silent otherwise.
┌─ Architecture Map Stale ────────────────────────────\n│ ARCHITECTURE.md hasn't been refreshed since 2026-01-15\n│ and there are commits touching 12 modules.\n│ /ctx-architecture keeps architecture docs drift-free.\n│\n│ Want me to run /ctx-architecture to refresh?\n└─────────────────────────────────────────────────────\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#heartbeat-session-heartbeat-webhook","level":4,"title":"heartbeat: Session Heartbeat Webhook","text":"
What: Fires on every prompt. Sends a webhook notification with prompt count, session ID, context modification status, and token usage telemetry. Never produces stdout.
Why: Other hooks only send webhooks when they \"speak\" (nudge/relay). When silent, you have no visibility into session activity. The heartbeat provides a continuous session-alive signal with token consumption data for observability dashboards or liveness monitoring.
Token fields (tokens, context_window, usage_pct) are included when usage data is available from the session JSONL file.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tool-time-hooks-pretooluse-posttooluse","level":3,"title":"Tool-Time Hooks (PreToolUse / PostToolUse)","text":"","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#block-non-path-ctx-path-enforcement-hard-gate","level":4,"title":"block-non-path-ctx: PATH Enforcement (Hard Gate)","text":"
What: Blocks any Bash command that invokes ./ctx, ./dist/ctx, go run ./cmd/ctx, or an absolute path to ctx. Only PATH invocations are allowed.
Why: Enforces CONSTITUTION.md's invocation invariant. Running a dev-built binary in production context causes version confusion and silent behavior drift.
Output: Block response (prevents the tool call):
{\"decision\": \"block\", \"reason\": \"Use 'ctx' from PATH, not './ctx'...\"}\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#qa-reminder-pre-commit-qa-gate","level":4,"title":"qa-reminder: Pre-Commit QA Gate","text":"
What: Fires on every Edit tool use. Reminds the agent to lint and test the entire project before committing.
Why: Agents tend to \"I'll test later\" and then commit untested code. Repetition is intentional: the hook reinforces the habit on every edit, not just before commits.
Output: Agent directive with hard QA gate instructions.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#post-commit-context-capture-after-commit","level":4,"title":"post-commit: Context Capture After Commit","text":"
What: Fires after any git commit (excludes --amend). Prompts the agent to offer context capture (decision? learning?) and suggest running lints/tests before pushing.
Why: Commits are natural reflection points. The nudge converts mechanical git operations into context-capturing opportunities.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-the-local-event-log","level":2,"title":"Auditing Hooks via the Local Event Log","text":"
If you don't need an external audit trail, enable the local event log for a self-contained record of hook activity:
# .ctxrc\nevent_log: true\n
Once enabled, every hook that fires writes an entry to .context/state/events.jsonl. Query it with ctx system events:
ctx system events # last 50 events\nctx system events --hook qa-reminder # filter by hook\nctx system events --session <id> # filter by session\nctx system events --json | jq '.' # raw JSONL for processing\n
The event log is local, queryable, and doesn't require any external service. For a full diagnostic workflow combining event logs with structural health checks, see Troubleshooting.
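Because events.jsonl is plain JSONL, standard tools can summarize it too. A sketch assuming each event line carries a "hook" field; the exact schema isn't documented here, so inspect a line of your own log before relying on the field name:

```shell
# Count fired events per hook. The "hook" field name is an assumption;
# check your events.jsonl schema first.
grep -o '"hook":"[^"]*"' .context/state/events.jsonl 2>/dev/null \
  | sort | uniq -c | sort -rn
```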
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#auditing-hooks-via-webhooks","level":2,"title":"Auditing Hooks via Webhooks","text":"
The most powerful audit setup pipes all hook output to a webhook, giving you a real-time external record of what your agent is being told.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-1-set-up-the-webhook","level":3,"title":"Step 1: Set Up the Webhook","text":"
ctx notify setup\n# Enter your webhook URL (Slack, Discord, ntfy.sh, IFTTT, etc.)\n
See Webhook Notifications for service-specific setup.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-2-subscribe-to-relay-events","level":3,"title":"Step 2: Subscribe to relay Events","text":"
# .ctxrc\nnotify:\n events:\n - relay # all hook output: VERBATIM relays, directives, blocks\n - nudge # just the user-facing VERBATIM relays\n
The relay event fires for every hook that produces output. This includes:
Hook Event sent check-context-size relay + nudge check-persistence relay + nudge check-ceremonies relay + nudge check-journal relay + nudge check-resources relay + nudge check-knowledge relay + nudge check-version relay + nudge check-reminders relay + nudge check-freshness relay + nudge check-map-staleness relay + nudge heartbeat heartbeat only block-non-path-ctx relay only post-commit relay only qa-reminder relay only","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#step-3-cross-reference","level":3,"title":"Step 3: Cross-Reference","text":"
With relay enabled, your webhook receives a JSON payload every time a hook fires:
{\n \"event\": \"relay\",\n \"message\": \"check-persistence: No context updated in 20+ prompts\",\n \"session_id\": \"b854bd9c\",\n \"timestamp\": \"2026-02-22T14:30:00Z\",\n \"project\": \"my-project\"\n}\n
This creates an external audit trail independent of the agent. You can now cross-verify: did the agent actually relay the checkpoint the hook told it to relay?
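If your receiver appends each payload as one JSON line to a log file (an assumed setup; relay.log is a hypothetical filename), cross-referencing a session is a grep away. Field names and spacing follow the payload shown above; adjust the pattern if your receiver stores compact JSON:

```shell
# List relay messages recorded for one session. relay.log is hypothetical;
# the session_id and message fields match the payload example above.
grep '"session_id": "b854bd9c"' relay.log 2>/dev/null \
  | grep -o '"message": "[^"]*"' || true
```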
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#verifying-hooks-actually-fire","level":2,"title":"Verifying Hooks Actually Fire","text":"
Hooks are invisible. An invisible thing that breaks is indistinguishable from an invisible thing that never existed. Three verification methods, from simplest to most robust:
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-1-ask-the-agent","level":3,"title":"Method 1: Ask the Agent","text":"
The simplest check. After a few prompts into a session:
\"Did you receive any hook output this session? Print the last\ncontext checkpoint or persistence nudge you saw.\"\n
The agent should be able to recall recent hook output from its context window. If it says \"I haven't received any hook output\", either:
The hooks aren't firing (check installation);
The session is too short (hooks throttle early);
The hooks fired but the agent absorbed them silently.
Limitation: You are trusting the agent to report accurately. Agents sometimes confabulate or miss context. Use this as a quick smoke test, not definitive proof.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-2-check-the-webhook-trail","level":3,"title":"Method 2: Check the Webhook Trail","text":"
If you have relay events enabled, check your webhook receiver. Every hook that fires sends a timestamped notification. No notification = no fire.
This is the ground truth. The webhook is called directly by the ctx binary, not by the agent. The agent cannot fake, suppress, or modify webhook deliveries.
Compare what the webhook received against what the agent claims to have relayed. Discrepancies mean the agent is absorbing nudges instead of surfacing them.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#method-3-read-the-local-logs","level":3,"title":"Method 3: Read the Local Logs","text":"
Hooks that support logging write to .context/logs/.
Logs are append-only and written by the ctx binary, not the agent.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#detecting-silent-hook-failures","level":2,"title":"Detecting Silent Hook Failures","text":"
The hardest failure mode: hooks that stop firing without error. The plugin config changes, a binary update drops a hook, or a PATH issue silently breaks execution. Nothing errors: The hook just never runs.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#the-staleness-signal","level":3,"title":"The Staleness Signal","text":"
If .context/logs/check-context-size.log has no entries newer than 5 days but you've been running sessions daily, something is wrong. The absence of evidence is evidence of absence, but only if you control for inactivity.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#false-positive-protection","level":3,"title":"False Positive Protection","text":"
A naive \"hooks haven't fired in N days\" alert fires incorrectly when you simply haven't used ctx. The correct check needs two inputs:
Last hook fire time: from .context/logs/ or webhook history
Last session activity: from journal entries or ctx journal source
If sessions are happening but hooks aren't firing, that's a real problem. If neither sessions nor hooks are happening, that's a vacation.
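The two-input check can be sketched as shell. The hook log path comes from this page; the journal path is an assumption, and `stat -c %Y` is GNU coreutils:

```shell
# Alert only when sessions are recent but hook logs are stale.
mtime() { stat -c %Y "$1" 2>/dev/null || echo 0; }   # GNU stat; 0 if missing
now=$(date +%s)
hook_age=$(( (now - $(mtime ".context/logs/check-context-size.log")) / 86400 ))
sess_age=$(( (now - $(mtime ".context/journal")) / 86400 ))   # assumed path
if [ "$sess_age" -lt 5 ] && [ "$hook_age" -ge 5 ]; then
  echo "Sessions active but hooks silent for ${hook_age}d: investigate."
else
  echo "No drift signal (hooks: ${hook_age}d, sessions: ${sess_age}d)."
fi
```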
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#what-to-check","level":3,"title":"What to Check","text":"
When you suspect hooks aren't firing:
# 1. Verify the plugin is installed\nls ~/.claude/plugins/\n\n# 2. Check hook registration\ncat ~/.claude/plugins/ctx/hooks.json | head -20\n\n# 3. Run a hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-context-size\n\n# 4. Check for PATH issues\nwhich ctx\nctx --version\n
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#tips","level":2,"title":"Tips","text":"
Start with nudge, graduate to relay: The nudge event covers user-facing VERBATIM relays. Add relay when you want full visibility into agent directives and hard gates.
Webhooks are your trust anchor: The agent can ignore a nudge, but it can't suppress the webhook. If the webhook fired and the agent didn't relay, you have proof of a compliance gap.
Hooks are throttled by design: Most check hooks fire once per day or use adaptive frequency. Don't expect a notification every prompt: Silence usually means the throttle is working, not that the hook is broken.
Daily markers live in .context/state/: Throttle files are stored in .context/state/ alongside other project-scoped state. If you need to force a hook to re-fire during testing, delete the corresponding marker file.
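For example, to re-trigger a daily-throttled hook while testing, find its marker and delete it. Marker filenames vary; the example name below appears elsewhere on this page, so list the directory first:

```shell
# Find throttle markers, then delete the one for the hook under test.
ls .context/state 2>/dev/null | grep -iE 'reminded|checked' || true
# rm .context/state/ceremony-reminded    # example marker name from this page
```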
The QA reminder is intentionally noisy: Unlike other hooks, qa-reminder fires on every Edit call with no throttle. This is deliberate: Commit quality degrades when the reminder fades from salience.
Log files are safe to commit: .context/logs/ contains only timestamps, session IDs, and status keywords. No secrets, no code.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#next-up","level":2,"title":"Next Up","text":"
Detecting and Fixing Drift →: Keep context files accurate as your codebase evolves.
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/system-hooks-audit/#see-also","level":2,"title":"See Also","text":"
Troubleshooting: full diagnostic workflow using ctx doctor, event logs, and /ctx-doctor
Customizing Hook Messages: override what hooks say without changing what they do
Webhook Notifications: setting up and configuring the webhook system
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Detecting and Fixing Drift: structural checks that complement runtime hook auditing
CLI Reference: full ctx system command reference
","path":["Recipes","Hooks and Notifications","Auditing System Hooks"],"tags":[]},{"location":"recipes/task-management/","level":1,"title":"Tracking Work Across Sessions","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-problem","level":2,"title":"The Problem","text":"
You have work that spans multiple sessions. Tasks get added during one session, partially finished in another, and completed days later.
Without a system, follow-up items fall through the cracks, priorities drift, and you lose track of what was done versus what still needs doing. TASKS.md grows cluttered with completed checkboxes that obscure the remaining work.
How do you manage work items that span multiple sessions without losing context?
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tldr","level":2,"title":"TL;DR","text":"
ctx add task "..." --priority high # capture follow-up work\nctx task complete "..." # mark work done\nctx task archive # clear completed clutter\n
Read on for the full workflow and conversational patterns.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx add task Command Add a new task to TASKS.mdctx task complete Command Mark a task as done by number or text ctx task snapshot Command Create a point-in-time backup of TASKS.mdctx task archive Command Move completed tasks to archive file /ctx-add-task Skill AI-assisted task creation with validation /ctx-archive Skill AI-guided archival with safety checks /ctx-next Skill Pick what to work on based on priorities","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-1-add-tasks-with-priorities","level":3,"title":"Step 1: Add Tasks with Priorities","text":"
Every piece of follow-up work gets a task. Use ctx add task from the terminal or /ctx-add-task from your AI assistant. Tasks should start with a verb and be specific enough that someone unfamiliar with the session could act on them.
# High-priority bug found during code review\nctx add task \"Fix race condition in session cooldown\" --priority high\n\n# Medium-priority feature work\nctx add task \"Add --format json flag to ctx status for CI integration\" --priority medium\n\n# Low-priority cleanup\nctx add task \"Remove deprecated --raw flag from ctx load\" --priority low\n
The /ctx-add-task skill validates your task before recording it. It checks that the description is actionable, not a duplicate, and specific enough for someone else to pick up.
If you say \"fix the bug,\" it will ask you to clarify which bug and where.
Tasks Are Often Created Proactively
In practice, many tasks are created proactively by the agent rather than by explicit CLI commands.
After completing a feature, the agent will often identify follow-up work (tests, docs, edge cases, error handling) and offer to add them as tasks.
You do not need to dictate ctx add task commands; the agent picks up on work context and suggests tasks naturally.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-2-organize-with-phase-sections","level":3,"title":"Step 2: Organize with Phase Sections","text":"
Tasks live in phase sections inside TASKS.md.
Phases provide logical groupings that preserve order and enable replay.
A task does not move between sections. It stays in its phase permanently, and status is tracked via checkboxes and inline tags.
## Phase 1: Core CLI\n\n- [x] Implement ctx add command `#done:2026-02-01-143022`\n- [x] Implement ctx task complete command `#done:2026-02-03-091544`\n- [ ] Add --section flag to ctx add task `#priority:medium`\n\n## Phase 2: AI Integration\n\n- [ ] Implement ctx agent cooldown `#priority:high` `#in-progress`\n- [ ] Add ctx watch XML parsing `#priority:medium`\n - Blocked by: Need to finalize agent output format\n\n## Backlog\n\n- [ ] Performance optimization for large TASKS.md files `#priority:low`\n- [ ] Add metrics dashboard to ctx status `#priority:deferred`\n
Use --section when adding a task to a specific phase:
ctx add task \"Add ctx watch XML parsing\" --priority medium --section \\\n \"Phase 2: AI Integration\"\n
Without --section, the task is inserted before the first unchecked task in TASKS.md.
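The default insertion rule is easy to picture with plain text tools. Here is a sketch of the behavior described above (not ctx's actual implementation), using a sample file in place of the real TASKS.md:

```shell
# Sample TASKS.md stand-in
cat > /tmp/tasks-demo.md <<'EOF'
- [x] Implement ctx add command
- [x] Implement ctx task complete command
- [ ] Add --section flag to ctx add task
EOF

# Insert the new task immediately before the first unchecked ("- [ ]") line
awk -v task='- [ ] New task `#priority:high`' '
  !inserted && /^- \[ \]/ { print task; inserted = 1 }
  { print }
' /tmp/tasks-demo.md
```

The new task lands after the completed items and directly ahead of the first pending one, so fresh work surfaces at the top of the active queue.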
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-3-pick-what-to-work-on","level":3,"title":"Step 3: Pick What to Work On","text":"
At the start of a session, or after finishing a task, use /ctx-next to get prioritized recommendations.
The skill reads TASKS.md, checks recent sessions, and ranks candidates using explicit priority, blocking status, in-progress state, momentum from recent work, and phase order.
You can also ask naturally: \"what should we work on?\" or \"what's the highest priority right now?\"
/ctx-next\n
The output looks like this:
**1. Implement ctx agent cooldown** `#priority:high`\n\n Still in-progress from yesterday's session. The tombstone file approach is\n half-built. Finishing is cheaper than context-switching.\n\n**2. Add --section flag to ctx add task** `#priority:medium`\n\n Last Phase 1 item. Quick win that unblocks organized task entry.\n\n---\n\n*Based on 8 pending tasks across 3 phases.\n\nLast session: agent-cooldown (2026-02-06).*\n
In-progress tasks almost always come first:
Finishing existing work takes priority over starting new work.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-4-complete-tasks","level":3,"title":"Step 4: Complete Tasks","text":"
When a task is done, mark it complete by number or partial text match:
# By task number (as shown in TASKS.md)\nctx task complete 3\n\n# By partial text match\nctx task complete \"agent cooldown\"\n
The task's checkbox changes from [ ] to [x] and a #done timestamp is added. Tasks are never deleted: they stay in their phase section so history is preserved.
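The edit itself is mechanical. A sketch of the same transformation with sed (the task line and timestamp are illustrative, mirroring the tag format shown earlier):

```shell
line='- [ ] Implement ctx agent cooldown `#priority:high`'
stamp='2026-02-08-143000'

# Flip the checkbox, then append a #done tag, as ctx task complete does
completed=$(printf '%s' "$line" | sed 's/^- \[ \]/- [x]/')
printf '%s `#done:%s`\n' "$completed" "$stamp"
```

Because the line is amended rather than removed, grep on `#done:` still finds the full completion history later.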
Be Conversational
You rarely need to run ctx task complete yourself during an interactive session.
When you say something like \"the rate limiter is done\" or \"we finished that,\" the agent marks the task complete and moves on to suggesting what is next.
The CLI commands are most useful for manual housekeeping, scripted workflows, or when you want precision.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-5-snapshot-before-risky-changes","level":3,"title":"Step 5: Snapshot Before Risky Changes","text":"
Before a major refactor or any change that might break things, snapshot your current task state. This creates a copy of TASKS.md in .context/archive/ without modifying the original.
# Default snapshot\nctx task snapshot\n\n# Named snapshot (recommended before big changes)\nctx task snapshot \"before-refactor\"\n
This creates a file like .context/archive/tasks-before-refactor-2026-02-08-1430.md. If the refactor goes sideways and you need to confirm what the task state looked like before you started, the snapshot is there.
Snapshots are cheap: Take them before any change you might want to undo or review later.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#step-6-archive-when-tasksmd-gets-cluttered","level":3,"title":"Step 6: Archive When TASKS.md Gets Cluttered","text":"
After several sessions, TASKS.md accumulates completed tasks that make it hard to see what is still pending.
Use ctx task archive to move all [x] items to a timestamped archive file.
Start with a dry run to preview what will be moved:
ctx task archive --dry-run\n
Then archive:
ctx task archive\n
Completed tasks move to .context/archive/tasks-2026-02-08.md. Phase headers are preserved in the archive for traceability. Pending tasks ([ ]) remain in TASKS.md.
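The split can be previewed with grep alone. This is a sketch of the dry-run view over a sample file (the real command also carries phase headers into the archive):

```shell
# Sample TASKS.md stand-in with mixed completion state
cat > /tmp/tasks-archive-demo.md <<'EOF'
- [x] Implement ctx add command `#done:2026-02-01-143022`
- [ ] Add --section flag to ctx add task `#priority:medium`
- [x] Implement ctx task complete command `#done:2026-02-03-091544`
EOF

# What would move to .context/archive/
grep -F -- '- [x]' /tmp/tasks-archive-demo.md

# What would stay in TASKS.md
grep -F -- '- [ ]' /tmp/tasks-archive-demo.md
```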
The /ctx-archive skill adds two safety checks before archiving: it verifies that completed tasks are genuinely cluttering the view and that nothing was marked [x] prematurely.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#letting-the-agent-drive","level":2,"title":"Letting the Agent Drive","text":"
The steps above show the CLI commands for task management.
In practice, most task management happens conversationally:
An agent that has loaded the context files does not need you to type ctx add task. It tracks work naturally and offers the right operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#use-natural-language-instead-of-commands","level":3,"title":"Use Natural Language Instead of Commands","text":"
These conversational prompts replace explicit commands during interactive sessions:
Instead of typing... Just say... ctx add task \"Write tests for X\" \"We should add tests for this: track that?\" /ctx-next \"What should we work on?\" ctx task complete \"rate limiting\" \"The rate limiter is done, what's next?\" ctx task archive \"TASKS.md is getting long, can you clean it up?\" ctx add task ... && ctx add task ... \"Add follow-ups for what we just built.\"
The agent translates these into the right ctx operations behind the scenes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-creates-tasks-proactively","level":3,"title":"The Agent Creates Tasks Proactively","text":"
After completing a piece of work, the agent does not wait for you to dictate follow-up items. It identifies them from the work context and offers:
**Agent**: \"I've finished implementing the rate limiter middleware. I noticed\nthree follow-up items:\n\n1. Write integration tests for the rate limiter (the unit tests pass but\n there's no end-to-end coverage)\n2. Add rate limit headers to API responses (X-RateLimit-Remaining, etc.)\n3. Document the rate limit configuration in the ops runbook\n\nWant me to add these as tasks?\"\n
This is how an agent behaves once it has read the Agent Playbook.
The playbook's \"Work, Reflect, Persist\" cycle means the agent asks itself after every milestone: Did I just complete a task? Did new work emerge?
When the answer is yes, it acts.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#asking-the-agent-what-to-capture","level":3,"title":"Asking the Agent What to Capture","text":"
You don't always know what's worth persisting.
Instead of reviewing the session yourself and typing specific commands, ask the agent to do the thinking:
You say What happens \"What tasks came out of this session?\" Agent reviews the work, identifies follow-ups, and offers to add them \"Are there any tasks, learnings, or decisions worth adding?\" Agent scans the full session for all three types and proposes a batch \"What should we remember from today?\" Agent surfaces everything persist-worthy: not just tasks \"Anything we should track before we move on?\" Agent checks for loose ends specific to the work just completed
This is especially powerful after long or exploratory sessions where multiple threads were touched.
The agent has the full conversation in context and can spot follow-ups you might miss:
You: \"Are there any tasks, learnings, or decisions worth adding\n from this session?\"\n\nAgent: \"Yes. Here's what I'd suggest:\n\n Tasks:\n - Add retry logic to the webhook delivery (we discussed it when\n the timeout test failed but never tracked it)\n - Update the API docs to reflect the new rate limit headers\n\n Learnings:\n - net/http's default client has no timeout: we burned 15 minutes\n on that before adding an explicit 30s timeout\n\n Decisions:\n - We chose token-bucket over sliding-window for rate limiting\n because it handles bursts better and the library is simpler\n\n Want me to add all of these?\"\n\nYou: \"All of them.\"\n
The agent fills in structured fields from what it observed during the session. You get the same quality entries as if you typed verbose commands, without needing to remember what happened or how to format it.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#the-agent-completes-tasks-and-moves-on","level":3,"title":"The Agent Completes Tasks and Moves On","text":"
When you confirm that work is finished, or when the agent can see that it is, it marks the task done and suggests the next step:
You: \n\"The cooldown logic is working now.\"\n\nAgent: \n\"Marked 'Implement ctx agent cooldown' as done. Your highest-\npriority remaining task is 'Add --section flag to ctx add task'.\n\nIt's the last item in Phase 1 and a quick win. Want to start on that?\"\n
The agent chains completion into next-task selection.
One sentence from you triggers two operations.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#end-of-session-capture","level":3,"title":"End-of-Session Capture","text":"
At the end of a session, the agent reviews what happened and catches loose ends:
You: \n\"I think we're good for today.\"\n\nAgent: \n\"Before we wrap up: during the middleware work you mentioned\nwanting to add metrics for rate limit hits. That never made it into TASKS.md.\n\nWant me to add it as a task so it's there next session?\"\n
This is the proactive persistence model at work.
The agent monitors the conversation for mentioned-but-not-tracked work and surfaces it before context is lost.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#a-conversational-session-example","level":3,"title":"A Conversational Session Example","text":"
Here is what task management looks like in a real session when the agent is driving:
You: What's left to do?\nAgent: You have 5 pending tasks across 2 phases. The highest priority is\n \"Implement ctx agent cooldown\": it's marked in-progress from\n yesterday and half-built. Want to pick that up?\n\nYou: Yeah, let's finish it.\n [... 20 minutes of implementation work ...]\n\nYou: That's working. Tests pass.\nAgent: Nice. I'll mark \"Implement ctx agent cooldown\" as done.\n I noticed two things from the implementation:\n 1. The tombstone file cleanup should happen on a timer, not just\n at startup. Want me to add that as a task?\n 2. We learned that file locks don't work reliably on NFS. I'll\n add that to LEARNINGS.md.\n\nYou: Yes to both. What's next?\nAgent: Added the cleanup task as high priority. Next up is \"Add --section\n flag to ctx add task\": last item in Phase 1. It should be\n straightforward since the flag parsing pattern already exists\n in the codebase. Ready?\n
It's All Conversational
Notice what did not happen: The user never typed a ctx command.
The agent handled task completion, follow-up creation, learning capture, and next-task selection: all from natural conversation.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
# Add a task\nctx add task \"Implement rate limiting for API endpoints\" --priority high\n\n# Add to a specific phase\nctx add task \"Write integration tests for rate limiter\" --section \"Phase 2\"\n\n# See what to work on\n# (from AI assistant) /ctx-next\n\n# Mark done by text\nctx task complete \"rate limiting\"\n\n# Mark done by number\nctx task complete 5\n\n# Snapshot before a risky refactor\nctx task snapshot \"before-middleware-rewrite\"\n\n# Archive completed tasks when the list gets long\nctx task archive --dry-run # preview first\nctx task archive # then archive\n
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#tips","level":2,"title":"Tips","text":"
Start tasks with a verb: \"Add,\" \"Fix,\" \"Implement,\" \"Investigate\": not just a topic like \"Authentication.\"
Include the why in the task description. Future sessions lack the context of why you added the task. \"Add rate limiting\" is worse than \"Add rate limiting to prevent abuse on the public API after the load test showed 10x traffic spikes.\"
Use #in-progress sparingly. Only one or two tasks should carry this tag at a time. If everything is in-progress, nothing is.
Snapshot before, not after. The point of a snapshot is to capture the state before a change, not to celebrate what you just finished.
Archive regularly. Once completed tasks outnumber pending ones, it is time to archive. A clean TASKS.md helps both you and your AI assistant focus.
Never delete tasks. Mark them [x] (completed) or [-] (skipped with a reason). Deletion breaks the audit trail.
Trust the agent's task instincts. When the agent suggests follow-up items after completing work, it is drawing on the full context of what just happened.
Conversational prompts beat commands in interactive sessions. Saying \"what should we work on?\" is faster and more natural than running /ctx-next. Save explicit commands for scripts, CI, and unattended runs.
Let the agent chain operations. A single statement like \"that's done, what's next?\" can trigger completion, follow-up identification, and next-task selection in one flow.
Review proactive task suggestions before moving on. The best follow-ups come from items spotted in-context right after the work completes.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#next-up","level":2,"title":"Next Up","text":"
Using the Scratchpad →: Store short-lived sensitive notes in an encrypted scratchpad.
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/task-management/#see-also","level":2,"title":"See Also","text":"
The Complete Session: full session lifecycle including task management in context
Persisting Decisions, Learnings, and Conventions: capturing the \"why\" behind your work
Detecting and Fixing Drift: keeping TASKS.md accurate over time
CLI Reference: full documentation for ctx add, ctx task complete, ctx task
Context Files: TASKS.md: format and conventions for TASKS.md
","path":["Recipes","Knowledge and Tasks","Tracking Work Across Sessions"],"tags":[]},{"location":"recipes/troubleshooting/","level":1,"title":"Troubleshooting","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-problem","level":2,"title":"The Problem","text":"
Something isn't working: a hook isn't firing, nudges are too noisy, context seems stale, or the agent isn't following instructions. The information to diagnose it exists (across status, drift, event logs, hook config, and session history), but assembling it manually is tedious.
ctx doctor # structural health check\nctx system events --last 20 # recent hook activity\n# or ask: \"something seems off, can you diagnose?\"\n
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx doctor CLI command Structural health report ctx doctor --json CLI command Machine-readable health report ctx system events CLI command Query local event log /ctx-doctor Skill Agent-driven diagnosis with analysis","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#quick-check-ctx-doctor","level":3,"title":"Quick Check: ctx doctor","text":"
Run ctx doctor for an instant structural health report. It checks context initialization, required files, drift, hook configuration, event logging, webhooks, reminders, task completion ratio, and context token size: all in one pass:
ctx doctor\n
ctx doctor\n==========\n\nStructure\n ✓ Context initialized (.context/)\n ✓ Required files present (4/4)\n\nQuality\n ⚠ Drift: 2 warnings (stale path in ARCHITECTURE.md, high entry count in LEARNINGS.md)\n\nHooks\n ✓ hooks.json valid (14 hooks registered)\n ○ Event logging disabled (enable with event_log: true in .ctxrc)\n\nState\n ✓ No pending reminders\n ⚠ Task completion ratio high (18/22 = 82%): consider archiving\n\nSize\n ✓ Context size: ~4200 tokens (budget: 8000)\n\nSummary: 2 warnings, 0 errors\n
Warnings are non-critical but worth fixing. Errors need attention. Informational notes (○) flag optional features that aren't enabled.
For power users: ctx system events with filters gives direct access to the event log.
# Last 50 events (default)\nctx system events\n\n# Events from a specific session\nctx system events --session eb1dc9cd-0163-4853-89d0-785fbfaae3a6\n\n# Only QA reminder events\nctx system events --hook qa-reminder\n\n# Raw JSONL for jq processing\nctx system events --json | jq '.message'\n\n# Include rotated (older) events\nctx system events --all --last 100\n
Filters use AND logic: --hook qa-reminder --session abc123 returns only QA reminder events from that specific session.
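The same AND semantics can be reproduced on raw JSONL with jq. This sketch uses hypothetical event records; the .detail.hook path follows the jq examples above, and the field values are invented for illustration:

```shell
# Hypothetical event records in the JSONL shape consumed by --json
cat > /tmp/events-demo.jsonl <<'EOF'
{"session_id":"abc123","detail":{"hook":"qa-reminder"},"message":"QA nudge"}
{"session_id":"abc123","detail":{"hook":"check-persistence"},"message":"persist nudge"}
{"session_id":"def456","detail":{"hook":"qa-reminder"},"message":"QA nudge"}
EOF

# Both conditions must hold, like --hook qa-reminder --session abc123
jq -c 'select(.detail.hook == "qa-reminder" and .session_id == "abc123")' \
  /tmp/events-demo.jsonl
```

Only the first record survives: the second fails the hook filter and the third fails the session filter.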
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#common-problems","level":2,"title":"Common Problems","text":"","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#ctx-not-initialized","level":3,"title":"\"ctx: not initialized\"","text":"
Symptoms: Any ctx command fails with ctx: not initialized - run \"ctx init\" first.
Cause: You're running ctx in a directory without an initialized .context/ directory. This guard runs on all user-facing commands to prevent confusing downstream errors.
Fix:
ctx init # create .context/ with template files\nctx init --minimal # or just the essentials (CONSTITUTION, TASKS, DECISIONS)\n
Commands that work without initialization: ctx init, ctx setup, ctx doctor, and help-only grouping commands (ctx, ctx system).
Symptoms: No nudges appearing, webhook silent, event log shows no entries for the expected hook.
Diagnosis:
# 1. Check if ctx is installed and on PATH\nwhich ctx && ctx --version\n\n# 2. Check if the hook is registered\ngrep \"check-persistence\" ~/.claude/plugins/ctx/hooks.json\n\n# 3. Run the hook manually to see if it errors\necho '{\"session_id\":\"test\"}' | ctx system check-persistence\n\n# 4. Check event log for the hook (if enabled)\nctx system events --hook check-persistence\n
Common causes:
Plugin is not installed: run ctx init --claude to reinstall
PATH issue: the hook invokes ctx from PATH; ensure it resolves
Throttle active: most hooks fire once per day: check .context/state/ for daily marker files
Hook silenced: a custom message override may be an empty file: check ctx system message list for overrides
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#too-many-nudges","level":3,"title":"\"Too many nudges\"","text":"
Symptoms: The agent is overwhelmed with hook output. Context checkpoints, persistence reminders, and QA gates fire constantly.
Diagnosis:
# Check how often hooks fired recently\nctx system events --last 50\n\n# Count fires per hook\nctx system events --json | jq -r '.detail.hook // \"unknown\"' \\\n | sort | uniq -c | sort -rn\n
Common causes:
QA reminder is noisy by design: it fires on every Edit call with no throttle. This is intentional. If it's too much, silence it with an empty override: ctx system message edit qa-reminder gate, then empty the file
Long session: context checkpoint fires with increasing frequency after prompt 15. This is the system telling you the session is getting long: consider wrapping up
Short throttle window: if you deleted marker files in .context/state/, daily-throttled hooks will re-fire
Outdated Claude Code plugin: Update the plugin using Claude Code → /plugin → \"Marketplace\"
ctx version mismatch: Install the latest ctx version (build from source or download a release).
Symptoms: The agent references outdated information, paths that don't exist, or decisions that were reversed.
Diagnosis:
# Structural drift check\nctx drift\n\n# Full doctor check (includes drift + more)\nctx doctor\n\n# Check when context files were last modified\nctx status --verbose\n
Common causes:
Drift accumulated: stale path references in ARCHITECTURE.md or CONVENTIONS.md. Fix with ctx drift --fix or ask the agent to clean up.
Task backlog: too many completed tasks diluting active context. Archive with ctx task archive or ctx compact --archive.
Large context files: LEARNINGS.md with 40+ entries competes for attention. Consolidate with /ctx-consolidate.
Missing session ceremonies: if /ctx-remember and /ctx-wrap-up aren't being used, context doesn't get refreshed. See Session Ceremonies.
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/troubleshooting/#the-agent-isnt-following-instructions","level":3,"title":"\"The agent isn't following instructions\"","text":"
Symptoms: The agent ignores conventions, forgets decisions, or acts contrary to CONSTITUTION.md rules.
Diagnosis:
# Check context token size: Is it too large for the model?\nctx doctor --json | jq '.results[] | select(.name == \"context_size\")'\n\n# Check if context is actually being loaded\nctx system events --hook context-load-gate\n
Common causes:
Context too large: if total tokens exceed the model's effective attention, instructions get diluted. Check ctx doctor for the size check. Compact with ctx compact --archive.
Context not loading: if context-load-gate hasn't fired, the agent may not have received context. Verify the hook is registered.
Conflicting instructions: CONVENTIONS.md says one thing, AGENT_PLAYBOOK.md says another. Review both files for consistency.
Agent drift: the agent's behavior diverges from instructions over long sessions. This is normal. Use /ctx-reflect to re-anchor, or start a new session.
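To sanity-check the size finding yourself, the common ~4-characters-per-token heuristic gives a rough estimate. This is a sketch, not ctx's exact tokenizer, and the sample file stands in for your real .context/*.md files:

```shell
# Stand-in context file of known size (8000 bytes)
mkdir -p /tmp/ctx-size-demo
head -c 8000 /dev/zero | tr '\0' 'a' > /tmp/ctx-size-demo/LEARNINGS.md

# Rough token estimate: total bytes / 4
bytes=$(cat /tmp/ctx-size-demo/*.md | wc -c)
echo "~$((bytes / 4)) tokens"
```

An 8000-byte file comes out at roughly 2000 tokens, comfortably under the 8000-token budget shown in the doctor output earlier.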
Event logging (optional but recommended): event_log: true in .ctxrc
ctx initialized: ctx init
Event logging is not required for ctx doctor or /ctx-doctor to work. Both degrade gracefully: structural checks run regardless, and the skill notes when event data is unavailable.
Start with ctx doctor: It's the fastest way to get a comprehensive health picture. Save event log inspection for when you need to understand when and how often something happened.
Enable event logging early: The log is opt-in and low-cost (~250 bytes per event, 1MB rotation cap). Enable it before you need it: Diagnosing a problem without historical data is much harder.
Use the skill for correlation: ctx doctor tells you what is wrong. /ctx-doctor tells you why by correlating structural findings with event patterns. The agent can spot connections that individual commands miss.
Event log is gitignored: It's machine-local diagnostic data, not project context. Different machines produce different event streams.
Auditing System Hooks: the complete hook catalog and webhook-based audit trails
Detecting and Fixing Drift: structural and semantic drift detection and repair
Webhook Notifications: push notifications for hook activity
ctx doctor CLI: full command reference
ctx system events CLI: event log query reference
/ctx-doctor skill: agent-driven diagnosis
","path":["Recipes","Maintenance","Troubleshooting"],"tags":[]},{"location":"recipes/webhook-notifications/","level":1,"title":"Webhook Notifications","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-problem","level":2,"title":"The Problem","text":"
Your agent runs autonomously (loops, implements, releases) while you are away from the terminal. You have no way to know when it finishes, hits a limit, or when a hook fires a nudge.
How do you get notified about agent activity without watching the terminal?
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tldr","level":2,"title":"TL;DR","text":"
ctx notify setup # configure webhook URL (encrypted)\nctx notify test # verify delivery\n# Hooks auto-notify on: session-end, loop-iteration, resource-danger\n
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#commands-and-skills-used","level":2,"title":"Commands and Skills Used","text":"Tool Type Purpose ctx notify setup CLI command Configure and encrypt webhook URL ctx notify test CLI command Send a test notification ctx notify --event <name> \"msg\" CLI command Send a notification from scripts/skills .ctxrcnotify.events Configuration Filter which events reach your webhook","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-1-get-a-webhook-url","level":3,"title":"Step 1: Get a Webhook URL","text":"
Any service that accepts HTTP POST with JSON works. Common options:
Service How to get a URL IFTTT Create an applet with the \"Webhooks\" trigger Slack Create an Incoming Webhook Discord Channel Settings > Integrations > Webhooks ntfy.sh Use https://ntfy.sh/your-topic (no signup) Pushover Use API endpoint with your user key
The URL contains auth tokens. ctx encrypts it; it never appears in plaintext in your repo.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-2-configure-the-webhook","level":3,"title":"Step 2: Configure the Webhook","text":"
Run ctx notify setup. This encrypts the URL with AES-256-GCM using the same key as the scratchpad (~/.ctx/.ctx.key). The encrypted file (.context/.notify.enc) is safe to commit. The key lives outside the project and is never committed.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-3-test-it","level":3,"title":"Step 3: Test It","text":"
Run ctx notify test. If you see No webhook configured, run ctx notify setup first.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-4-configure-events","level":3,"title":"Step 4: Configure Events","text":"
Notifications are opt-in: no events are sent unless you configure an event list in .ctxrc:
# .ctxrc\nnotify:\n events:\n - loop # loop completion or max-iteration hit\n - nudge # VERBATIM relay hooks (context checkpoint, persistence, etc.)\n - relay # all hook output (verbose, for debugging)\n - heartbeat # every-prompt session-alive signal with metadata\n
Only listed events fire; any event type not in the list is silently dropped.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#step-5-use-in-your-own-skills","level":3,"title":"Step 5: Use in Your Own Skills","text":"
Add ctx notify calls to any skill or script:
# In a release skill\nctx notify --event release \"v1.2.0 released successfully\" 2>/dev/null || true\n\n# In a backup script\nctx notify --event backup \"Nightly backup completed\" 2>/dev/null || true\n
The 2>/dev/null || true suffix ensures the notification never breaks your script: If there's no webhook or the HTTP call fails, it's a silent no-op.
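The same fire-and-forget discipline can be sketched in plain Go. This is not ctx's implementation; the payload shape, timeout, and function name are illustrative assumptions:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// notify posts a JSON payload and swallows every failure, the Go
// analogue of appending `2>/dev/null || true` in shell: a missing
// or unreachable webhook must never break the caller. The return
// value reports delivery, but callers are free to ignore it.
func notify(url, event, msg string) bool {
	if url == "" {
		return false // no webhook configured: silent no-op
	}
	body := fmt.Sprintf(`{"event":%q,"message":%q}`, event, msg)
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Post(url, "application/json",
		bytes.NewBufferString(body))
	if err != nil {
		return false // HTTP error: ignored, no retry
	}
	_ = resp.Body.Close() // response body is never parsed
	return resp.StatusCode < 300
}

func main() {
	// Neither call can fail the program; each only reports false.
	fmt.Println(notify("", "release", "v1.2.0 released"))
	fmt.Println(notify("http://127.0.0.1:1", "backup", "done"))
}
```

Whatever the webhook does, the calling script's exit status is unaffected.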
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-types","level":2,"title":"Event Types","text":"
ctx fires these events automatically:
Event Source When loop Loop script Loop completes or hits max iterations nudge System hooks VERBATIM relay nudge is emitted (context checkpoint, persistence, ceremonies, journal, resources, knowledge, version) relay System hooks Any hook output (VERBATIM relays, agent directives, block responses) heartbeat System hook Every prompt: session-alive signal with prompt count and context modification status testctx notify test Manual test notification (custom) Your skills You wire ctx notify --event <name> in your own scripts
nudge vs relay: The nudge event fires only for VERBATIM relay hooks (the ones the agent is instructed to show verbatim). The relay event fires for all hook output: VERBATIM relays, agent directives, and hard gates. Subscribe to relay for debugging (\"did the agent get the post-commit nudge?\"), nudge for user-facing assurance (\"was the checkpoint emitted?\").
Webhooks as a Hook Audit Trail
Subscribe to relay events and you get an external record of every hook that fires, independent of the agent.
This lets you verify hooks are running and catch cases where the agent absorbs a nudge instead of surfacing it.
See Auditing System Hooks for the full workflow.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#payload-format","level":2,"title":"Payload Format","text":"
The detail field is a structured template reference containing the hook name, variant, and any template variables. This lets receivers filter by hook or variant without parsing rendered text. The field is omitted when no template reference applies (e.g. custom ctx notify calls).
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#heartbeat-payload","level":3,"title":"Heartbeat Payload","text":"
The heartbeat event fires on every prompt with session metadata and token usage telemetry:
The tokens, context_window, and usage_pct fields are included when token data is available from the session JSONL file. They are omitted when no usage data has been recorded yet (e.g. first prompt).
Unlike other events, heartbeat fires every prompt (not throttled). Use it for observability dashboards or liveness monitoring of long-running sessions.
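A sketch of receiver-side handling for those optional fields, using pointer fields to distinguish "absent" from "zero". The field names follow the description above and are assumptions about the exact schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Heartbeat models the per-prompt payload. Pointer fields let a
// receiver tell "no usage data recorded yet" (first prompt) apart
// from a genuine zero value.
type Heartbeat struct {
	Event         string   `json:"event"`
	Tokens        *int     `json:"tokens,omitempty"`
	ContextWindow *int     `json:"context_window,omitempty"`
	UsagePct      *float64 `json:"usage_pct,omitempty"`
}

// parseHeartbeat decodes one heartbeat payload.
func parseHeartbeat(raw []byte) (Heartbeat, error) {
	var h Heartbeat
	err := json.Unmarshal(raw, &h)
	return h, err
}

func main() {
	first, _ := parseHeartbeat([]byte(`{"event":"heartbeat"}`))
	later, _ := parseHeartbeat([]byte(`{"event":"heartbeat",
	  "tokens":50000,"context_window":200000,"usage_pct":25.0}`))

	fmt.Println(first.Tokens == nil) // true: no usage data yet
	fmt.Println(*later.UsagePct)     // 25
}
```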
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#security-model","level":2,"title":"Security Model","text":"Component Location Committed? Permissions Encryption key ~/.ctx/.ctx.key No (user-level) 0600 Encrypted URL .context/.notify.enc Yes (safe) 0600 Webhook URL Never on disk in plaintext N/A N/A
The key is shared with the scratchpad. If you rotate the encryption key, re-run ctx notify setup to re-encrypt the webhook URL with the new key.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#key-rotation","level":2,"title":"Key Rotation","text":"
ctx checks the age of the encryption key once per day. If it's older than 90 days (configurable via key_rotation_days), a VERBATIM nudge is emitted suggesting rotation.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#worktrees","level":2,"title":"Worktrees","text":"
The webhook URL is encrypted with the same encryption key (~/.ctx/.ctx.key). Because the key lives at the user level, it is shared across all worktrees on the same machine - notifications work in worktrees automatically.
Because the key is user-level, agents running in worktrees can send webhook alerts with no extra setup. For long autonomous runs, webhook notifications complement terminal monitoring of otherwise opaque worktree agents. Enrich journals and review results on the main branch after merging.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#event-log-the-local-complement","level":2,"title":"Event Log: The Local Complement","text":"
Don't need a webhook but want diagnostic visibility? Enable event_log: true in .ctxrc. The event log writes the same payload as webhooks to a local JSONL file (.context/state/events.jsonl) that you can query without any external service:
ctx system events --last 20 # recent hook activity\nctx system events --hook qa-reminder # filter by hook\n
Webhooks and event logging are independent: you can use either, both, or neither. Webhooks give you push notifications and an external audit trail. The event log gives you local queryability and ctx doctor integration.
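The kind of filtering ctx system events --hook performs can be sketched in a few lines of Go. The per-line shape below (an event field plus an optional detail.hook) is an assumption based on this page's payload description:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event is a minimal view of one JSONL line, carrying only the
// fields we filter on. The shape is an assumption based on the
// payload format described on this page.
type event struct {
	Event  string `json:"event"`
	Detail *struct {
		Hook string `json:"hook"`
	} `json:"detail"`
}

// byHook scans JSONL input line by line and returns the events
// whose detail references the named hook. Malformed lines are
// skipped rather than aborting the scan.
func byHook(jsonl, hook string) []event {
	var out []event
	sc := bufio.NewScanner(strings.NewReader(jsonl))
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip malformed lines
		}
		if e.Detail != nil && e.Detail.Hook == hook {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	log := `{"event":"relay","detail":{"hook":"qa-reminder"}}
{"event":"heartbeat"}
{"event":"relay","detail":{"hook":"post-commit"}}`
	fmt.Println(len(byHook(log, "qa-reminder"))) // 1
}
```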
See Troubleshooting for how they work together.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#tips","level":2,"title":"Tips","text":"
Fire-and-forget: Notifications never block. HTTP errors are silently ignored. No retry, no response parsing.
No webhook = no cost: When no webhook is configured, ctx notify exits immediately. System hooks that call notify.Send() add zero overhead.
Multiple projects: Each project has its own .notify.enc. You can point different projects at different webhooks.
Event filter is per-project: Configure notify.events in each project's .ctxrc independently.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#next-up","level":2,"title":"Next Up","text":"
Auditing System Hooks →: Verify your hooks are running, audit what they do, and get alerted when they go silent.
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/webhook-notifications/#see-also","level":2,"title":"See Also","text":"
CLI Reference: ctx notify: full command reference
Configuration: .ctxrc settings including notify options
Running an Unattended AI Agent: how loops work and how notifications fit in
Hook Output Patterns: understanding VERBATIM relays, agent directives, and hard gates
Auditing System Hooks: using webhooks as an external audit trail for hook execution
","path":["Recipes","Hooks and Notifications","Webhook Notifications"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/","level":1,"title":"When to Use a Team of Agents","text":"","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-problem","level":2,"title":"The Problem","text":"
You have a task, and you are wondering: \"should I throw more agents at it?\"
More agents can mean faster results, but they also mean coordination overhead, merge conflicts, divergent mental models, and wasted tokens re-reading context.
The wrong setup costs more than it saves.
This recipe is a decision framework: It helps you choose between a single agent, parallel worktrees, and a full agent team, and explains what ctx provides at each level.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tldr","level":2,"title":"TL;DR","text":"
Single agent for most work;
Parallel worktrees when tasks touch disjoint file sets;
Agent teams only when tasks need real-time coordination. When in doubt, start with one agent.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-spectrum","level":2,"title":"The Spectrum","text":"
There are three modes, ordered by complexity:
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#1-single-agent-default","level":3,"title":"1. Single Agent (Default)","text":"
One agent, one session, one branch. This is correct for most work.
Use this when:
The task has linear dependencies (step 2 needs step 1's output);
Changes touch overlapping files;
You need tight feedback loops (review each change before the next);
The task requires deep understanding of a single area;
Total effort is less than a few hours of agent time.
ctx provides: Full .context/: tasks, decisions, learnings, conventions, all in one session.
The agent builds a coherent mental model and persists it as it goes.
Example tasks: Bug fixes, feature implementation, refactoring a module, writing documentation for one area, debugging.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#2-parallel-worktrees-independent-tracks","level":3,"title":"2. Parallel Worktrees (Independent Tracks)","text":"
2-4 agents, each in a separate git worktree on its own branch, working on non-overlapping parts of the codebase.
Use this when:
You have 5+ independent tasks in the backlog;
Tasks group cleanly by directory or package;
File overlap between groups is zero or near-zero;
Each track can be completed and merged independently;
You want parallelism without coordination complexity.
ctx provides: Shared .context/ via git (each worktree sees the same tasks, decisions, conventions). /ctx-worktree skill for setup and teardown. TASKS.md as a lightweight work queue.
Example tasks: Docs + new package + test coverage (three tracks that don't touch the same files). Parallel recipe writing. Independent module development.
See: Parallel Agent Development with Git Worktrees
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#3-agent-team-coordinated-swarm","level":3,"title":"3. Agent Team (Coordinated Swarm)","text":"
Multiple agents communicating via messages, sharing a task list, with a lead agent coordinating. Claude Code's team/swarm feature.
Use this when:
Tasks have dependencies but can still partially overlap;
You need research and implementation happening simultaneously;
The work requires different roles (researcher, implementer, tester);
A lead agent needs to review and integrate others' work;
The task is large enough that coordination cost is justified.
ctx provides: .context/ as shared state that all agents can read. Task tracking for work assignment. Decisions and learnings as team memory that survives individual agent turnover.
Example tasks: Large refactor across modules where a lead reviews merges. Research and implementation where one agent explores options while another builds. Multi-file feature that needs integration testing after parallel implementation.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-decision-framework","level":2,"title":"The Decision Framework","text":"
Ask these questions in order:
Can one agent do this in a reasonable time?\n YES → Single agent. Stop here.\n NO ↓\n\nCan the work be split into non-overlapping file sets?\n YES → Parallel worktrees (2-4 tracks)\n NO ↓\n\nDo the subtasks need to communicate during execution?\n YES → Agent team with lead coordination\n NO → Parallel worktrees with a merge step\n
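The decision tree above can be encoded directly. This is illustrative only, not a ctx API:

```go
package main

import "fmt"

// chooseMode encodes the three questions from the framework, in
// order: feasibility for one agent, disjoint file sets, and the
// need for in-flight communication.
func chooseMode(oneAgentFeasible, disjointFiles, needsComms bool) string {
	if oneAgentFeasible {
		return "single agent"
	}
	if disjointFiles {
		return "parallel worktrees"
	}
	if needsComms {
		return "agent team"
	}
	return "parallel worktrees with a merge step"
}

func main() {
	fmt.Println(chooseMode(true, false, false))  // single agent
	fmt.Println(chooseMode(false, true, false))  // parallel worktrees
	fmt.Println(chooseMode(false, false, true))  // agent team
}
```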
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#the-file-overlap-test","level":3,"title":"The File Overlap Test","text":"
This is the critical decision point. Before choosing multi-agent, list the files each subtask would touch. If two subtasks modify the same file, they belong in the same track (or the same single-agent session).
You: \"I want to parallelize these tasks. Which files would each one touch?\"\n\nAgent: [reads `TASKS.md`, analyzes codebase]\n \"Task A touches internal/config/ and internal/cli/initialize/\n Task B touches docs/ and site/\n Task C touches internal/config/ and internal/cli/status/\n\n Tasks A and C overlap on internal/config/ # they should be\n in the same track. Task B is independent.\"\n
When in doubt, keep things in one track. A merge conflict in a critical file costs more time than the parallelism saves.
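The overlap test itself is just a set intersection. A sketch, using the hypothetical paths from the conversation above:

```go
package main

import "fmt"

// overlap returns the paths touched by both tasks. A non-empty
// result means the two tasks belong in the same track.
func overlap(a, b []string) []string {
	seen := map[string]bool{}
	for _, f := range a {
		seen[f] = true
	}
	var out []string
	for _, f := range b {
		if seen[f] {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	taskA := []string{"internal/config/", "internal/cli/initialize/"}
	taskC := []string{"internal/config/", "internal/cli/status/"}
	fmt.Println(overlap(taskA, taskC)) // [internal/config/]
}
```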
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#when-teams-make-things-worse","level":2,"title":"When Teams Make Things Worse","text":"
\"More agents\" is not always better. Watch for these patterns:
Merge hell: If you are spending more time resolving conflicts than the parallel work saved, the split was wrong: Re-group by file overlap.
Context divergence: Each agent builds its own mental model. After 30 minutes of independent work, agent A might make assumptions that contradict agent B's approach. Shorter tracks with frequent merges reduce this.
Coordination theater: A lead agent spending most of its time assigning tasks, checking status, and sending messages instead of doing work. If the task list is clear enough, worktrees with no communication are cheaper.
Re-reading overhead: Every agent reads .context/ on startup. A team of 4 agents each reading 4000 tokens of context = 16000 tokens before anyone does any work. For small tasks, that overhead dominates.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#what-ctx-gives-you-at-each-level","level":2,"title":"What ctx Gives You at Each Level","text":"ctx Feature Single Agent Worktrees Team .context/ files Full access Shared via git Shared via filesystem TASKS.md Work queue Split by track Assigned by lead Decisions/Learnings Persisted in session Persisted per branch Persisted by any agent /ctx-next Picks next task Picks within track Lead assigns /ctx-worktree N/A Setup + teardown Optional /ctx-commit Normal commits Per-branch commits Per-agent commits","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#team-composition-recipes","level":2,"title":"Team Composition Recipes","text":"
Four practical team compositions for common workflows.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#feature-development-3-agents","level":3,"title":"Feature Development (3 agents)","text":"Role Responsibility Architect Writes spec in specs/, breaks work into TASKS.md phases Implementer Picks tasks from TASKS.md, writes code, marks [x] done Reviewer Runs tests, ctx drift, lint; files issues as new tasks
Coordination: TASKS.md checkboxes. Architect writes tasks before implementer starts. Reviewer runs after each implementer commit.
Anti-pattern: All three agents editing the same file simultaneously. Sequence the work so only one agent touches a file at a time.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#consolidation-sprint-3-4-agents","level":3,"title":"Consolidation Sprint (3-4 agents)","text":"Role Responsibility Auditor Runs ctx drift, identifies stale paths and broken refs Code Fixer Updates source code to match context (or vice versa) Doc Writer Updates ARCHITECTURE.md, CONVENTIONS.md, and docs/ Test Fixer (Optional) Fixes tests broken by the fixer's changes
Coordination: Auditor's ctx drift output is the shared work queue. Each agent claims a subset of issues by adding #in-progress labels.
Anti-pattern: Fixer and doc writer both editing ARCHITECTURE.md. Assign file ownership explicitly.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#release-prep-2-agents","level":3,"title":"Release Prep (2 agents)","text":"Role Responsibility Release Notes Generates changelog from commits, writes release notes Validation Runs full test suite, lint, build across platforms
Coordination: Both read TASKS.md to identify what shipped. Release notes agent works from git log; validation agent works from make audit.
Anti-pattern: Release notes agent running tests \"to verify.\" Each agent stays in its lane.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#documentation-sprint-3-agents","level":3,"title":"Documentation Sprint (3 agents)","text":"Role Responsibility Content Writes new pages, expands existing docs Cross-linker Adds nav entries, cross-references, \"See Also\" sections Verifier Builds site, checks broken links, validates rendering
Coordination: Content agent writes files first. Cross-linker updates zensical.toml and index pages after content lands. Verifier builds after each batch.
Anti-pattern: Content and cross-linker both editing zensical.toml. Batch nav updates into the cross-linker's pass.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#tips","level":2,"title":"Tips","text":"
Start with one agent: Only add parallelism when you have identified the bottleneck. \"This would go faster with more agents\" is usually wrong for tasks under 2 hours.
The 3-4 agent ceiling is real: Coordination overhead grows quadratically. 2 agents = 1 communication pair. 4 agents = 6 pairs. Beyond 4, you are managing agents more than doing work.
Worktrees > teams for most parallelism needs: If agents don't need to talk to each other during execution, worktrees give you parallelism with zero coordination overhead.
Use ctx as the shared brain: Whether it's one agent or four, the .context/ directory is the single source of truth. Decisions go in DECISIONS.md, not in chat messages between agents.
Merge early, merge often: Long-lived parallel branches diverge. Merge a track as soon as it's done rather than waiting for all tracks to finish.
TASKS.md conflicts are normal: Multiple agents completing different tasks will conflict on merge. The resolution is always additive: accept all [x] completions from both sides.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#next-up","level":2,"title":"Next Up","text":"
Parallel Agent Development with Git Worktrees →: Run multiple agents on independent task tracks using git worktrees.
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#go-deeper","level":2,"title":"Go Deeper","text":"
CLI Reference: all commands and flags
Integrations: setup for Claude Code, Cursor, Aider
Session Journal: browse and search session history
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"recipes/when-to-use-agent-teams/#see-also","level":2,"title":"See Also","text":"
Parallel Agent Development with Git Worktrees: the mechanical \"how\" for worktree-based parallelism
Running an Unattended AI Agent: serial autonomous loops: a different scaling strategy
Tracking Work Across Sessions: managing the task backlog that feeds into any multi-agent setup
","path":["Recipes","Agents and Automation","When to Use a Team of Agents"],"tags":[]},{"location":"reference/","level":1,"title":"Reference","text":"
Technical reference for ctx commands, skills, and internals.
","path":["Reference"],"tags":[]},{"location":"reference/#the-system-explains-itself","level":3,"title":"The System Explains Itself","text":"
The 12 properties that must hold for any valid ctx implementation. Not features: constraints. The system's contract with its users and contributors.
","path":["Reference"],"tags":[]},{"location":"reference/audit-conventions/","level":1,"title":"Audit Conventions: Common Patterns and Fixes","text":"
This guide documents the code conventions enforced by internal/audit/ AST tests. Each section shows the violation pattern, the fix, and the rationale. When a test fails, find the matching section below.
All tests skip _test.go files. The patterns apply only to production code under internal/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#variable-shadowing-bare-err-reuse","level":2,"title":"Variable Shadowing (bare err := reuse)","text":"
Test: TestNoVariableShadowing
When a function has multiple := assignments to err, each shadows the previous one. This makes it impossible to tell which error a later if err != nil is checking.
Rule: Use descriptive error names (readErr, writeErr, parseErr, walkErr, absErr, relErr) so each error site is independently identifiable.
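Other sections show Before/After pairs; here is a runnable sketch of the convention, not code from ctx. With a single reused err, a failure in the second conversion would be indistinguishable at a glance from the first:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePair converts "a,b" into two ints, with each error site
// independently named per the convention above (firstErr and
// secondErr rather than two bare err := assignments).
func parsePair(s string) (int, int, error) {
	parts := strings.SplitN(s, ",", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("want a,b: %q", s)
	}
	a, firstErr := strconv.Atoi(parts[0])
	if firstErr != nil {
		return 0, 0, firstErr
	}
	b, secondErr := strconv.Atoi(parts[1])
	if secondErr != nil {
		return 0, 0, secondErr
	}
	return a, b, nil
}

func main() {
	a, b, _ := parsePair("3,4")
	fmt.Println(a + b) // 7
}
```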
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#import-name-shadowing","level":2,"title":"Import Name Shadowing","text":"
Test: TestNoImportNameShadowing
When a local variable has the same name as an imported package, the import becomes inaccessible in that scope.
Before:
import \"github.com/ActiveMemory/ctx/internal/session\"\n\nfunc process(session *entity.Session) { // param shadows import\n // session package is now unreachable here\n}\n
After:
import \"github.com/ActiveMemory/ctx/internal/session\"\n\nfunc process(sess *entity.Session) {\n // session package still accessible\n}\n
Rule: Parameters, variables, and return values must not reuse imported package names. Common renames: session -> sess, token -> tok, config -> cfg, entry -> ent.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#magic-strings","level":2,"title":"Magic Strings","text":"
Test: TestNoMagicStrings
String literals in function bodies are invisible to refactoring tools and cause silent breakage when the value changes in one place but not another.
Before (string literals):
func loadContext() {\n data := filepath.Join(dir, \"TASKS.md\")\n if strings.HasSuffix(name, \".yaml\") {\n // ...\n }\n}\n
After:
func loadContext() {\n data := filepath.Join(dir, config.FilenameTask)\n if strings.HasSuffix(name, config.ExtYAML) {\n // ...\n }\n}\n
Named constants also replace bare lengths, as in this hash-prefix example:
func EntryHash(text string) string {\n h := sha256.Sum256([]byte(text))\n return hex.EncodeToString(h[:cfgFmt.HashPrefixLen])\n}\n
Before (URL schemes — also caught):
if strings.HasPrefix(target, \"https://\") ||\n strings.HasPrefix(target, \"http://\") {\n return target\n}\n
After:
if strings.HasPrefix(target, cfgHTTP.PrefixHTTPS) ||\n strings.HasPrefix(target, cfgHTTP.PrefixHTTP) {\n return target\n}\n
Exempt from this check:
Empty string \"\", single space \" \", indentation strings
Regex capture references ($1, ${name})
const and var definition sites (that's where constants live)
Struct tags
Import paths
Packages under internal/config/, internal/assets/tpl/
Rule: If a string is used for comparison, path construction, or appears in 3+ files, it belongs in internal/config/ as a constant. Format strings belong in internal/config/ as named constants (e.g., cfgGit.FlagLastN, cfgTrace.RefFormat). User-facing prose belongs in internal/assets/ YAML files accessed via desc.Text().
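Several fmt.Sprintf forms have dedicated stdlib replacements; a runnable sketch of the most common ones (the mapping table below lists the full set):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strconv"
)

// directConversions shows each fmt.Sprintf verb replaced by its
// dedicated stdlib call, avoiding format-string literals.
func directConversions() []string {
	n, _ := strconv.Atoi("17") // replaces fmt.Sscanf(s, "%d", &n)
	return []string{
		strconv.Itoa(42),                       // was fmt.Sprintf("%d", 42)
		strconv.FormatInt(int64(255), 10),      // was fmt.Sprintf("%d", i64)
		hex.EncodeToString([]byte{0xde, 0xad}), // was fmt.Sprintf("%x", b)
		strconv.Quote("hi"),                    // was fmt.Sprintf("%q", s)
		strconv.Itoa(n),
	}
}

func main() {
	fmt.Println(directConversions()) // [42 255 dead "hi" 17]
}
```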
Common fix for fmt.Sprintf with format verbs:
Pattern Fix fmt.Sprintf(\"%d\", n)strconv.Itoa(n)fmt.Sprintf(\"%d\", int64Val)strconv.FormatInt(int64Val, 10)fmt.Sprintf(\"%x\", bytes)hex.EncodeToString(bytes)fmt.Sprintf(\"%q\", s)strconv.Quote(s)fmt.Sscanf(s, \"%d\", &n)strconv.Atoi(s)fmt.Sprintf(\"-%d\", n)fmt.Sprintf(cfgGit.FlagLastN, n)\"https://\"cfgHTTP.PrefixHTTPS\"<\" config constant in config/html/","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#direct-printf-calls","level":2,"title":"Direct Printf Calls","text":"
Test: TestNoPrintfCalls
cmd.Printf and cmd.PrintErrf bypass the write-package formatting pipeline and scatter user-facing text across the codebase.
Rule: All formatted output goes through internal/write/ which uses cmd.Print/cmd.Println with pre-formatted strings from desc.Text().
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#raw-time-format-strings","level":2,"title":"Raw Time Format Strings","text":"
Test: TestNoRawTimeFormats
Inline time format strings (\"2006-01-02\", \"15:04:05\") drift when one call site is updated but others are missed.
Rule: All time format strings must use constants from internal/config/time/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#direct-flag-registration","level":2,"title":"Direct Flag Registration","text":"
Test: TestNoFlagBindOutsideFlagbind
Direct cobra flag calls (.Flags().StringVar(), etc.) scatter flag wiring across dozens of cmd.go files. Centralizing through internal/flagbind/ gives one place to audit flag names, defaults, and description key lookups.
Before:
func Cmd() *cobra.Command {\n var output string\n c := &cobra.Command{Use: cmd.UseStatus}\n c.Flags().StringVarP(&output, \"output\", \"o\", \"\",\n \"output format\")\n return c\n}\n
After:
func Cmd() *cobra.Command {\n var output string\n c := &cobra.Command{Use: cmd.UseStatus}\n flagbind.StringFlagShort(c, &output, flag.Output,\n flag.OutputShort, cmd.DescKeyOutput)\n return c\n}\n
Rule: All flag registration goes through internal/flagbind/. If the helper you need doesn't exist, add it to flagbind/flag.go before using it.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#todo-comments","level":2,"title":"TODO Comments","text":"
Test: TestNoTODOComments
TODO, FIXME, HACK, and XXX comments in production code are invisible to project tracking. They accumulate silently and never get addressed.
Remove the comment and add a task to .context/TASKS.md:
- [ ] Handle pagination in listEntries (internal/task/task.go)\n
Rule: Deferred work lives in TASKS.md, not in source comments.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#dead-exports","level":2,"title":"Dead Exports","text":"
Test: TestNoDeadExports
Exported symbols with zero references outside their definition file are dead weight. They increase API surface, confuse contributors, and cost maintenance.
Fix: Either delete the export (preferred) or demote it to unexported if it's still used within the file.
If the symbol existed for historical reasons and might be needed again, move it to quarantine/deadcode/ with a .dead extension. This preserves the code in git without polluting the live codebase:
// Dead exports quarantined from internal/config/flag/flag.go\n// Quarantined: 2026-04-02\n// Restore from git history if needed.\n
Rule: If a test-only allowlist entry is needed (the export exists only for test use), add the fully qualified symbol to testOnlyExports in dead_exports_test.go. Keep this list small — prefer eliminating the export.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#core-package-structure","level":2,"title":"Core Package Structure","text":"
Test: TestCoreStructure
core/ directories under internal/cli/ must contain only doc.go and test files at the top level. All domain logic lives in subpackages. This prevents core/ from becoming a god package.
Rule: Extract each logical unit into its own subpackage under core/. Each subpackage gets a doc.go. The subpackage name should match the domain concept (golang, check, fix, store), not a generic label (util, helper).
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#cross-package-types","level":2,"title":"Cross-Package Types","text":"
Test: TestCrossPackageTypes
When a type defined in one package is used from a different module (e.g., cli/doctor importing a type from cli/notify), the type has crossed its module boundary. Cross-cutting types belong in internal/entity/ for discoverability.
Exempt: Types inside entity/, proto/, core/ subpackages, and config/ packages. Same-module usage (e.g., cli/doctor/cmd/ using cli/doctor/core/) is not flagged.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#type-file-convention","level":2,"title":"Type File Convention","text":"
Exported types in core/ subpackages should live in types.go (the convention from CONVENTIONS.md), not scattered across implementation files. This makes type definitions discoverable. TestTypeFileConventionReport generates a diagnostic summary of all type placements for triage.
Exception: entity/ organizes by domain (task.go, session.go), proto/ uses schema.go, and err/ packages colocate error types with their domain context.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#desckey-yaml-linkage","level":2,"title":"DescKey / YAML Linkage","text":"
Test: TestDescKeyYAMLLinkage
Every DescKey constant must have a corresponding key in the YAML asset files, and every YAML key must have a corresponding DescKey constant. Orphans in either direction mean dead text or runtime panics.
Fix for orphan YAML key: Delete the YAML entry, or add the corresponding DescKey constant in config/embed/{text,cmd,flag}/.
Fix for orphan DescKey: Delete the constant, or add the corresponding entry in the YAML file under internal/assets/commands/text/, cmd/, or flag/.
If the orphan YAML entry was once valid but the feature was removed, move the YAML entry to a .dead file in quarantine/deadcode/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#package-doc-quality","level":2,"title":"Package Doc Quality","text":"
Test: TestPackageDocQuality
Every package under internal/ must have a doc.go with a meaningful package doc comment (at least 8 lines of real content). One-liners and file-list patterns (// - foo.go, // Source files:) are flagged because they drift as files change.
Template:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\n// Package mypackage does X.\n//\n// It handles Y by doing Z. The main entry point is [FunctionName]\n// which accepts A and returns B.\n//\n// Configuration is read from [config.SomeConstant]. Output is\n// written through [write.SomeHelper].\n//\n// This package is used by [parentpackage] during the W lifecycle\n// phase.\npackage mypackage\n
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#inline-regex-compilation","level":2,"title":"Inline Regex Compilation","text":"
Test: TestNoInlineRegexpCompile
regexp.MustCompile and regexp.Compile inside function bodies recompile the pattern on every call. Compiled patterns belong at package level.
Before:
func parse(s string) bool {\n re := regexp.MustCompile(`\\d{4}-\\d{2}-\\d{2}`)\n return re.MatchString(s)\n}\n
After:
// In internal/config/regex/regex.go:\n// DatePattern matches ISO date format (YYYY-MM-DD).\nvar DatePattern = regexp.MustCompile(`\\d{4}-\\d{2}-\\d{2}`)\n\n// In calling package:\nfunc parse(s string) bool {\n return regex.DatePattern.MatchString(s)\n}\n
Rule: All compiled regexes live in internal/config/regex/ as package-level var declarations. Two tests enforce this: TestNoInlineRegexpCompile catches function-body compilation, and TestNoRegexpOutsideRegexPkg catches package-level compilation outside config/regex/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#doc-comments","level":2,"title":"Doc Comments","text":"
Test: TestDocComments
All functions (exported and unexported), structs, and package-level variables must have a doc comment. Config packages allow group doc comments for const blocks.
// buildIndex maps entry names to their position in the\n// ordered slice for O(1) lookup during reconciliation.\n//\n// Parameters:\n// - entries: ordered slice of entries to index\n//\n// Returns:\n// - map[string]int: name-to-position mapping\nfunc buildIndex(entries []Entry) map[string]int {\n
Rule: Every function, struct, and package-level var gets a doc comment in godoc format. Functions include Parameters: and Returns: sections. Structs with 2+ fields document every field. See CONVENTIONS.md for the full template.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#line-length","level":2,"title":"Line Length","text":"
Test: TestLineLength
Lines in non-test Go files must not exceed 80 characters. This is a hard check, not a suggestion.
Rule: Break at natural points: function arguments, struct fields, chained calls. Long strings (URLs, struct tags) are the rare acceptable exception.
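A minimal sketch of the breaking style, using a hypothetical helper (buildGreeting is not a ctx function; the names are invented for illustration):

```go
package main

import "fmt"

// buildGreeting is a hypothetical helper used only to show where
// long calls break: after the opening parenthesis, one argument
// per line, closing parenthesis on its own line.
func buildGreeting(name string, role string, project string) string {
	return fmt.Sprintf("%s (%s) on %s", name, role, project)
}

func main() {
	// Instead of one line exceeding 80 characters, each argument
	// gets its own line at a natural break point.
	greeting := buildGreeting(
		"Alexandra Featherstonehaugh",
		"principal reliability engineer",
		"the context assembly pipeline",
	)
	fmt.Println(greeting)
}
```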
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#literal-whitespace","level":2,"title":"Literal Whitespace","text":"
Test: TestNoLiteralWhitespace
Bare whitespace string and byte literals (\"\\n\", \"\\r\\n\", \"\\t\") must not appear outside internal/config/token/. All other packages use the token constants.
Before:
output := strings.Join(lines, \"\\n\")\n
After:
output := strings.Join(lines, token.Newline)\n
Rule: Whitespace literals are defined once in internal/config/token/. Use token.Newline, token.Tab, token.CRLF, etc.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#magic-numeric-values","level":2,"title":"Magic Numeric Values","text":"
Test: TestNoMagicValues
Numeric literals in function bodies need constants, with narrow exceptions.
Before:
if len(entries) > 100 {\n entries = entries[:100]\n}\n
After:
if len(entries) > config.MaxEntries {\n entries = entries[:config.MaxEntries]\n}\n
Exempt: 0, 1, -1, 2–10, strconv radix/bitsize args (10, 32, 64 in strconv.Parse*/Format*), octal permissions (caught separately by TestNoRawPermissions), and const/var definition sites.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#inline-separators","level":2,"title":"Inline Separators","text":"
Test: TestNoInlineSeparators
strings.Join calls must use token constants for their separator argument, not string literals.
Before:
result := strings.Join(parts, \", \")\n
After:
result := strings.Join(parts, token.CommaSep)\n
Rule: Separator strings live in internal/config/token/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#stuttery-function-names","level":2,"title":"Stuttery Function Names","text":"
Test: TestNoStutteryFunctions
Function names must not redundantly include their package name as a PascalCase word boundary. Go callers already write pkg.Function, so pkg.PkgFunction stutters.
Before:
// In package write\nfunc WriteJournal(cmd *cobra.Command, ...) {\n
After:
// In package write\nfunc Journal(cmd *cobra.Command, ...) {\n
Exempt: Identity functions like write.Write / write.write.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#predicate-naming-no-ishascan-prefix","level":2,"title":"Predicate Naming (no Is/Has/Can prefix)","text":"
Test: None (manual review convention)
Exported methods that return bool must not use Is, Has, or Can prefixes. The predicate reads more naturally without them, especially at call sites where the package name provides context.
Rule: Drop the prefix. Private helpers may use prefixes when it reads more naturally (isValid in a local context is fine). This convention applies to exported methods and package-level functions. See CONVENTIONS.md \"Predicates\" section.
This is not yet enforced by an AST test — it requires semantic understanding of return types and naming intent that makes automated detection fragile. Apply during code review.
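A sketch of the rename (Token and Expired are invented for illustration; they are not ctx types):

```go
package main

import "fmt"

// Token is a hypothetical type used to illustrate the naming rule.
type Token struct{ expiry int64 }

// Before (flagged): token.IsExpired(now).
// After: drop the prefix; the call site still reads as a predicate.

// Expired reports whether the token is past its expiry time.
func (t Token) Expired(now int64) bool {
	return now > t.expiry
}

func main() {
	t := Token{expiry: 100}
	fmt.Println(t.Expired(200)) // true
}
```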
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#mixed-visibility","level":2,"title":"Mixed Visibility","text":"
Test: TestNoMixedVisibility
Files with exported functions must not also contain unexported functions. Public API and private helpers live in separate files.
Exempt: Files with exactly one function, doc.go, test files.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#stray-errgo-files","level":2,"title":"Stray err.go Files","text":"
Test: TestNoStrayErrFiles
err.go files must only exist under internal/err/. Error constructors anywhere else create a broken-window pattern where contributors add local error definitions when they see a local err.go.
Fix: Move the error constructor to internal/err/<domain>/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#cli-cmd-structure","level":2,"title":"CLI Cmd Structure","text":"
Test: TestCLICmdStructure
Each cmd/$sub/ directory under internal/cli/ may contain only cmd.go, run.go, doc.go, and test files. Extra .go files (helpers, output formatters, types) belong in the corresponding core/ subpackage.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#desckey-namespace","level":2,"title":"DescKey Namespace","text":"
Three tests enforce DescKey/Use constant discipline:
Use* constants appear only in cobra Use: struct field assignments — never as arguments to desc.Text() or elsewhere.
DescKey* constants are passed only to assets.CommandDesc(), assets.FlagDesc(), or desc.Text() — never to cobra Use:.
No cross-namespace lookups — TextDescKey must not be passed to CommandDesc(), FlagDescKey must not be passed to Text(), etc.
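Since cobra and the ctx assets/desc packages are not reproduced here, the sketch below uses local stand-ins to show the namespace discipline (Command, commandDesc, and both constants are illustrative, not the real API):

```go
package main

import "fmt"

// Command stands in for cobra.Command; commandDesc stands in for
// assets.CommandDesc. All names here are assumptions.
type Command struct {
	Use   string
	Short string
}

const (
	UsePad     = "pad"           // Use* constants: cobra Use: fields only
	DescKeyPad = "cmd.pad.short" // DescKey* constants: desc lookups only
)

// commandDesc resolves a DescKey* constant to its description text.
func commandDesc(key string) string {
	return map[string]string{"cmd.pad.short": "Manage the scratchpad"}[key]
}

func main() {
	cmd := Command{
		Use:   UsePad,                  // never a DescKey* here
		Short: commandDesc(DescKeyPad), // never a Use* here
	}
	fmt.Println(cmd.Use, "-", cmd.Short)
}
```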
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#yaml-examples-registry-linkage","level":2,"title":"YAML Examples / Registry Linkage","text":"
Every key in examples.yaml and registry.yaml must match a known entry type constant. Prevents orphan entries that are never rendered.
Fix: Delete the orphan YAML entry, or add the corresponding constant in config/entry/.
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#other-enforced-patterns","level":2,"title":"Other Enforced Patterns","text":"
These tests follow the same fix approach — extract the operation to its designated package:
| Test | Violation | Fix |
| --- | --- | --- |
| TestNoNakedErrors | fmt.Errorf/errors.New outside internal/err/ | Add error constructor to internal/err/<domain>/ |
| TestNoRawFileIO | Direct os.ReadFile, os.Create, etc. | Use io.SafeReadFile, io.SafeWriteFile, etc. |
| TestNoRawLogging | Direct fmt.Fprintf(os.Stderr, ...) | Use log/warn.Warn() or log/event.Append() |
| TestNoExecOutsideExecPkg | exec.Command outside internal/exec/ | Add command to internal/exec/<domain>/ |
| TestNoCmdPrintOutsideWrite | cmd.Print* outside internal/write/ | Add output helper to internal/write/<domain>/ |
| TestNoRawPermissions | Octal literals (0644, 0755) | Use config/fs.PermFile, config/fs.PermExec, etc. |
| TestNoErrorsAs | errors.As() | Use errors.AsType() (generic, Go 1.23+) |
| TestNoStringConcatPaths | dir + \"/\" + file | Use filepath.Join(dir, file) |
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/audit-conventions/#general-fix-workflow","level":2,"title":"General Fix Workflow","text":"
When an audit test fails:
Read the error message. It includes file:line and a description of the violation.
Find the matching section above. The test name maps directly to a section.
Apply the pattern. Most fixes are mechanical: extract to the right package, rename a variable, or replace a literal with a constant.
Run make test before committing. Audit tests run as part of go test ./internal/audit/.
Don't add allowlist entries as a first resort. Fix the code. Allowlists exist only for genuinely unfixable cases (test-only exports, config packages that are definitionally exempt).
","path":["Audit Conventions: Common Patterns and Fixes"],"tags":[]},{"location":"reference/comparison/","level":1,"title":"Tool Ecosystem","text":"","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#high-level-mental-model","level":2,"title":"High-Level Mental Model","text":"
Many tools help AI think.
ctx helps AI remember.
Not by storing thoughts,
but by preserving intent.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#how-ctx-differs-from-similar-tools","level":2,"title":"How ctx Differs from Similar Tools","text":"
There are many tools in the AI ecosystem that touch parts of the context problem:
Some manage prompts.
Some retrieve data.
Some provide runtime context objects.
Some offer enterprise platforms.
ctx focuses on a different layer entirely.
This page explains where ctx fits, and where it intentionally does not.
That single difference explains nearly all of ctx's design choices.
| Question | Most tools | ctx |
| --- | --- | --- |
| Where does context live? | In prompts or APIs | In files |
| How long does it last? | One request / one session | Across time |
| Who can read it? | The model | Humans and tools |
| How is it updated? | Implicitly | Explicitly |
| Is it inspectable? | Rarely | Always |
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#prompt-management-tools","level":2,"title":"Prompt Management Tools","text":"
Examples include:
prompt templates;
reusable system prompts;
prompt libraries;
prompt versioning tools.
These tools help you start a session.
They do not help you continue one.
Prompt tools:
inject text at session start;
are ephemeral by design;
do not evolve with the project.
ctx:
persists knowledge over time;
accumulates decisions and learnings;
makes the context part of the repository itself.
Prompt tooling and ctx are complementary, not competing: they operate at different layers.
Users often evaluate ctx against specific tools they already use. These comparisons clarify where responsibilities overlap, where they diverge, and where the tools are genuinely complementary.
Anthropic's auto-memory is tool-managed memory (L2): the model decides what to remember, stores it automatically, and retrieves it implicitly. ctx is system memory (L3): humans and agents explicitly curate decisions, learnings, and tasks in inspectable files.
Auto-memory is convenient - you do not configure anything. But it is also opaque: you cannot see what was stored, edit it precisely, or share it across tools. ctx files are plain Markdown in your repository, visible in diffs and code review.
The two are complementary. ctx can absorb auto-memory as an input source (importing what the model remembered into structured context files) while providing the durable, inspectable layer that auto-memory lacks.
Static rule files (.cursorrules, .claude/rules/) declare conventions: coding style, forbidden patterns, preferred libraries. They are effective for what to do and load automatically at session start.
ctx adds dimensions that rule files do not cover: architectural decisions with rationale, learnings discovered during development, active tasks, and a constitution that governs agent behavior. Critically, ctx context accumulates - each session can add to it, and token budgeting ensures only the most relevant context is injected.
Use rule files for static conventions. Use ctx for evolving project memory.
Aider's --read flag injects file contents at session start; --watch reloads them on change. The concept is similar to ctx's \"load\" step: make the agent aware of specific files.
The differences emerge beyond loading. Aider has no persistence model: nothing the agent learns during a session is written back. There is no token budgeting (large files consume the full context window), no priority ordering across file types, and no structured format for decisions or learnings. ctx provides the full lifecycle: load, accumulate, persist, and budget.
GitHub Copilot's @workspace performs workspace-wide code search. It answers \"what code exists?\" - finding function definitions, usages, and file structure across the repository.
ctx answers a different question: \"what did we decide?\" It stores architectural intent, not code indices. Copilot's workspace search and ctx's project memory are orthogonal; one finds code, the other preserves the reasoning behind it.
Cline's memory bank stores session context within the Cline extension. The motivation is similar to ctx: help the agent remember across sessions.
The key difference is portability. Cline memory is tied to Cline - it does not transfer to Claude Code, Cursor, Aider, or any other tool. ctx is tool-agnostic: context lives in plain files that any editor, agent, or script can read. Switching tools does not mean losing memory.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-a-good-fit","level":2,"title":"When ctx Is a Good Fit","text":"
ctx works best when:
you want AI work to compound over time;
architectural decisions matter;
context must be inspectable;
humans and AI must share the same source of truth;
Git history should include why, not just what.
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/comparison/#when-ctx-is-not-the-right-tool","level":2,"title":"When ctx Is Not the Right Tool","text":"
ctx is probably not what you want if:
you only need one-off prompts;
you rely exclusively on RAG;
you want autonomous agents without a human-readable state;
You Can't Import Expertise: why project-specific context matters more than generic best practices
","path":["Reference","Tool Ecosystem"],"tags":[]},{"location":"reference/design-invariants/","level":1,"title":"Invariants","text":"","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#the-system-explains-itself","level":1,"title":"The System Explains Itself","text":"
These are the properties that must hold for any valid ctx implementation.
These are not features.
These are constraints.
A change that violates an invariant is a category error, not an improvement.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#cognitive-state-tiers","level":2,"title":"Cognitive State Tiers","text":"
ctx distinguishes between three forms of state:
Authoritative state: Versioned, inspectable artifacts that define intent and survive time.
Delivery views: Deterministic assemblies of the authoritative state for a specific budget or workflow.
Ephemeral working state: Local, transient, or sensitive data that assists interaction but does not define system truth.
The invariants below apply primarily to the authoritative cognitive state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#1-cognitive-state-is-explicit","level":2,"title":"1. Cognitive State Is Explicit","text":"
All authoritative context lives in artifacts that can be inspected, reviewed, and versioned.
If something is important, it must exist as a file: Not only in a prompt, a chat, or a model's hidden memory.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#2-assembly-is-reproducible","level":2,"title":"2. Assembly Is Reproducible","text":"
Given the same:
repository state,
configuration,
and inputs,
context assembly produces the same result.
Heuristics may rank or filter for delivery under constraints.
They do not alter the authoritative state.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#3-the-authoritative-state-is-human-readable","level":2,"title":"3. The Authoritative State Is Human-Readable","text":"
The authoritative cognitive state must be stored in formats that a human can:
read,
diff,
review,
and edit directly.
Sensitive working memory may be encrypted at rest. However, encryption must not become the only representation of authoritative knowledge.
Reasoning, decisions, and outcomes must remain available after the interaction that produced them has ended.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#5-authority-is-user-defined","level":2,"title":"5. Authority Is User-Defined","text":"
What enters the authoritative context is an explicit human decision.
Models may suggest.
Automation may assist.
Selection is never implicit.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#6-operation-is-local-first","level":2,"title":"6. Operation Is Local-First","text":"
The core system must function without requiring network access or a remote service.
External systems may extend ctx.
They must not be required for its operation.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#7-versioning-is-the-memory-model","level":2,"title":"7. Versioning Is the Memory Model","text":"
The evolution of the authoritative cognitive state must be:
preserved,
inspectable,
and branchable.
Ephemeral and sensitive working state may use different retention and diff strategies by design.
Understanding includes understanding how we arrived here.
Authoritative cognitive state must have a defined layout that:
communicates intent,
supports navigation,
and prevents drift.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#9-verification-is-the-scoreboard","level":2,"title":"9. Verification Is the Scoreboard","text":"
Claims without recorded outcomes are noise.
Reality (observed and captured) is the only signal that compounds.
This invariant defines a required direction:
The authoritative state must be able to record expectation and result.
Work that has already produced understanding must not be re-derived from scratch.
Explored paths, rejected options, and validated conclusions are permanent assets.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#11-policies-are-encoded-not-remembered","level":2,"title":"11. Policies Are Encoded, not Remembered","text":"
Alignment must not depend on recall or goodwill.
Constraints that matter must exist in machine-readable form and participate in context assembly.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#12-the-system-explains-itself","level":2,"title":"12. The System Explains Itself","text":"
From the repository state alone it must be possible to determine:
To avoid category errors, ctx does not attempt to be:
a skill,
a prompt management tool,
a chat history viewer,
an autonomous agent runtime,
a vector database,
a hosted memory service.
Such systems may integrate with ctx.
They do not define it.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/design-invariants/#implications-for-contributions","level":1,"title":"Implications for Contributions","text":"
Valid contributions:
strengthen an invariant,
reduce the cost of maintaining an invariant,
or extend the system without violating invariants.
Invalid contributions:
introduce hidden authoritative state,
replace reproducible assembly with non-reproducible behavior,
make core operation depend on external services,
reduce human inspectability of authoritative state,
or bypass explicit user authority over what becomes authoritative.
Everything else (commands, skills, layouts, integrations, optimizations) is an implementation detail.
These invariants are the system.
","path":["Reference","Invariants"],"tags":[]},{"location":"reference/scratchpad/","level":1,"title":"Scratchpad","text":"","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#what-is-ctx-scratchpad","level":2,"title":"What Is ctx Scratchpad?","text":"
A one-liner scratchpad, encrypted at rest, synced via git.
Quick notes that don't fit decisions, learnings, or tasks: reminders, intermediate values, sensitive tokens, working memory during debugging. Entries are numbered, reorderable, and persist across sessions.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#encrypted-by-default","level":2,"title":"Encrypted by Default","text":"
Scratchpad entries are encrypted with AES-256-GCM before touching the disk.
| Component | Path | Git status |
| --- | --- | --- |
| Encryption key | ~/.ctx/.ctx.key | User-level, 0600 permissions |
| Encrypted data | .context/scratchpad.enc | Committed |
The key is generated automatically during ctx init (256-bit via crypto/rand) and stored at ~/.ctx/.ctx.key. One key per machine, shared across all projects.
The ciphertext format is [12-byte nonce][ciphertext+tag]. No external dependencies: Go stdlib only.
Because the key is .gitignored and the data is committed, you get:
At-rest encryption: the .enc file is opaque without the key
Git sync: push/pull the encrypted file like any other tracked file
Key separation: the key never leaves the machine unless you copy it
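The scheme above can be sketched with the Go standard library alone. This illustrates the [12-byte nonce][ciphertext+tag] layout; it is not the actual ctx implementation, and the function names seal/open are invented here:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with AES-256-GCM and prepends the
// 12-byte nonce, matching the [nonce][ciphertext+tag] layout.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key -> AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize()) // 12 bytes by default
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Seal appends ciphertext+tag to the nonce used as dst.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal: split off the nonce, then decrypt and
// authenticate the remainder.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	if len(sealed) < n {
		return nil, fmt.Errorf("sealed data too short")
	}
	return gcm.Open(nil, sealed[:n], sealed[n:], nil)
}

func main() {
	key := make([]byte, 32) // ctx init generates this via crypto/rand
	rand.Read(key)
	sealed, _ := seal(key, []byte("check DNS after deploy"))
	plain, _ := open(key, sealed)
	fmt.Println(string(plain))
}
```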
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#commands","level":2,"title":"Commands","text":"
| Command | Purpose |
| --- | --- |
| ctx pad | List all entries (numbered 1-based) |
| ctx pad show N | Output raw text of entry N (no prefix, pipe-friendly) |
| ctx pad add \"text\" | Append a new entry |
| ctx pad rm N | Remove entry at position N |
| ctx pad edit N \"text\" | Replace entry N with new text |
| ctx pad edit N --append \"text\" | Append text to the end of entry N |
| ctx pad edit N --prepend \"text\" | Prepend text to the beginning of entry N |
| ctx pad add TEXT --file PATH | Ingest a file as a blob entry (TEXT is the label) |
| ctx pad show N --out PATH | Write decoded blob content to a file |
| ctx pad mv N M | Move entry from position N to position M |
| ctx pad resolve | Show both sides of a merge conflict for resolution |
| ctx pad import FILE | Bulk-import lines from a file (or stdin with -) |
| ctx pad import --blob DIR | Import directory files as blob entries |
| ctx pad export [DIR] | Export all blob entries to a directory as files |
| ctx pad merge FILE... | Merge entries from other scratchpad files into current |
All commands decrypt on read, operate on plaintext in memory, and re-encrypt on write. The key file is never printed to stdout.
# Add a note\nctx pad add \"check DNS propagation after deploy\"\n\n# List everything\nctx pad\n# 1. check DNS propagation after deploy\n# 2. staging API key: sk-test-abc123\n\n# Show raw text (for piping)\nctx pad show 2\n# sk-test-abc123\n\n# Compose entries\nctx pad edit 1 --append \"$(ctx pad show 2)\"\n\n# Reorder\nctx pad mv 2 1\n\n# Clean up\nctx pad rm 2\n
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#bulk-import-and-export","level":2,"title":"Bulk Import and Export","text":"
Import lines from a file in bulk (each non-empty line becomes an entry):
# Import from a file\nctx pad import notes.txt\n\n# Import from stdin\ngrep TODO *.go | ctx pad import -\n
Export all blob entries to a directory as files:
# Export to a directory\nctx pad export ./ideas\n\n# Preview without writing\nctx pad export --dry-run\n\n# Overwrite existing files\nctx pad export --force ./backup\n
Combine entries from other scratchpad files into your current pad. Useful when merging work from parallel worktrees, other machines, or teammates:
# Merge from a worktree's encrypted scratchpad\nctx pad merge worktree/.context/scratchpad.enc\n\n# Merge from multiple sources (encrypted and plaintext)\nctx pad merge pad-a.enc notes.md\n\n# Merge a foreign encrypted pad using its key\nctx pad merge --key /other/.ctx.key foreign.enc\n\n# Preview without writing\nctx pad merge --dry-run pad-a.enc pad-b.md\n
Each input file is auto-detected as encrypted or plaintext: decryption is attempted first, and on failure the file is parsed as plain text. Entries are deduplicated by exact content, so running merge twice with the same file is safe.
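The dedup-by-exact-content behavior can be sketched as follows (mergeEntries is a hypothetical function, not the real implementation; it shows only why merging the same file twice is safe):

```go
package main

import "fmt"

// mergeEntries combines entries from several pads in order,
// keeping the first occurrence of each exact string. Re-merging
// the same pad therefore adds nothing.
func mergeEntries(pads ...[]string) []string {
	seen := make(map[string]bool)
	var merged []string
	for _, pad := range pads {
		for _, e := range pad {
			if !seen[e] {
				seen[e] = true
				merged = append(merged, e)
			}
		}
	}
	return merged
}

func main() {
	current := []string{"check DNS", "rotate key"}
	worktree := []string{"rotate key", "bump version"}
	// Passing worktree twice simulates running merge twice.
	fmt.Println(mergeEntries(current, worktree, worktree))
}
```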
The scratchpad can store small files (up to 64 KB) as blob entries. Files are base64-encoded and stored with a human-readable label.
# Ingest a file: first argument is the label\nctx pad add \"deploy config\" --file ./deploy.yaml\n\n# Listing shows label with a [BLOB] marker\nctx pad\n# 1. check DNS propagation after deploy\n# 2. deploy config [BLOB]\n\n# Extract to a file\nctx pad show 2 --out ./recovered.yaml\n\n# Or print decoded content to stdout\nctx pad show 2\n
Blob entries are encrypted identically to text entries. The internal format is label:::base64data; you never need to construct it manually.
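For illustration, the label:::base64data form described above round-trips like this (encodeBlob/decodeBlob are hypothetical names; the real internals may differ):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// encodeBlob packs a label and file content into label:::base64data.
func encodeBlob(label string, content []byte) string {
	return label + ":::" + base64.StdEncoding.EncodeToString(content)
}

// decodeBlob splits an entry back into its label and raw content.
func decodeBlob(entry string) (string, []byte, error) {
	label, data, ok := strings.Cut(entry, ":::")
	if !ok {
		return "", nil, fmt.Errorf("not a blob entry")
	}
	content, err := base64.StdEncoding.DecodeString(data)
	return label, content, err
}

func main() {
	entry := encodeBlob("deploy config", []byte("replicas: 3\n"))
	label, content, _ := decodeBlob(entry)
	fmt.Printf("%s [BLOB] %d bytes\n", label, len(content))
}
```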
| Constraint | Value |
| --- | --- |
| Max file size (pre-encoding) | 64 KB |
| Storage format | label:::base64(content) |
| Display label | [BLOB] in listings |
When Should You Use Blobs
Blobs are for small files you want encrypted and portable: config snippets, key fragments, deployment manifests, test fixtures. For anything larger than 64 KB, use the filesystem directly.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#using-with-ai","level":2,"title":"Using with AI","text":"
Use Natural Language
As with many ctx features, the scratchpad can also be driven with natural language. You don't have to memorize the CLI commands.
The CLI gives you precision; natural language gives you flow.
The /ctx-pad skill maps natural language to ctx pad commands. You don't need to remember the syntax:
| You say | What happens |
| --- | --- |
| \"jot down: check DNS after deploy\" | ctx pad add \"check DNS after deploy\" |
| \"show my scratchpad\" | ctx pad |
| \"delete the third entry\" | ctx pad rm 3 |
| \"update entry 2 to include the new endpoint\" | ctx pad edit 2 \"...\" |
| \"move entry 4 to the top\" | ctx pad mv 4 1 |
| \"import my notes from notes.txt\" | ctx pad import notes.txt |
| \"export all blobs to ./backup\" | ctx pad export ./backup |
| \"merge the scratchpad from the worktree\" | ctx pad merge worktree/.context/scratchpad.enc |
The skill handles the translation. You describe what you want in plain English; the agent picks the right command.
The encryption key lives at ~/.ctx/.ctx.key (outside the project directory). Because all worktrees on the same machine share this path, ctx pad works in worktrees automatically - no special setup needed.
For projects where encryption is unnecessary, disable it in .ctxrc:
scratchpad_encrypt: false\n
In plaintext mode:
Entries are stored in .context/scratchpad.md instead of .enc.
No key is generated or required.
All ctx pad commands work identically.
The file is human-readable and diffable.
When Should You Use Plaintext
Plaintext mode is useful for non-sensitive projects, solo work where encryption adds friction, or when you want scratchpad entries visible in git diff.
","path":["Reference","Scratchpad"],"tags":[]},{"location":"reference/scratchpad/#when-should-you-use-scratchpad-versus-context-files","level":2,"title":"When Should You Use Scratchpad versus Context Files","text":"
| Use case | Where it goes |
| --- | --- |
| Temporary reminders (\"check X after deploy\") | Scratchpad |
| Working values during debugging | Scratchpad |
| Sensitive tokens or API keys (short-term) | Scratchpad |
| Quick notes that don't fit anywhere else | Scratchpad |
| Items that are not directly relevant to the project | Scratchpad |
| Things that you want to keep near, but also hidden | Scratchpad |
| Work items with completion tracking | TASKS.md |
| Trade-offs with rationale | DECISIONS.md |
| Reusable lessons with context/lesson/application | LEARNINGS.md |
| Codified patterns and standards | CONVENTIONS.md |
Rule of thumb:
If it needs structure or will be referenced months later, use a context file (i.e. DECISIONS.md, LEARNINGS.md, TASKS.md).
If it is working memory for the current session or week, use the scratchpad.
Session journals contain sensitive data such as file contents, commands, API keys, internal discussions, error messages with stack traces, and more.
The .context/journal-site/ and .context/journal-obsidian/ directories MUST be .gitignored.
DO NOT host your journal publicly.
DO NOT commit your journal files to version control.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#browse-your-session-history","level":2,"title":"Browse Your Session History","text":"
ctx's Session Journal turns your AI coding sessions into a browsable, searchable, and editable archive.
After using ctx for a couple of sessions, you can generate a journal site with:
# Import all sessions to markdown\nctx journal import --all\n\n# Generate and serve the journal site\nctx journal site --serve\n
Then open http://localhost:8000 to browse your sessions.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#what-you-get","level":2,"title":"What You Get","text":"
The Session Journal gives you:
Browsable history: Navigate through all your AI sessions by date
Full conversations: See every message, tool use, and result
Token usage: Track how many tokens each session consumed
Search: Find sessions by content, project, or date
Dark mode: Easy on the eyes for late-night archaeology
Each session page includes the following sections:
| Section | Content |
| --- | --- |
| Metadata | Date, time, duration, model, project, git branch |
| Summary | Space for your notes (editable) |
| Tool Usage | Which tools were used and how often |
| Conversation | Full transcript with timestamps |
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#the-workflow","level":2,"title":"The Workflow","text":"","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#1-import-sessions","level":3,"title":"1. Import Sessions","text":"
# Import all sessions from current project (only new files)\nctx journal import --all\n\n# Import sessions from all projects\nctx journal import --all --all-projects\n\n# Import a specific session by ID (always writes)\nctx journal import abc123\n\n# Preview what would be imported\nctx journal import --all --dry-run\n\n# Re-import existing (regenerates conversation, preserves YAML frontmatter)\nctx journal import --all --regenerate\n\n# Discard frontmatter during regeneration\nctx journal import --all --regenerate --keep-frontmatter=false -y\n
Imported sessions go to .context/journal/ as editable Markdown files.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#2-generate-the-site","level":3,"title":"2. Generate the Site","text":"
# Generate site structure\nctx journal site\n\n# Generate and build static HTML\nctx journal site --build\n\n# Generate and serve locally\nctx journal site --serve\n\n# Custom output directory\nctx journal site --output ~/my-journal\n
The site is generated in .context/journal-site/ by default.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#3-browse-and-search","level":3,"title":"3. Browse and Search","text":"
Imported sessions are plain Markdown in .context/journal/. You can:
Add summaries: Fill in the ## Summary section
Add notes: Insert your own commentary anywhere
Highlight key moments: Use Markdown formatting
Delete noise: Remove irrelevant tool outputs
After editing, regenerate the site:
ctx journal site --serve\n
Safe by Default
Running ctx journal import --all only imports new sessions. Existing files are skipped entirely (your edits and enrichments are never touched).
Use --regenerate to re-import existing files. Conversation content is regenerated, but YAML frontmatter (topics, type, outcome, etc.) is preserved. You'll be prompted before any existing files are overwritten; add -y to skip the prompt.
Use --keep-frontmatter=false to discard enriched frontmatter during regeneration.
Locked entries (via ctx journal lock) are always skipped, regardless of flags. If you prefer to add locked: true to frontmatter during enrichment, run ctx journal sync to propagate the lock state to .state.json.
Claude Code generates \"suggestion\" sessions for auto-complete prompts. These are separated in the index under a \"Suggestions\" section to keep your main session list focused.
Raw imported sessions contain basic metadata (date, time, project) but lack the structured information needed for effective search, filtering, and analysis. Journal enrichment adds semantic metadata that transforms a flat archive into a searchable knowledge base.
Field Required Description title Yes Descriptive title (not the session slug) date Yes Session date (YYYY-MM-DD) type Yes Session type (see below) outcome Yes How the session ended (see below) topics No Subject areas discussed technologies No Languages, databases, frameworks libraries No Specific packages or libraries used key_files No Important files created or modified
Type values:
Type When to use feature Building new functionality bugfix Fixing broken behavior refactor Restructuring without behavior change exploration Research, learning, experimentation debugging Investigating issues documentation Writing docs, comments, README
Outcome values:
Outcome Meaning completed Goal achieved partial Some progress, work continues abandoned Stopped pursuing this approach blocked Waiting on external dependency","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-ctx-journal-enrich","level":3,"title":"Using /ctx-journal-enrich","text":"
The /ctx-journal-enrich skill automates enrichment by analyzing conversation content and proposing metadata.
Extract decisions, learnings, and tasks mentioned;
Show a diff and ask for confirmation before writing.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#before-and-after","level":3,"title":"Before and After","text":"
Before enrichment:
# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\n[Add your summary of this session]\n\n## Conversation\n...\n
After enrichment:
---\ntitle: \"Add Redis caching to API endpoints\"\ndate: 2026-01-24\ntype: feature\noutcome: completed\ntopics:\n - caching\n - api-performance\ntechnologies:\n - go\n - redis\nkey_files:\n - internal/api/middleware/cache.go\n - internal/cache/redis.go\n---\n\n# twinkly-stirring-kettle\n\n**ID**: abc123-def456\n**Date**: 2026-01-24\n**Time**: 14:30:00\n...\n\n## Summary\n\nImplemented Redis-based caching middleware for frequently accessed API endpoints.\nAdded cache invalidation on writes and configurable TTL per route. Reduced\n the average response time from 200ms to 15ms for cached routes.\n\n## Decisions\n\n* Used Redis over in-memory cache for horizontal scaling\n* Chose per-route TTL configuration over global setting\n\n## Learnings\n\n* Redis WATCH command prevents race conditions during cache invalidation\n\n## Conversation\n...\n
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#enrichment-and-site-generation","level":3,"title":"Enrichment and Site Generation","text":"
The journal site generator uses enriched metadata for better organization:
Titles appear in navigation instead of slugs
Summaries provide context in the index
Topics enable filtering (when using search)
Types allow grouping by work category
Future improvements will add topic-based navigation and outcome filtering to the generated site.
Use ctx journal site when you want a web-browsable archive with search and dark mode. Use ctx journal obsidian when you want graph view, backlinks, and tag-based navigation inside Obsidian. Both use the same enriched source entries: you can generate both.
The complete journal workflow has four stages. Each is idempotent: safe to re-run, and stages skip already-processed entries.
import → enrich → rebuild (site and/or Obsidian)\n
Stage Command / Skill What it does Skips if Import ctx journal import --all Converts session JSONL to Markdown File already exists (safe default) Enrich /ctx-journal-enrich Adds frontmatter, summaries, topics Frontmatter already present Rebuild ctx journal site --build Generates static HTML site -- Obsidian ctx journal obsidian Generates Obsidian vault with wikilinks --
One-command pipeline
/ctx-journal-enrich-all handles import automatically - it detects unimported sessions and imports them before enriching. You only need to run ctx journal site --build afterward.
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#using-make-journal","level":3,"title":"Using make journal","text":"
If your project includes Makefile.ctx (deployed by ctx init), the first and last stages are combined:
make journal # import + rebuild\n
After it runs, it reminds you to enrich in Claude Code:
Next steps (in Claude Code):\n /ctx-journal-enrich-all # imports if needed + adds metadata per entry\n\nThen re-run: make journal\n
Rendering Issues?
If individual entries have rendering problems (broken fences, malformed lists), check the programmatic normalization in the import pipeline. Most cases are handled automatically during ctx journal import.
# Import, browse, then enrich in Claude Code\nmake journal && make journal-serve\n# Then in Claude Code: /ctx-journal-enrich <session>\n
After a productive session:
# Import just that session and add notes\nctx journal import <session-id>\n# Edit .context/journal/<session>.md\n# Regenerate: ctx journal site\n
Searching across all sessions:
# Use grep on the journal directory\ngrep -r \"authentication\" .context/journal/\n
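Enriched frontmatter makes these searches sharper. A small sketch combining two metadata filters (field values taken from the enrichment schema above):

```shell
# List all completed feature sessions by chaining frontmatter filters.
grep -l 'type: feature' .context/journal/*.md \
  | xargs grep -l 'outcome: completed'
```

Because entries are plain Markdown, any text tooling (ripgrep, fzf, awk) composes the same way.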
","path":["Reference","Session Journal"],"tags":[]},{"location":"reference/session-journal/#requirements","level":2,"title":"Requirements","text":"Use pipx for zensical
pip install zensical on the system Python may install a non-functional stub, and hand-managed virtualenvs bring their own maintenance problems; both issues are especially common on macOS.
Use pipx install zensical instead: it creates an isolated environment and handles Python version management automatically.
The journal site uses zensical for static site generation.
Skills are slash commands that run inside your AI assistant (e.g., /ctx-next), as opposed to CLI commands that run in your terminal (e.g., ctx status).
Skills give your agent structured workflows: It knows what to read, what to run, and when to ask. Most wrap one or more ctx CLI commands with opinionated behavior on top.
Skills Are Best Used Conversationally
The beauty of ctx is that it's designed to be intuitive and conversational, allowing you to interact with your AI assistant naturally. That's why you don't have to memorize many of these skills.
See the Prompting Guide for natural-language triggers that invoke these skills conversationally.
However, when you need more precise control, you can invoke the relevant skills directly.
","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#all-skills","level":2,"title":"All Skills","text":"Skill Description Type /ctx-remember Recall project context and present structured readback user-invocable /ctx-wrap-up End-of-session context persistence ceremony user-invocable /ctx-status Show context summary with interpretation user-invocable /ctx-agent Load full context packet for AI consumption user-invocable /ctx-next Suggest 1-3 concrete next actions with rationale user-invocable /ctx-commit Commit with integrated context persistence user-invocable /ctx-reflect Pause and reflect on session progress user-invocable /ctx-add-task Add actionable task to TASKS.md user-invocable /ctx-add-decision Record architectural decision with rationale user-invocable /ctx-add-learning Record gotchas and lessons learned user-invocable /ctx-add-convention Record coding convention for consistency user-invocable /ctx-archive Archive completed tasks from TASKS.md user-invocable /ctx-pad Manage encrypted scratchpad entries user-invocable /ctx-history Browse and import AI session history user-invocable /ctx-journal-enrich Enrich single journal entry with metadata user-invocable /ctx-journal-enrich-all Full journal pipeline: export if needed, then batch-enrich user-invocable /ctx-blog Generate blog post draft from project activity user-invocable /ctx-blog-changelog Generate themed blog post from a commit range user-invocable /ctx-consolidate Consolidate redundant learnings or decisions user-invocable /ctx-drift Detect and fix context drift user-invocable /ctx-prompt Apply, list, and manage saved prompt templates user-invocable /ctx-prompt-audit Analyze prompting patterns for improvement user-invocable /ctx-check-links Audit docs for dead internal and external links user-invocable /ctx-sanitize-permissions Audit Claude Code permissions for security risks user-invocable /ctx-brainstorm Structured design dialogue before implementation user-invocable /ctx-spec 
Scaffold a feature spec from a project template user-invocable /ctx-import-plans Import Claude Code plan files into project specs user-invocable /ctx-implement Execute a plan step-by-step with verification user-invocable /ctx-loop Generate autonomous loop script user-invocable /ctx-worktree Manage git worktrees for parallel agents user-invocable /ctx-architecture Build and maintain architecture maps user-invocable /ctx-remind Manage session-scoped reminders user-invocable /ctx-doctor Troubleshoot ctx behavior with health checks and event analysis user-invocable /ctx-skill-audit Audit skills against Anthropic prompting best practices user-invocable /ctx-skill-creator Create, improve, and test skills user-invocable /ctx-pause Pause context hooks for this session user-invocable /ctx-resume Resume context hooks after a pause user-invocable","path":["Reference","Skills"],"tags":[]},{"location":"reference/skills/#session-lifecycle","level":2,"title":"Session Lifecycle","text":"
Skills for starting, running, and ending a productive session.
Session Ceremonies
Two skills in this group are ceremony skills: /ctx-remember (session start) and /ctx-wrap-up (session end). Unlike other skills that work conversationally, these should be invoked as explicit slash commands for completeness. See Session Ceremonies.
Commit code with integrated context persistence: pre-commit checks, staged files, Co-Authored-By trailer, and a post-commit prompt to capture decisions and learnings.
Wraps: git add, git commit, optionally chains to /ctx-add-decision and /ctx-add-learning
End-of-session context persistence ceremony. Gathers signal from git diff, recent commits, and conversation themes. Proposes candidates (learnings, decisions, conventions, tasks) with complete structured fields for user approval, then persists via ctx add. Offers /ctx-commit if uncommitted changes remain. Ceremony skill: invoke explicitly at session end.
Record a project-specific gotcha, bug, or unexpected behavior. Filters for insights that are searchable, project-specific, and required real effort to discover.
Full journal pipeline: imports unimported sessions first, then batch-enriches all unenriched entries. Filters out short sessions and continuations. Can spawn subagents for large backlogs.
Generate a blog post draft from recent project activity: git history, decisions, learnings, tasks, and journal entries. Requires a narrative arc (problem, approach, outcome).
Consolidate redundant entries in LEARNINGS.md or DECISIONS.md. Groups overlapping entries by keyword similarity, presents candidates, and (with user approval) merges groups into denser combined entries. Originals are archived, not deleted.
Detect and fix context drift: stale paths, missing files, file age staleness, task accumulation, entry count warnings, and constitution violations via ctx drift. Also detects skill drift against canonical templates.
Analyze recent prompting patterns to identify vague or ineffective prompts. Reviews 3-5 journal entries and suggests rewrites with positive observations.
Troubleshoot ctx behavior. Runs structural health checks via ctx doctor, analyzes event log patterns via ctx system events, and presents findings with suggested actions. The CLI provides the structural baseline; the agent adds semantic analysis of event patterns and correlations.
Wraps: ctx doctor --json, ctx system events --json --last 100, ctx remind list, ctx system message list, reads .ctxrc
Graceful degradation: If event_log is not enabled, the skill still works but with reduced capability. It runs structural checks and notes: \"Enable event_log: true in .ctxrc for hook-level diagnostics.\"
See also: Troubleshooting, ctx doctor CLI, ctx system events CLI
Scan all markdown files under docs/ for broken links. Three passes: internal links (verify file targets exist on disk), external links (HTTP HEAD with timeout, report failures as warnings), and image references. Resolves relative paths, strips anchors before checking, and skips localhost/example URLs.
Wraps: Glob + Grep to scan, curl for external checks
Audit .claude/settings.local.json for dangerous permissions across four risk categories: hook bypass (Critical), destructive commands (High), config injection vectors (High), and overly broad patterns (Medium). Reports findings by severity and offers specific fix actions with user confirmation.
Wraps: reads .claude/settings.local.json, edits with confirmation
Transform raw ideas into clear, validated designs through structured dialogue before any implementation begins. Follows a gated process: understand context, clarify the idea (one question at a time), surface non-functional requirements, lock understanding with user confirmation, explore 2-3 design approaches with trade-offs, stress-test the chosen approach, and present the detailed design.
Wraps: reads DECISIONS.md, relevant source files; chains to /ctx-add-decision for recording design choices
Trigger phrases: \"let's brainstorm\", \"design this\", \"think through\", \"before we build\", \"what approach should we take?\"
Scaffold a feature spec from the project template and walk through each section with the user. Covers: problem, approach, happy path, edge cases, validation rules, error handling, interface, implementation, configuration, testing, and non-goals. Spends extra time on edge cases and error handling.
Wraps: reads specs/tpl/spec-template.md, writes to specs/, optionally chains to /ctx-add-task
Trigger phrases: \"spec this out\", \"write a spec\", \"create a spec\", \"design document\"
Import Claude Code plan files (~/.claude/plans/*.md) into the project's specs/ directory. Lists plans with dates and H1 titles, supports filtering (--today, --since, --all), slugifies headings for filenames, and optionally creates tasks referencing each imported spec.
Wraps: reads ~/.claude/plans/*.md, writes to specs/, optionally chains to /ctx-add-task
See also: Importing Claude Code Plans, Tracking Work Across Sessions
Execute a multi-step plan with build and test verification at each step. Loads a plan from a file or conversation context, breaks it into atomic steps, and checkpoints after every 3-5 steps.
Wraps: reads plan file, runs verification commands (go build, go test, etc.)
Generate a ready-to-run shell script for autonomous AI iteration. Supports Claude Code, Aider, and generic tool templates with configurable completion signals.
Manage git worktrees for parallel agent development. Create sibling worktrees on dedicated branches, analyze task blast radius for grouping, and tear down with merge.
Build and maintain architecture maps incrementally. Creates or refreshes ARCHITECTURE.md (succinct project map, loaded at session start) and DETAILED_DESIGN.md (deep per-module reference, consulted on-demand). Coverage is tracked in map-tracking.json so each run extends the map rather than re-analyzing everything.
Manage session-scoped reminders via natural language. Translates user intent (\"remind me to refactor swagger\") into the corresponding ctx remind command. Handles date conversion for --after flags.
Audit one or more skills against Anthropic prompting best practices. Checks seven audit dimensions: positive framing, motivation, phantom references, examples, subagent guards, scope, and descriptions. Reports findings by severity with concrete fix suggestions.
Wraps: reads internal/assets/claude/skills/*/SKILL.md or .claude/skills/*/SKILL.md, references anthropic-best-practices.md
Trigger phrases: \"audit this skill\", \"check skill quality\", \"review the skills\", \"are our skills any good?\"
Create, improve, and test skills. Guides the full lifecycle: capture intent, interview for edge cases, draft the SKILL.md, test with realistic prompts, review results with the user, and iterate. Applies core principles: the agent is already smart (only add what it does not know), the description is the trigger (make it specific and \"pushy\"), and explain the why instead of rigid directives.
Wraps: reads/writes .claude/skills/ and internal/assets/claude/skills/
Trigger phrases: \"create a skill\", \"turn this into a skill\", \"make a slash command\", \"this should be a skill\", \"improve this skill\", \"the skill isn't triggering\"
Pause all context nudge and reminder hooks for the current session. Security hooks still fire. Use for quick investigations or tasks that don't need ceremony overhead.
The ctx plugin ships the skills listed above. Teams can add their own project-specific skills to .claude/skills/ in the project root: These are separate from plugin-shipped skills and are scoped to the project.
Project-specific skills follow the same format and are invoked the same way.
MCP server for tool-agnostic AI integration. Memory bridge connecting Claude Code auto-memory to .context/. Complete CLI restructuring into cmd/ + core/ taxonomy. All user-facing strings externalized to YAML. fatih/color removed; two direct dependencies remain.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v060-the-integration-release","level":3,"title":"v0.6.0: The Integration Release","text":"
Plugin architecture: hooks and skills converted from shell scripts to Go subcommands, shipped as a Claude Code marketplace plugin. Multi-tool hook generation for Cursor, Aider, Copilot, and Windsurf. Webhook notifications with encrypted URL storage.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v030-the-discipline-release","level":3,"title":"v0.3.0: The Discipline Release","text":"
Journal static site generation via zensical. 49-skill audit and fix pass (positive framing, phantom reference removal, scope tightening). Context consolidation skill. golangci-lint v2 migration.
","path":["Reference","Version History"],"tags":[]},{"location":"reference/versions/#v020-the-archaeology-release","level":3,"title":"v0.2.0: The Archaeology Release","text":"
Session journal system: ctx journal import converts Claude Code JSONL transcripts to browsable Markdown. Constants refactor with semantic prefixes (Dir*, File*, Filename*). CRLF handling for Windows compatibility.
Trust model, vulnerability reporting, permission hygiene, and security design principles.
","path":["Security"],"tags":[]},{"location":"security/agent-security/","level":1,"title":"Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#defense-in-depth-securing-ai-agents","level":1,"title":"Defense in Depth: Securing AI Agents","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-problem","level":2,"title":"The Problem","text":"
An unattended AI agent with unrestricted access to your machine is an unattended shell with unrestricted access to your machine.
This is not a theoretical concern. AI coding agents execute shell commands, write files, make network requests, and modify project configuration. When running autonomously (overnight, in a loop, without a human watching), the attack surface is the full capability set of the operating system user account.
The risk is not that the AI is malicious. The risk is that the AI is controllable: it follows instructions from context, and context can be poisoned.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#threat-model","level":2,"title":"Threat Model","text":"","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#how-agents-get-compromised","level":3,"title":"How Agents Get Compromised","text":"
AI agents follow instructions from multiple sources: system prompts, project files, conversation history, and tool outputs. An attacker who can inject content into any of these sources can redirect the agent's behavior.
Vector How it works Prompt injection via dependencies A malicious package includes instructions in its README, changelog, or error output. The agent reads these during installation or debugging and follows them. Prompt injection via fetched content The agent fetches a URL (documentation, API response, Stack Overflow answer) containing embedded instructions. Poisoned project files A contributor adds adversarial instructions to CLAUDE.md, .cursorrules, or .context/ files. The agent loads these at session start. Self-modification between iterations In an autonomous loop, the agent modifies its own configuration files. The next iteration loads the modified config with no human review. Tool output injection A command's output (error messages, log lines, file contents) contains instructions the agent interprets and follows.","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#what-can-a-compromised-agent-do","level":3,"title":"What Can a Compromised Agent Do","text":"
What a compromised agent can do depends entirely on the permissions and access it has:
Access level Potential impact Unrestricted shell Execute any command, install software, modify system files Network access Exfiltrate source code, credentials, or context files to external servers Docker socket Escape container isolation by spawning privileged sibling containers SSH keys Pivot to other machines, push to remote repositories, access production systems Write access to own config Disable its own guardrails for the next iteration","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#the-defense-layers","level":2,"title":"The Defense Layers","text":"
No single layer is sufficient. Each layer catches what the others miss.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-1-soft-instructions-probabilistic","level":3,"title":"Layer 1: Soft Instructions (Probabilistic)","text":"
Markdown files like CONSTITUTION.md and the Agent Playbook tell the agent what to do and what not to do. These are probabilistic: the agent usually follows them, but there is no enforcement mechanism.
What it catches: Most common mistakes. An agent that has been told \"never delete production data\" will usually not delete production data.
What it misses: Prompt injection. A sufficiently crafted injection can override soft instructions. Long context windows dilute attention on rules stated early. Edge cases where instructions are ambiguous.
Verdict: Necessary but not sufficient. Good for the common case. Do not rely on it for security boundaries.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-2-application-controls-deterministic-at-runtime-mutable-across-iterations","level":3,"title":"Layer 2: Application Controls (Deterministic at Runtime, Mutable Across Iterations)","text":"
AI tool runtimes (Claude Code, Cursor, etc.) provide permission systems: tool allowlists, command restrictions, confirmation prompts.
For Claude Code, ctx init writes both an allowlist and an explicit deny list into .claude/settings.local.json. The golden images live in internal/assets/permissions/:
Allowlist (allow.txt): only these tools run without confirmation:
Bash(ctx:*)\nSkill(ctx-add-convention)\nSkill(ctx-add-decision)\n... # all bundled ctx-* skills\n
Deny list (deny.txt): these are blocked even if the agent requests them:
What it catches: The agent cannot run commands outside the allowlist, and the deny list blocks dangerous operations even if a future allowlist change were to widen access. If rm, curl, sudo, or docker are not allowed and sudo/curl/wget are explicitly denied, the agent cannot invoke them regardless of what any prompt says.
What it misses: The agent can modify the allowlist itself. In an autonomous loop, if the agent writes to .claude/settings.local.json and the next iteration loads the modified config, the protection is effectively lost. The application enforces the rules, but it reads those rules from files the agent can write.
Verdict: Strong first layer. Must be combined with self-modification prevention (Layer 3).
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-3-os-level-isolation-deterministic-and-unbypassable","level":3,"title":"Layer 3: OS-Level Isolation (Deterministic and Unbypassable)","text":"
The operating system enforces access controls that no application-level trick can override. An unprivileged user cannot read files owned by root. A process without CAP_NET_RAW cannot open raw sockets. These are kernel boundaries.
Control Purpose Dedicated user account No sudo, no privileged group membership (docker, wheel, adm). The agent cannot escalate privileges. Filesystem permissions Project directory writable; everything else read-only or inaccessible. Agent cannot reach other projects, home directories, or system config. Immutable config files CLAUDE.md, .claude/settings.local.json, and .context/CONSTITUTION.md owned by a different user or marked immutable (chattr +i on Linux). The agent cannot modify its own guardrails.
What it catches: Privilege escalation, self-modification, lateral movement to other projects or users.
What it misses: Actions within the agent's legitimate scope. If the agent has write access to source code (which it needs to do its job), it can introduce vulnerabilities in the code itself.
Verdict: Essential. This is the layer that makes the other layers trustworthy.
OS-level isolation does not make the agent safe; it makes the other layers meaningful.
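The immutable-config control from the table above can be applied with chattr on Linux filesystems that support file attributes. The commands require root, and the exact file list is a sketch:

```shell
# Freeze the agent's guardrail files: not even the owning user can modify,
# rename, or delete them until root clears the flag.
sudo chattr +i CLAUDE.md .context/CONSTITUTION.md .claude/settings.local.json
lsattr CLAUDE.md    # an 'i' among the flags confirms immutability

# When you legitimately need to edit the guardrails:
sudo chattr -i CLAUDE.md .context/CONSTITUTION.md .claude/settings.local.json
```

On macOS the analogous control is chflags uchg; the alternative from the table (files owned by a different user) works on any POSIX system.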
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-4-network-controls","level":3,"title":"Layer 4: Network Controls","text":"
An agent that cannot reach the internet cannot exfiltrate data. It also cannot ingest new instructions mid-loop from external documents, API responses, or hostile content.
Scenario Recommended control Agent does not need the internet --network=none (container) or outbound firewall drop-all Agent needs to fetch dependencies Allow specific registries (npmjs.com, proxy.golang.org, pypi.org) via firewall rules. Block everything else. Agent needs API access Allow specific API endpoints only. Use an HTTP proxy with allowlisting.
What it catches: Data exfiltration, phone-home payloads, downloading additional tools, and instruction injection via fetched content.
What it misses: Nothing, if the agent genuinely does not need the network. The tradeoff is that many real workloads need dependency resolution, so a full airgap requires pre-populated caches.
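The "fetch dependencies" row might look like this iptables sketch (run as root; the registry hostnames are examples, adjust to your toolchain):

```shell
# Allow loopback and DNS, permit HTTPS to known registries, drop the rest.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -d proxy.golang.org -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -d registry.npmjs.org -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -d pypi.org -j ACCEPT
iptables -A OUTPUT -j DROP
```

Hostname rules resolve to IP addresses once, at insert time; registries behind rotating CDNs are better handled by the HTTP proxy with allowlisting mentioned above.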
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#layer-5-infrastructure-isolation","level":3,"title":"Layer 5: Infrastructure Isolation","text":"
The strongest boundary is a separate machine (or something that behaves like one).
The moment you stop arguing about prompts and start arguing about kernels, you are finally doing security.
Critical: never mount the Docker socket (/var/run/docker.sock).
An agent with socket access can spawn sibling containers with full host access, effectively escaping the sandbox.
Use rootless Docker or Podman to eliminate this escalation path.
Virtual machines: The strongest isolation. The guest kernel has no visibility into the host OS. No shared folders, no filesystem passthrough, no SSH keys to other machines.
Resource limits: CPU, memory, and disk quotas prevent a runaway agent from consuming all resources. Use ulimit, cgroup limits, or container resource constraints.
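For the ulimit route, a subshell keeps the caps scoped to the agent and its children (bash; the values are illustrative):

```shell
# Limits set in the subshell are inherited by the agent and die with it.
# Bash sets both the soft and hard limit by default, so an unprivileged
# child process cannot raise them again.
(
  ulimit -v 4194304   # virtual memory cap, in KB (~4 GB)
  ulimit -t 3600      # cumulative CPU time cap, in seconds
  echo "caps: vmem=$(ulimit -v)KB cpu=$(ulimit -t)s"
  # launch the agent command here
)
```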
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#putting-it-all-together","level":2,"title":"Putting It All Together","text":"
A defense-in-depth setup for overnight autonomous runs:
Layer Implementation Stops Soft instructions CONSTITUTION.md with \"never delete tests\", \"always run tests before committing\" Common mistakes (probabilistic) Application allowlist .claude/settings.local.json with explicit tool permissions Unauthorized commands (deterministic within runtime) Immutable config chattr +i on CLAUDE.md, .claude/, CONSTITUTION.md Self-modification between iterations Unprivileged user Dedicated user, no sudo, no docker group Privilege escalation Container --cap-drop=ALL --network=none, rootless, no socket mount Host escape, network exfiltration Resource limits --memory=4g --cpus=2, disk quotas Resource exhaustion
Each layer is straightforward: The strength is in the combination.
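The container row of the table above, spelled out as a single docker run invocation (image name and mount path are placeholders):

```shell
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network=none \
  --memory=4g --cpus=2 \
  --user 1000:1000 \
  -v "$PWD":/workspace -w /workspace \
  agent-image   # placeholder for your agent image
  # deliberately absent: any mount of /var/run/docker.sock
```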
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#common-mistakes","level":2,"title":"Common Mistakes","text":"
\"I'll just use --dangerously-skip-permissions\": This disables Layer 2 entirely. Without Layers 3-5, you have no protection at all. Only use this flag inside a properly isolated container or VM.
\"The agent is sandboxed in Docker\": A Docker container with the Docker socket mounted, running as root, with --privileged, and full network access is not sandboxed. It is a root shell with extra steps.
\"CONSTITUTION.md says not to do that\": Markdown is a suggestion. It works most of the time. It is not a security boundary. Do not use it as one.
\"I reviewed the CLAUDE.md, it's fine\": The agent can modify CLAUDE.md during iteration N. Iteration N+1 loads the modified version. Unless the file is immutable, your review is stale.
\"The agent only has access to this one project\": Does the project directory contain .env files, SSH keys, API tokens, or credentials? Does it have a .git/config with push access to a remote? Filesystem isolation means isolating what is in the directory too.
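Before the first run, a quick pre-flight sweep catches the obvious cases; the patterns here are illustrative, not exhaustive:

```shell
# Flag obvious credential material inside the project directory.
grep -rIlE --exclude-dir=.git \
  'BEGIN (RSA|OPENSSH|EC) PRIVATE KEY|AKIA[0-9A-Z]{16}' . \
  && echo 'WARNING: possible key material in the files above'

# Surface environment files that should not be readable by the agent.
find . -name '.env*' -not -path './.git/*' | grep -q . \
  && echo 'WARNING: .env files present'
```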
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-security-considerations","level":2,"title":"Team Security Considerations","text":"
When multiple developers share a .context/ directory, security considerations extend beyond single-agent hardening.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#code-review-for-context-files","level":3,"title":"Code Review for Context Files","text":"
Treat .context/ changes like code changes. Context files influence agent behavior (a modified CONSTITUTION.md or CONVENTIONS.md changes what every agent on the team will do next session). Review them in PRs with the same scrutiny you apply to production code. Watch for:
New decisions that contradict existing ones without acknowledging it
Learnings that encode incorrect assumptions
Task additions that bypass the team's prioritization process
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#gitignore-patterns","level":3,"title":"Gitignore Patterns","text":"
ctx init configures .gitignore automatically, but verify these patterns are in place:
Team decision: scratchpad.enc is encrypted and safe to commit if the team shares scratchpad state; add it to .gitignore if scratchpads are personal
Never committed: .env, credentials, API keys (enforced by drift secret detection)
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#multi-developer-context-sharing","level":3,"title":"Multi-Developer Context Sharing","text":"
CONSTITUTION.md is the shared contract. All team members and their agents inherit it. Changes require team consensus, not unilateral edits.
When multiple agents write to the same context files concurrently (e.g., two developers adding learnings simultaneously), git merge conflicts are expected. Resolution is typically additive: accept both additions. Destructive resolution (dropping one side) loses context.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#team-conventions-for-context-management","level":3,"title":"Team Conventions for Context Management","text":"
Establish and document:
Who reviews context changes: Same reviewers as code, or a designated context owner?
How to resolve conflicting decisions: If two sessions record contradictory decisions, which wins? Default: the later one must explicitly supersede the earlier one with rationale.
Frequency of context maintenance: Weekly ctx drift checks, monthly consolidation passes, archival after each milestone.
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#checklist","level":2,"title":"Checklist","text":"
Before running an unattended AI agent:
Agent runs as a dedicated unprivileged user (no sudo, no docker group)
Agent's config files are immutable or owned by a different user
Permission allowlist restricts tools to the project's toolchain
Container drops all capabilities (--cap-drop=ALL)
Docker socket is NOT mounted
Network is disabled or restricted to specific domains
Resource limits are set (memory, CPU, disk)
No SSH keys, API tokens, or credentials are accessible to the agent
Project directory does not contain .env or secrets files
Iteration cap is set (--max-iterations)
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/agent-security/#further-reading","level":2,"title":"Further Reading","text":"
Running an Unattended AI Agent: the ctx recipe for autonomous loops, including step-by-step permissions and isolation setup
Security: ctx's own trust model and vulnerability reporting
Autonomous Loops: full documentation of the loop pattern, prompt templates, and troubleshooting
","path":["Security","Securing AI Agents"],"tags":[]},{"location":"security/reporting/","level":1,"title":"Security Policy","text":"","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#reporting-vulnerabilities","level":2,"title":"Reporting Vulnerabilities","text":"
At ctx we take security very seriously.
If you discover a security vulnerability in ctx, please report it responsibly.
Do NOT open a public issue for security vulnerabilities.
If your report contains sensitive details (proof-of-concept exploits, credentials, or internal system information), you can encrypt your message with our PGP key:
In-repo: SECURITY_KEY.asc
Keybase: keybase.io/alekhinejose
# Import the key\ngpg --import SECURITY_KEY.asc\n\n# Encrypt your report\ngpg --armor --encrypt --recipient security@ctx.ist report.txt\n
Encryption is optional. Unencrypted reports to security@ctx.ist or via GitHub Private Reporting are perfectly fine.
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#what-to-include","level":3,"title":"What to Include","text":"
We appreciate responsible disclosure and will acknowledge security researchers who report valid vulnerabilities (unless they prefer to remain anonymous).
ctx is a volunteer-maintained open source project.
The timelines below are guidelines, not guarantees, and depend on contributor availability.
We will address security reports on a best-effort basis and prioritize them by severity.
| Stage | Timeframe |
| --- | --- |
| Acknowledgment | Within 48 hours |
| Initial assessment | Within 7 days |
| Resolution target | Within 30 days (depending on severity) |
","path":["Security","Security Policy"],"tags":[]},{"location":"security/reporting/#trust-model","level":2,"title":"Trust Model","text":"
ctx operates within a single trust boundary: the local filesystem.
The person who authors .context/ files is the same person who runs the agent that reads them. There is no remote input, no shared state, and no server component.
This means:
ctx does not sanitize context files for prompt injection. This is a deliberate design choice, not an oversight. The files are authored by the developer who owns the machine: Sanitizing their own instructions back to them would be counterproductive.
If you place adversarial instructions in your own .context/ files, your agent will follow them. This is expected behavior. You control the context; the agent trusts it.
Shared Repositories
In shared repositories, .context/ files should be reviewed in code review (the same way you would review CI/CD config or Makefiles). A malicious contributor could add harmful instructions to CONSTITUTION.md or TASKS.md.
No secrets in context: The constitution explicitly forbids storing secrets, tokens, API keys, or credentials in .context/ files
Local only: ctx runs entirely locally with no external network calls
No code execution: ctx reads and writes Markdown files only; it does not execute arbitrary code
Git-tracked: Core context files are meant to be committed, so they should never contain sensitive data. Exception: sessions/ and journal/ contain raw conversation data and should be gitignored
Claude Code evaluates permissions in deny → ask → allow order. ctx init automatically populates permissions.deny with rules that block dangerous operations before the allow list is ever consulted.
Hook state files (throttle markers, prompt counters, pause markers) are stored in .context/state/, which is project-scoped and gitignored. State files are automatically managed by the hooks that create them; no manual cleanup is needed.
For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (- [ ]) syntax for tracking.
Goal: Link every git commit back to the decisions, tasks, learnings, and sessions that motivated it via ctx trace.
Architecture: New internal/trace package provides the core logic (pending context recording, three-source detection, history/override storage, reference resolution). A new internal/cli/trace package wires it into the Cobra CLI as ctx trace. Existing commands (ctx add, ctx complete) gain a one-line trace.Record() side-effect. A ctx trace hook subcommand generates a prepare-commit-msg shell script that delegates to ctx trace collect.
In internal/config/dir/dir.go, add the Trace constant:
// Trace is the subdirectory for commit context tracing within .context/.\nTrace = \"trace\"\n
Add it after the State constant in the same const block.
Step 2: Create trace package doc
Create internal/trace/doc.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\n// Package trace provides commit context tracing — linking git commits\n// back to the decisions, tasks, learnings, and sessions that motivated them.\npackage trace\n
Step 3: Create shared types
Create internal/trace/types.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage trace\n\nimport \"time\"\n\n// PendingEntry is a single pending context reference accumulated\n// between commits.\ntype PendingEntry struct {\n Ref string `json:\"ref\"`\n Timestamp time.Time `json:\"timestamp\"`\n}\n\n// HistoryEntry is a permanent record of a commit's context references.\ntype HistoryEntry struct {\n Commit string `json:\"commit\"`\n Refs []string `json:\"refs\"`\n Message string `json:\"message\"`\n Timestamp time.Time `json:\"timestamp\"`\n}\n\n// OverrideEntry is a manual context tag added to an existing commit.\ntype OverrideEntry struct {\n Commit string `json:\"commit\"`\n Refs []string `json:\"refs\"`\n Timestamp time.Time `json:\"timestamp\"`\n}\n\n// ResolvedRef holds a resolved context reference with its display text.\ntype ResolvedRef struct {\n Raw string // Original ref string (e.g., \"decision:12\")\n Type string // \"decision\", \"learning\", \"task\", \"convention\", \"session\", \"note\"\n Number int // Entry number (0 for session/note types)\n Title string // Resolved title or content\n Detail string // Additional detail (rationale, status, etc.)\n Found bool // Whether the reference was resolved\n}\n
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run TestRecord -v Expected: FAIL — functions not defined
Step 6: Implement pending.go
Create internal/trace/pending.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage trace\n\nimport (\n \"bufio\"\n \"encoding/json\"\n \"os\"\n \"path/filepath\"\n \"time\"\n\n \"github.com/ActiveMemory/ctx/internal/config/fs\"\n)\n\nconst pendingFile = \"pending-context.jsonl\"\n\n// Record appends a context reference to the pending context file.\n// This is best-effort: errors are returned but callers should treat\n// them as non-fatal.\n//\n// Parameters:\n// - ref: Context reference string (e.g., \"decision:12\", \"task:3\")\n// - stateDir: Path to the state directory (.context/state/)\n//\n// Returns:\n// - error: Non-nil if the file cannot be opened or written\nfunc Record(ref, stateDir string) error {\n if err := os.MkdirAll(stateDir, fs.PermRestrictedDir); err != nil {\n return err\n }\n\n p := filepath.Join(stateDir, pendingFile)\n\n f, err := os.OpenFile(p, os.O_APPEND|os.O_CREATE|os.O_WRONLY, fs.PermFile)\n if err != nil {\n return err\n }\n defer f.Close()\n\n entry := PendingEntry{Ref: ref, Timestamp: time.Now().UTC()}\n return json.NewEncoder(f).Encode(entry)\n}\n\n// ReadPending reads all pending context entries from the state directory.\n// Returns an empty slice if the file does not exist.\n//\n// Parameters:\n// - stateDir: Path to the state directory (.context/state/)\n//\n// Returns:\n// - []PendingEntry: Parsed entries\n// - error: Non-nil on read or parse failure\nfunc ReadPending(stateDir string) ([]PendingEntry, error) {\n p := filepath.Join(stateDir, pendingFile)\n\n f, err := os.Open(filepath.Clean(p))\n if err != nil {\n if os.IsNotExist(err) {\n return nil, nil\n }\n return nil, err\n }\n defer f.Close()\n\n var entries []PendingEntry\n scanner := bufio.NewScanner(f)\n for scanner.Scan() {\n line := scanner.Text()\n if line == \"\" {\n continue\n }\n var entry PendingEntry\n if jsonErr := json.Unmarshal([]byte(line), &entry); jsonErr != nil {\n continue 
// skip malformed lines\n }\n entries = append(entries, entry)\n }\n\n return entries, scanner.Err()\n}\n\n// TruncatePending clears the pending context file after a commit.\n//\n// Parameters:\n// - stateDir: Path to the state directory (.context/state/)\n//\n// Returns:\n// - error: Non-nil if truncation fails\nfunc TruncatePending(stateDir string) error {\n p := filepath.Join(stateDir, pendingFile)\n return os.Truncate(p, 0)\n}\n
Step 7: Run tests to verify they pass
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -v Expected: All PASS
","path":["Commit Context Tracing Implementation Plan"],"tags":[]},{"location":"superpowers/plans/2026-03-31-commit-context-tracing/#task-2-history-and-override-storage","level":2,"title":"Task 2: History and Override Storage","text":"
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run TestParseAdded -v Expected: FAIL — functions not defined
Step 3: Implement staged.go
Create internal/trace/staged.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage trace\n\nimport (\n \"fmt\"\n \"os/exec\"\n \"path/filepath\"\n \"strings\"\n\n \"github.com/ActiveMemory/ctx/internal/config/ctx\"\n \"github.com/ActiveMemory/ctx/internal/config/regex\"\n)\n\n// StagedRefs detects context references from staged .context/ files\n// by examining git diff output.\n//\n// Parameters:\n// - contextDir: Path to the .context/ directory\n//\n// Returns:\n// - []string: Detected references (e.g., \"decision:1\", \"task:3\")\nfunc StagedRefs(contextDir string) []string {\n var refs []string\n\n files := []struct {\n name string\n entryType string\n }{\n {ctx.Decision, \"decision\"},\n {ctx.Learning, \"learning\"},\n {ctx.Convention, \"convention\"},\n }\n\n for _, f := range files {\n diff := stagedDiff(filepath.Join(contextDir, f.name))\n if diff == \"\" {\n continue\n }\n refs = append(refs, ParseAddedEntries(diff, f.entryType)...)\n }\n\n // Check TASKS.md for newly completed tasks\n taskDiff := stagedDiff(filepath.Join(contextDir, ctx.Task))\n if taskDiff != \"\" {\n refs = append(refs, ParseCompletedTasks(taskDiff)...)\n }\n\n return refs\n}\n\n// ParseAddedEntries extracts entry numbers from added lines in a diff.\n// Only lines prefixed with \"+\" that match the entry header pattern are counted.\n//\n// Parameters:\n// - diff: Git diff output\n// - entryType: The reference type prefix (\"decision\", \"learning\", \"convention\")\n//\n// Returns:\n// - []string: Refs like \"decision:1\", \"decision:2\"\nfunc ParseAddedEntries(diff, entryType string) []string {\n var refs []string\n count := 0\n\n for _, line := range strings.Split(diff, \"\\n\") {\n if !strings.HasPrefix(line, \"+\") {\n continue\n }\n // Remove the leading \"+\" to match the regex\n content := line[1:]\n if regex.EntryHeader.MatchString(content) {\n count++\n refs = append(refs, fmt.Sprintf(\"%s:%d\", 
entryType, count))\n }\n }\n\n return refs\n}\n\n// ParseCompletedTasks extracts task refs from newly completed tasks\n// in a diff. Lines that are added (\"+\") and contain \"[x]\" are counted.\n//\n// Parameters:\n// - diff: Git diff output for TASKS.md\n//\n// Returns:\n// - []string: Refs like \"task:1\", \"task:2\"\nfunc ParseCompletedTasks(diff string) []string {\n var refs []string\n count := 0\n\n for _, line := range strings.Split(diff, \"\\n\") {\n if !strings.HasPrefix(line, \"+\") {\n continue\n }\n content := line[1:]\n match := regex.Task.FindStringSubmatch(content)\n if match != nil && (len(match) > 2 && match[2] == \"x\") {\n count++\n refs = append(refs, fmt.Sprintf(\"task:%d\", count))\n }\n }\n\n return refs\n}\n\n// stagedDiff returns the staged diff for a specific file.\n// Returns empty string if the file is not staged or git is not available.\nfunc stagedDiff(filePath string) string {\n cmd := exec.Command(\"git\", \"diff\", \"--cached\", \"--\", filePath)\n out, err := cmd.Output()\n if err != nil {\n return \"\"\n }\n return string(out)\n}\n
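The diff-counting logic in ParseAddedEntries can be seen end-to-end in a self-contained sketch; note that entryHeader below is a stand-in for the internal regex.EntryHeader pattern, which may differ:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// entryHeader is an assumed stand-in for the internal
// regex.EntryHeader pattern; the real pattern may differ.
var entryHeader = regexp.MustCompile(`^### `)

// parseAdded counts "+"-prefixed diff lines whose content matches
// the entry header pattern, numbering them in order of appearance.
func parseAdded(diff, entryType string) []string {
	var refs []string
	count := 0
	for _, line := range strings.Split(diff, "\n") {
		if !strings.HasPrefix(line, "+") {
			continue
		}
		if entryHeader.MatchString(line[1:]) {
			count++
			refs = append(refs, fmt.Sprintf("%s:%d", entryType, count))
		}
	}
	return refs
}

func main() {
	diff := "+### Use PostgreSQL\n unchanged context\n+### Adopt JSONL\n"
	fmt.Println(parseAdded(diff, "decision")) // → [decision:1 decision:2]
}
```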
Step 4: Run tests to verify they pass
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run \"TestParseAdded|TestParseCompleted|TestParseNo\" -v Expected: All PASS
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage trace\n\nimport (\n \"os\"\n \"path/filepath\"\n \"testing\"\n)\n\nfunc TestWorkingRefsInProgressTasks(t *testing.T) {\n tmpDir := t.TempDir()\n contextDir := tmpDir\n\n tasksContent := `# Tasks\n\n- [ ] Implement auth handler\n- [x] Write unit tests\n- [ ] Add rate limiting\n`\n if err := os.WriteFile(\n filepath.Join(contextDir, \"TASKS.md\"),\n []byte(tasksContent), 0644,\n ); err != nil {\n t.Fatal(err)\n }\n\n refs := WorkingRefs(contextDir)\n\n // Should find 2 in-progress tasks: task:1 and task:2\n found := map[string]bool{}\n for _, r := range refs {\n found[r] = true\n }\n\n if !found[\"task:1\"] {\n t.Error(\"expected task:1 for 'Implement auth handler'\")\n }\n if !found[\"task:2\"] {\n t.Error(\"expected task:2 for 'Add rate limiting'\")\n }\n if found[\"task:3\"] {\n t.Error(\"should not find task:3 — completed tasks are excluded\")\n }\n}\n\nfunc TestWorkingRefsSessionEnv(t *testing.T) {\n tmpDir := t.TempDir()\n contextDir := tmpDir\n\n // Write empty TASKS.md\n if err := os.WriteFile(\n filepath.Join(contextDir, \"TASKS.md\"),\n []byte(\"# Tasks\\n\"), 0644,\n ); err != nil {\n t.Fatal(err)\n }\n\n t.Setenv(\"CTX_SESSION_ID\", \"test-session-42\")\n\n refs := WorkingRefs(contextDir)\n\n found := false\n for _, r := range refs {\n if r == \"session:test-session-42\" {\n found = true\n }\n }\n if !found {\n t.Error(\"expected session:test-session-42 from env\")\n }\n}\n\nfunc TestWorkingRefsNoTasksFile(t *testing.T) {\n tmpDir := t.TempDir()\n refs := WorkingRefs(tmpDir)\n\n // No TASKS.md should not panic, just return empty or session-only\n _ = refs\n}\n
Step 2: Run test to verify it fails
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run TestWorkingRefs -v Expected: FAIL — function not defined
Step 3: Implement working.go
Create internal/trace/working.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage trace\n\nimport (\n \"fmt\"\n \"os\"\n \"path/filepath\"\n \"strings\"\n\n ctxCfg \"github.com/ActiveMemory/ctx/internal/config/ctx\"\n \"github.com/ActiveMemory/ctx/internal/config/regex\"\n \"github.com/ActiveMemory/ctx/internal/task\"\n)\n\nconst envSessionID = \"CTX_SESSION_ID\"\n\n// WorkingRefs detects context references from the current working state.\n// This includes in-progress tasks and the active AI session.\n//\n// Parameters:\n// - contextDir: Path to the .context/ directory\n//\n// Returns:\n// - []string: Detected references\nfunc WorkingRefs(contextDir string) []string {\n var refs []string\n\n refs = append(refs, inProgressTaskRefs(contextDir)...)\n\n if sessionID := os.Getenv(envSessionID); sessionID != \"\" {\n refs = append(refs, \"session:\"+sessionID)\n }\n\n return refs\n}\n\n// inProgressTaskRefs reads TASKS.md and returns refs for in-progress\n// (pending, non-subtask) tasks.\nfunc inProgressTaskRefs(contextDir string) []string {\n tasksPath := filepath.Join(contextDir, ctxCfg.Task)\n content, err := os.ReadFile(filepath.Clean(tasksPath))\n if err != nil {\n return nil\n }\n\n var refs []string\n pendingCount := 0\n lines := strings.Split(string(content), \"\\n\")\n\n for _, line := range lines {\n match := regex.Task.FindStringSubmatch(line)\n if match == nil {\n continue\n }\n if task.Sub(match) {\n continue // skip subtasks\n }\n if task.Pending(match) {\n pendingCount++\n refs = append(refs, fmt.Sprintf(\"task:%d\", pendingCount))\n }\n }\n\n return refs\n}\n
Step 4: Run tests to verify they pass
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run TestWorkingRefs -v Expected: All PASS
Step 5: Commit
git add internal/trace/working.go internal/trace/working_test.go\ngit commit -m \"feat(trace): add working state detection for in-progress tasks and sessions\"\n
","path":["Commit Context Tracing Implementation Plan"],"tags":[]},{"location":"superpowers/plans/2026-03-31-commit-context-tracing/#task-5-collect-merge-and-deduplicate-from-all-sources","level":2,"title":"Task 5: Collect — Merge and Deduplicate from All Sources","text":"
Step 1: Understand entry numbering for add command
The ctx add command does not return an entry number. Since decision entries are prepended (newest first), a newly added decision is always entry #1 in the file; for appended entry types, determining the new entry's number requires counting entries after the write.
Step 2: Modify add command to record pending context
In internal/cli/add/cmd/root/run.go, add the trace recording after the successful write. The entry number is determined by counting entries in the file after write, and the new entry is always #1 (prepended for decisions/learnings) or the last entry (appended for tasks/conventions).
After writeAdd.Added(cmd, fName) and before return nil, add:
// Best-effort: record pending context for commit tracing.\n // Decisions and learnings are prepended (newest = #1).\n // Tasks and conventions are appended (newest = last).\n if fType == cfgEntry.Decision || fType == cfgEntry.Learning ||\n fType == cfgEntry.Convention {\n _ = trace.Record(fType+\":1\", state.Dir())\n }\n
Note: We record as entry #1 for prepended types because new entries are always inserted at the top. For tasks, recording happens in the complete command instead, since tasks are tracked by completion, not creation.
Step 3: Modify complete command to record pending context
In internal/cli/task/cmd/complete/run.go, add trace recording after a successful completion.
Step 2: Add trace command descriptions to commands.yaml
In internal/assets/commands/commands.yaml, add the trace command descriptions:
trace:\n long: |-\n Show the context behind git commits.\n\n ctx trace links commits back to the decisions, tasks, learnings,\n and sessions that motivated them.\n\n Usage:\n ctx trace <commit> Show context for a specific commit\n ctx trace --last 5 Show context for last N commits\n ctx trace file <path> Show context trail for a file\n ctx trace tag <commit> Manually tag a commit with context\n ctx trace collect Collect context refs (used by hook)\n ctx trace hook enable Install prepare-commit-msg hook\n\n Examples:\n ctx trace abc123\n ctx trace --last 10\n ctx trace file src/auth.go\n ctx trace tag HEAD --note \"Hotfix for production outage\"\n short: Show context behind git commits\ntrace.file:\n long: |-\n Show the context trail for a file.\n\n Combines git log with trailer resolution to show what decisions,\n tasks, and learnings motivated changes to a specific file.\n\n Supports optional line range with colon syntax:\n ctx trace file src/auth.go:42-60\n\n Examples:\n ctx trace file src/auth.go\n ctx trace file src/auth.go:42-60\n short: Show context trail for a file\ntrace.tag:\n long: |-\n Manually tag a commit with context.\n\n For commits made without the hook, or to add extra context\n after the fact. Tags are stored in .context/trace/overrides.jsonl\n since git trailers cannot be modified without rewriting history.\n\n Examples:\n ctx trace tag HEAD --note \"Hotfix for production outage\"\n ctx trace tag abc123 --note \"Part of Q1 compliance initiative\"\n short: Manually tag a commit with context\ntrace.collect:\n long: |-\n Collect context references from all sources.\n\n Gathers pending context, staged file analysis, and working state,\n then outputs a ctx-context trailer line. Used by the\n prepare-commit-msg hook.\n\n This command is not typically called directly.\n short: Collect context refs for hook\ntrace.hook:\n long: |-\n Enable or disable the prepare-commit-msg hook for automatic\n context tracing. 
The hook injects ctx-context trailers into\n commit messages.\n\n Usage:\n ctx trace hook enable Install the hook\n ctx trace hook disable Remove the hook\n\n Examples:\n ctx trace hook enable\n ctx trace hook disable\n short: Manage prepare-commit-msg hook\n
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\n// Package trace provides error constructors for trace operations.\npackage trace\n
Create internal/err/trace/trace.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage trace\n\nimport (\n \"errors\"\n \"fmt\"\n)\n\n// CommitNotFound returns an error when a commit hash cannot be found.\n//\n// Parameters:\n// - hash: The commit hash that was not found\n//\n// Returns:\n// - error: Descriptive error\nfunc CommitNotFound(hash string) error {\n return fmt.Errorf(\"commit not found: %s\", hash)\n}\n\n// NotInGitRepo returns an error when the command is run outside a git repo.\n//\n// Returns:\n// - error: Descriptive error\nfunc NotInGitRepo() error {\n return errors.New(\"not in a git repository\")\n}\n\n// NoteRequired returns an error when --note flag is missing.\n//\n// Returns:\n// - error: Descriptive error\nfunc NoteRequired() error {\n return errors.New(\"--note is required\")\n}\n
The prepare-commit-msg hook injects the trailer before the commit is finalized. But we also need to record the commit in history.jsonl after the commit succeeds. This is done by adding a post-commit behavior to the collect flow.
The hook passes the commit message file path, but the commit hash does not exist yet at prepare-commit-msg time, so the history entry cannot be written from the same hook. Rather than recording speculatively with a temporary marker, the design keeps the trailer as the canonical data source: the hook injects the trailer, and ctx trace reads trailers at query time. history.jsonl is a performance optimization layered on top; it is written only by the ctx trace collect --record <hash> subcommand, which a post-commit hook can invoke.
Update internal/cli/trace/cmd/collect/cmd.go:
// / ctx: https://ctx.ist\n// ,'`./ do you remember?\n// `.,'\\\n// \\ Copyright 2026-present Context contributors.\n// SPDX-License-Identifier: Apache-2.0\n\npackage collect\n\nimport (\n \"github.com/spf13/cobra\"\n\n \"github.com/ActiveMemory/ctx/internal/assets/read/desc\"\n \"github.com/ActiveMemory/ctx/internal/config/embed/cmd\"\n)\n\n// Cmd returns the trace collect subcommand.\n//\n// Returns:\n// - *cobra.Command: Configured trace collect command\nfunc Cmd() *cobra.Command {\n var record string\n\n short, long := desc.Command(cmd.DescKeyTraceCollect)\n\n c := &cobra.Command{\n Use: \"collect\",\n Short: short,\n Long: long,\n Hidden: true,\n RunE: func(cmd *cobra.Command, args []string) error {\n if record != \"\" {\n return RecordCommit(cmd, record)\n }\n return Run(cmd)\n },\n }\n\n c.Flags().StringVar(&record, \"record\", \"\", \"Record history entry for commit hash (called from post-commit)\")\n\n return c\n}\n
Run: cd /Users/parlakisik/projects/github/ctx && make test Expected: All PASS
Step 2: Run linter
Run: cd /Users/parlakisik/projects/github/ctx && make lint Expected: No errors
Step 3: Run build
Run: cd /Users/parlakisik/projects/github/ctx && make build Expected: BUILD SUCCESS
Step 4: Manual smoke test
Run these commands to verify the feature works end-to-end:
# Build and install\ncd /Users/parlakisik/projects/github/ctx && go build -o /tmp/ctx ./cmd/ctx/\n\n# Test trace --last (should show existing commits with no context)\n/tmp/ctx trace --last 5\n\n# Test trace tag\n/tmp/ctx trace tag HEAD --note \"Test: commit context tracing feature\"\n\n# Verify tag was written\ncat .context/trace/overrides.jsonl\n\n# Test trace on HEAD (should show the manual tag)\n/tmp/ctx trace $(git rev-parse --short HEAD)\n\n# Test hook enable (don't actually enable in this repo)\n# /tmp/ctx trace hook enable\n
Step 5: Final commit (if any fixes needed)
git add -A\ngit commit -m \"fix(trace): final adjustments from smoke testing\"\n
","path":["Commit Context Tracing Implementation Plan"],"tags":[]},{"location":"thesis/","level":1,"title":"Context as State","text":"","path":["The Thesis"],"tags":[]},{"location":"thesis/#a-persistence-layer-for-human-ai-cognition","level":2,"title":"A Persistence Layer for Human-AI Cognition","text":"
As AI tools evolve from code-completion utilities into reasoning collaborators, the knowledge that governs their behavior becomes as important as the code they produce; yet, that knowledge is routinely discarded at the end of every session.
AI-assisted development systems assemble context at prompt time using heuristic retrieval from mutable sources: recent files, semantic search results, session history. These approaches optimize relevance at the moment of generation but do not persist the cognitive state that produced decisions. Reasoning is not reproducible, intent is lost across sessions, and teams cannot audit the knowledge that constrains automated behavior.
This paper argues that context should be treated as deterministic, version-controlled state rather than as a transient query result. We ground this argument in three sources of evidence: a landscape analysis of 17 systems spanning AI coding assistants, agent frameworks, and knowledge stores; a taxonomy of five primitive categories that reveals irrecoverable architectural trade-offs; and an experience report from ctx, a persistence layer for AI-assisted development, which developed itself using its own persistence model across 389 sessions over 33 days. We define a three-tier model for cognitive state: authoritative knowledge, delivery views, and ephemeral state. Then we present six design invariants empirically validated by 56 independent rejection decisions observed across the analyzed landscape. We show that context determinism applies to assembly, not to model output, and that the curation cost this model requires is offset by compounding returns in reproducibility, auditability, and team cognition.
The introduction of large language models into software development has shifted the primary interface from code execution to interactive reasoning. In this environment, the correctness of an output depends not only on source code but on the context supplied to the model: the conventions, decisions, architectural constraints, and domain knowledge that bound the space of acceptable responses.
Current systems treat context as a query result assembled at the moment of interaction. A developer begins a session; the tool retrieves what it estimates to be relevant from chat history, recent files, and vector stores; the model generates output conditioned on this transient assembly; the session ends, and the context evaporates. The next session begins the cycle again.
This model has improved substantially over the past year. CLAUDE.md files, Cursor rules, Copilot's memory system, and tools such as Mem0, Letta, and Kindex each address aspects of the persistence problem. Yet across the 17 systems we analyzed, spanning AI coding assistants, agent frameworks, autonomous coding agents, and purpose-built knowledge stores, no system provides all five of the following properties simultaneously: deterministic context assembly, human-readable file-based persistence, token-budgeted delivery, zero runtime dependencies, and local-first operation.
This paper does not propose a universal replacement for retrieval-centric workflows. It defines a persistence layer (embodied in ctx (https://ctx.ist)) whose advantages emerge under specific operational conditions: when reproducibility is a requirement, when knowledge must outlive sessions and individuals, when teams require shared cognitive authority, or when offline operation is necessary.
The trade-offs (manual curation cost, reduced automatic recall, coarser granularity) are intentional and mirror the trade-offs accepted by systems that favor reproducibility over convenience, such as reproducible builds and immutable infrastructure [16].
The contribution is threefold: a three-tier model for cognitive state that resolves the ambiguity between authoritative knowledge and ephemeral session artifacts; six design invariants empirically grounded in a cross-system landscape analysis; and an experience report demonstrating that the model produces compounding returns when applied to its own development.
","path":["The Thesis"],"tags":[]},{"location":"thesis/#2-the-limits-of-prompt-time-context","level":2,"title":"2. The Limits of Prompt-Time Context","text":"
Prompt-time assembly pipelines typically consist of corpus selection, retrieval, ranking, and truncation. These pipelines are probabilistic and time-dependent, producing three failure modes that compound over the lifetime of a project.
If context is derived from mutable sources using heuristic ranking, identical requests at different times receive different inputs. A developer who asks \"What is our authentication strategy?\" on Tuesday may receive a different context window than the same question on Thursday: Not because the strategy changed, but because the retrieval heuristic surfaced different fragments.
Reproducibility (the ability to reconstruct the exact inputs that produced a given output) is a foundational property of reliable systems. Its loss in AI-assisted development mirrors the historical evolution from ad-hoc builds to deterministic build systems [12]. The build community learned that when outputs depend on implicit state (environment variables, system clocks, network-fetched dependencies), debugging becomes archaeology. The same principle applies when AI outputs depend on non-deterministic context retrieval.
### 2.2 Opacity

Embedding-based memory increases recall but reduces inspectability. When a vector store determines that a code snippet is "similar" to the current query, the ranking function is opaque: the developer cannot inspect why that snippet was chosen, whether a more relevant artifact was excluded, or whether the ranking will remain stable. This prevents deterministic debugging, policy auditing, and causal attribution (properties that information retrieval theory identifies as fundamental trade-offs of probabilistic ranking) [3].
In practice, this opacity manifests as a compliance ceiling. In our experience developing a context management system (detailed in Section 7), soft instructions (directives that ask an AI agent to read specific files or follow specific procedures) achieve approximately 75-85% compliance. The remaining 15-25% represents cases where the agent exercises judgment about whether the instruction applies, effectively applying a second ranking function on top of the explicit directive. When 100% compliance is required, instruction is insufficient; the content must be injected directly, removing the agent's option to skip it.
### 2.3 Loss of Intent
Session transcripts record interaction but not cognition. A transcript captures what was said but not which assumptions were accepted, which alternatives were rejected, or which constraints governed the decision. The distinction matters: a decision to use PostgreSQL recorded as a one-line note ("Use PostgreSQL") teaches a model what was decided; a structured record with context, rationale, and consequences teaches it why (and why is what prevents the model from unknowingly reversing the decision in a future session) [4].
Session transcripts provide history. Cognitive state requires something more: the persistent, structured representation of the knowledge required for correct decision-making.
## 3. Cognitive State: A Three-Tier Model

### 3.1 Definitions
We define cognitive state as the authoritative, persistent representation of the knowledge required for correct decision-making within a project. It is human-authored or human-ratified, versioned, inspectable, and reproducible. It is distinct from logs, transcripts, retrieval results, and model-generated summaries.
Previous formulations of this idea have treated cognitive state as a monolithic concept. In practice, a three-tier model better captures the operational reality:
Tier 1: Authoritative State: The canonical knowledge that the system treats as ground truth. In a concrete implementation, this corresponds to a set of human-curated files with defined schemas: a constitution (inviolable rules), conventions (code patterns), an architecture document (system structure), decision records (choices with rationale), learnings (captured experience), a task list (current work), a glossary (domain terminology), and an agent playbook (operating instructions). Each file has a single purpose, a defined lifecycle, and a distinct update frequency. Authoritative state is version-controlled alongside code and reviewed through the same mechanisms (diffs, pull requests, blame annotations).
Tier 2: Delivery Views: Derived representations of authoritative state, assembled for consumption by a model. A delivery view is produced by a deterministic assembly function that takes the authoritative state, a token budget, and an inclusion policy as inputs and produces a context window as output. The same authoritative state, budget, and policy must always produce the same delivery view. Delivery views are ephemeral (they exist only for the duration of a session), but their construction is reproducible.
Tier 3: Ephemeral State: Session transcripts, scratchpad notes, draft journal entries, and other artifacts that exist during or immediately after a session but are not authoritative. Ephemeral state is the raw material from which authoritative state may be extracted through human review, but it is never consumed directly by the assembly function.
This three-tier model resolves confusion present in earlier formulations: the claim that AI output is a deterministic function of the repository state. The corrected claim is that context selection is deterministic (the delivery view is a function of authoritative state), but model output remains stochastic, conditioned on the deterministic context. Formally:
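Stated in the names the paper itself uses elsewhere (`assemble`, `authoritative_state`, `model`), the two stages can be sketched as follows; the exact notation is a reconstruction:

```
delivery_view = assemble(authoritative_state, budget, policy)   -- deterministic
output        ~ model(delivery_view)                            -- stochastic
```

The first line is a pure function: identical inputs always yield identical delivery views. The second is a draw from a conditional distribution, so determinism is claimed only for the first stage.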
The persistence layer's contribution is making `assemble` reproducible, not making `model` deterministic.
### 3.2 Separation of Concerns
The decision to separate authoritative state into distinct files with distinct purposes is not cosmetic. Different types of knowledge have different lifecycles:
| Knowledge Type | Update Frequency | Read Frequency | Load Priority | Example |
|---|---|---|---|---|
| Constitution | Rarely | Every session | Always | "Never commit secrets to git" |
| Tasks | Every session | Session start | Always | "Implement token budget CLI flag" |
| Conventions | Weekly | Before coding | High | "All errors use structured logging with severity levels" |
| Decisions | When decided | When questioning | Medium | "Use PostgreSQL over MySQL (see ADR-003)" |
| Learnings | When learned | When stuck | Medium | "Hook scripts >50ms degrade interactive UX" |
| Architecture | When changed | When designing | On demand | "Three-layer pipeline: ingest → enrich → assemble" |
| Journal | Every session | Rarely | Never auto | "Session 247: Removed dead-end session copy layer" |
A monolithic context file would force the assembly function to load everything or nothing. Separation enables progressive disclosure: the minimum context that matters for the current moment, with the option to load more when needed. A normal session loads the constitution, tasks, and conventions; a deep investigation loads decision history and journal entries from specific dates.
The budget mechanism is the constraint that makes separation valuable. Without a budget, the default behavior is to load everything, which destroys the attention density that makes loaded context useful. With a budget, the assembly function must prioritize ruthlessly: constitution first (always full), then tasks and conventions (budget-capped), then decisions and learnings (scored by recency). Entries that do not fit receive title-only summaries rather than being silently dropped (an application of the "tell me what you don't know" pattern identified independently by four systems in our landscape analysis).
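As a concrete illustration of budget-capped prioritization with title-only fallbacks, here is a minimal sketch. The entry structure, priority values, and word-count "tokenizer" are hypothetical simplifications for illustration, not ctx's actual API:

```python
# Hypothetical sketch of a deterministic, budget-capped assembly function.
# Entry names, priorities, and token accounting are illustrative only.

def assemble(entries, budget):
    """entries: list of (name, priority, text); lower priority loads first.
    Returns the delivery view, with title-only stubs for entries that
    did not fit, rather than silently dropping them."""
    view, used = [], 0
    for name, priority, text in sorted(entries, key=lambda e: e[1]):
        cost = len(text.split())          # crude stand-in for a tokenizer
        if used + cost <= budget:
            view.append(text)
            used += cost
        else:
            # "tell me what you don't know": surface the omission explicitly
            view.append(f"[omitted for budget: {name}]")
    return "\n".join(view)

entries = [
    ("constitution", 0, "Never commit secrets to git."),
    ("tasks",        1, "Implement token budget CLI flag."),
    ("learnings",    2, "Hook scripts over 50ms degrade interactive UX."),
]
print(assemble(entries, budget=10))
```

Because the sort key and the inclusion test depend only on the inputs, the same entries and budget always produce the same delivery view, which is the determinism the model requires of the assembly path.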
## 4. Six Design Invariants

The following six invariants define the constraints that a cognitive state persistence layer must satisfy. They are not axioms chosen a priori; they are empirically grounded properties whose violation was independently identified as producing complexity costs across the 17 systems we analyzed.
### Invariant 1: Human-Readable, File-Based Persistence

Context files must be human-readable, git-diffable, and editable with any text editor. No database. No binary storage.
Validation: 11 independent rejection decisions across the analyzed landscape protected this property. Systems that adopted embedded records, binary serialization, or knowledge graphs as their core primitive consistently traded away the ability for a developer to run `cat DECISIONS.md` and understand the system's knowledge. The inspection cost of opaque storage compounds over the lifetime of a project: every debugging session, every audit, every onboarding conversation requires specialized tooling to access knowledge that could have been a text file.
### Invariant 2: Zero Runtime Dependencies
The tool must work with no installed runtimes, no running services, and no API keys for core functionality.
Validation: 13 independent rejection decisions protected this property (the most frequently defended invariant). Systems that required databases (PostgreSQL, SQLite, Redis), embedding models, server daemons, container runtimes, or cloud APIs for core operation introduced failure modes proportional to their dependency count. A persistence layer that depends on infrastructure is not a persistence layer; it is a service. Services have uptime requirements, version compatibility matrices, and operational costs that simple file operations do not.
### Invariant 3: Deterministic Assembly

The same files plus the same budget must produce the same output. No embedding-based retrieval, no LLM-driven selection, no wall-clock-dependent scoring in the assembly path.
Validation: 6 independent rejection decisions protected this property. Non-deterministic assembly (whether from embedding variance, LLM-based selection, or time-dependent scoring) destroys the ability to reproduce a context window and therefore to diagnose why a model produced a given output. Determinism in the assembly path is what makes the persistence layer auditable.
### Invariant 4: Human Authority Over Persistent State
The agent may propose changes to context files but must not unilaterally modify them. All persistent changes go through human-reviewable git commits.
Validation: 6 independent rejection decisions protected this property. Systems that allowed agents to self-modify their memory (writing freeform notes, auto-pruning old entries, generating summaries as ground truth) consistently produced lower-quality persistent context than systems that enforced human review. Structure is a feature, not a limitation: across the landscape, the pattern "structured beats freeform" was independently discovered by four systems that evolved from freeform LLM summaries to typed schemas with required fields.
### Invariant 5: Local-First Operation

Core functionality must work offline with no network access. Cloud services may be used for optional features but never for core context management.
Validation: 7 independent rejection decisions protected this property. Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios. A filesystem-native model continues to function under all conditions where the repository is accessible.
### Invariant 6: No Default Telemetry
Any analytics, if ever added, must be strictly opt-in.
Validation: 4 independent rejection decisions protected this property. Default telemetry erodes the trust model that a persistence layer depends on. If developers must trust the system with their architectural decisions, operational learnings, and project constraints, the system cannot simultaneously be reporting usage data to external services.
These six invariants collectively define a design space. Each feature proposal can be evaluated against them: a feature that violates any invariant is rejected regardless of how many other systems implement it. The discipline of constraint (refusing to add capabilities that compromise foundational properties) is itself an architectural contribution. Across the 17 analyzed systems, 56 patterns were explicitly rejected for violating these invariants. The rejection count per invariant (11, 13, 6, 6, 7, 4) provides a rough measure of each property's vulnerability to architectural erosion. A representative sample of these rejections is provided in Appendix A.1.
## 5. Landscape Analysis

The 17 systems were selected to cover the architectural design space rather than to achieve completeness. Each included system satisfies three criteria: it represents a distinct architectural primitive for AI-assisted development, it is actively maintained or widely referenced, and it provides sufficient public documentation or source code for architectural inspection. The goal was to ensure that every major category of primitive (document, embedded record, state snapshot, event/message, construction/derivation) was represented by multiple systems, enabling cross-system pattern detection.
The resulting set spans seven categories: AI coding assistants (Continue, Sourcegraph/Cody, Aider, Claude Code), AI agent frameworks (CrewAI, AutoGen, LangGraph, LlamaIndex, Letta/MemGPT), autonomous coding agents (OpenHands, Sweep), session provenance tools (Entire), data versioning systems (Dolt, Pachyderm), pipeline/build systems (Dagger), and purpose-built knowledge stores (QubicDB, Kindex). Each system was analyzed from its source code and documentation, producing 34 individual analysis artifacts (an architectural profile and a set of insights per system) that yielded 87 adopt/adapt recommendations, 56 explicit rejection decisions, and 52 watch items.
Every system in the AI-assisted development landscape operates on a core primitive: an atomic unit around which the entire architecture revolves. Our analysis of 17 systems reveals five categories of primitives, each making irrecoverable trade-offs:
Group A: Document/File Primitives: Human-readable documents as the primary unit. Documents are authored by humans, version-controlled in git, and consumed by AI tools. The invariant of this group is that the primitive is always human-readable and version-controllable with standard tools. Three systems participate in this pattern: the system described in this paper is a pure expression, while Continue (via its rules directory) and Claude Code (via CLAUDE.md files) are partial participants that use document-based context as an input but organize around different core primitives.
Group B: Embedded Record Primitives: Vector-embedded records stored with numerical embeddings for similarity search, metadata for filtering, and scoring mechanisms for ranking. Five systems use this approach (LlamaIndex, CrewAI, Letta/MemGPT, QubicDB, Kindex). The invariant is that the primitive requires an embedding model or vector database for core operations: a dependency that precludes offline and air-gapped use.
Group C: State Snapshot Primitives: Point-in-time captures of the complete system state. The invariant is that any past state can be reconstructed at any historical point. Three systems use this approach (LangGraph, Entire, Dolt).
Group D: Event/Message Primitives: Sequential events or messages forming an append-only log with causal relationships. Four systems use this approach (OpenHands, AutoGen, Claude Code, Sweep). The invariant is temporal ordering and append-only semantics.
Group E: Construction/Derivation Primitives: Derived or constructed values that encode how they were produced. The invariant is that the primitive is a function of its inputs; re-executing the same inputs produces the same primitive. Three systems use this approach (Dagger, Pachyderm, Aider).
The five primitive categories differ along seven dimensions:
| Property | Document | Embedded Record | State Snapshot | Event/Message | Construction |
|---|---|---|---|---|---|
| Human-readable | Yes | No | Varies | Partially | No |
| Version-controllable | Yes | No | Varies | Yes | Yes |
| Queryable by meaning | No | Yes | No | No | No |
| Rewindable | Via git | No | Yes | Yes (replay) | Yes |
| Deterministic | Yes | No | Yes | Yes | Yes |
| Zero-dependency | Yes | No | Varies | Varies | Varies |
| Offline-capable | Yes | No | Varies | Varies | Yes |
The document primitive is the only one that simultaneously satisfies human-readability, version-controllability, determinism, zero dependencies, and offline capability. This is not because documents are superior in general (embedded records provide semantic queryability that documents lack) but because the combination of all five properties is what the persistence layer requires. The choice between primitive categories is not a matter of capability but of which properties are considered invariant.
Across the 17 analyzed systems, six design patterns were independently discovered. These convergent patterns carry extra validation weight because they emerged from different problem spaces:
Pattern 1: "Tell me what you don't know": When context is incomplete, explicitly communicate to the model what information is missing and what confidence level the provided context represents. Four systems independently converged on this pattern: inserting skip markers, tracking evidence gaps, annotating provenance, or naming output quality tiers.
Pattern 2: "Freshness matters": Information relevance decreases over time. Three systems independently chose time-based decay with different parameters (a 30-day half-life, a 90-day half-life, and LRU ordering). Static priority ordering with no time dimension leaves relevant recent knowledge at the same priority as stale entries. This pattern is in productive tension with the persistence model's emphasis on determinism: the claim is not that time-dependence is irrelevant, but that it belongs in the curation step (a human deciding to consolidate or archive stale entries) rather than in the assembly function (an algorithm silently down-ranking entries based on age).
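For illustration, the exponential-decay scoring those systems converged on fits in a few lines; the 30-day half-life is one of the reported choices, and, per the paper's position, such scoring would live in curation tooling rather than in the deterministic assembly path:

```python
def freshness(age_days: float, half_life_days: float = 30.0) -> float:
    """Exponential decay: an entry loses half its relevance score every
    half_life_days. Illustrative sketch only; not part of a deterministic
    assembly path."""
    return 0.5 ** (age_days / half_life_days)

print(freshness(0))    # 1.0   -> brand-new entry, full weight
print(freshness(30))   # 0.5   -> one half-life old
print(freshness(90))   # 0.125 -> three half-lives old
```

A curation ceremony could use such a score to flag candidates for consolidation or archiving, keeping the human decision in the loop while the assembly function stays time-independent.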
Pattern 3: "Content-address everything": Compute a hash of content at creation time for deduplication, cache invalidation, integrity verification, and change detection. Five systems independently implement content hashing, each discovering it solves different problems [5].
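In its simplest form, content addressing is a hash over canonicalized bytes. The following sketch uses SHA-256 and a trivial whitespace canonicalization; both choices are illustrative, and the analyzed systems differ on each:

```python
import hashlib

def content_address(text: str) -> str:
    """Hash canonicalized content so identical entries deduplicate and
    any edit is detectable as an address change. SHA-256 and the
    canonicalization rule are illustrative choices."""
    canonical = text.strip().encode("utf-8")   # trivial canonicalization
    return hashlib.sha256(canonical).hexdigest()

a = content_address("Use PostgreSQL over MySQL (see ADR-003)")
b = content_address("  Use PostgreSQL over MySQL (see ADR-003)\n")  # same content
c = content_address("Use MySQL over PostgreSQL")                    # changed content
print(a == b)  # True: whitespace-only variants share an address
print(a == c)  # False: any substantive edit produces a new address
```

The same address serves all four purposes at once: equal addresses deduplicate, a changed address invalidates caches and signals an edit, and recomputing the address verifies integrity.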
Pattern 4: "Structured beats freeform": When capturing knowledge or session state, a structured schema with required fields produces more useful data than freeform text. Four systems evolved from freeform summaries to typed schemas: one moving from LLM-generated prose to a structured condenser with explicit fields for completed tasks, pending tasks, and files modified.
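A minimal sketch of this discipline: validate that a captured decision record carries the required fields before accepting it into authoritative state. The field names follow the decision record shown in the worked example (Section 6); the validator itself and the sample texts are hypothetical:

```python
# Hypothetical validator: reject freeform notes that lack the fields a
# decision record needs. Field names follow the paper's decision record
# format (Status, Context, Decision, Rationale, Consequence).
REQUIRED_FIELDS = ("Status", "Context", "Decision", "Rationale", "Consequence")

def missing_fields(record_markdown: str) -> list:
    """Return the required fields absent from a markdown decision record."""
    return [f for f in REQUIRED_FIELDS
            if f"**{f}**:" not in record_markdown]

freeform = "We talked about it and went with PostgreSQL."
structured = """\
**Status**: Accepted
**Context**: Need relational storage with mature tooling.
**Decision**: Use PostgreSQL over MySQL.
**Rationale**: Team expertise and transactional DDL support.
**Consequence**: All services target PostgreSQL.
"""

print(missing_fields(freeform))    # every required field is absent
print(missing_fields(structured))  # []
```

Such a check could run in a pre-commit hook or a curation ceremony, making "structured beats freeform" an enforced property rather than a convention.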
Pattern 5: "Protocol convergence": The Model Context Protocol (MCP) is emerging as a standard tool integration layer. Nine of 17 systems support it, spanning every category in the analysis. MCP's significance for the persistence model is that it provides a transport mechanism for context delivery without dictating how context is stored or assembled. This makes the approach compatible with both retrieval-centric and persistence-centric architectures.
Pattern 6: "Human-in-the-loop for memory": Critical memory decisions should involve human judgment. Fully automated memory management produces lower-quality persistent context than human-reviewed systems. Four systems independently converged on variants of this pattern: ceremony-based consolidation, interrupt/resume for human input, confirmation mode for high-risk actions, and separated "think fast" vs. "think slow" processing paths.
Pattern 6 directly validates the ceremony model described in this paper. The persistence layer requires human curation not because automation is impossible, but because the quality of persistent knowledge degrades when the curation step is removed. The improvement opportunity is to make curation easier, not to automate it away.
## 6. Worked Example: Architectural Decision Under Two Models
We now instantiate the three-tier model in a concrete system (ctx) and illustrate the difference between prompt-time retrieval and cognitive state persistence using a real scenario from its development.
### 6.1 The Problem
During development, the system accumulated three overlapping storage layers for session data: raw transcripts (owned by the AI tool), session copies (JSONL copies plus context snapshots), and enriched journal entries (Markdown summaries). The middle layer (session copies) was a dead-end write sink. An auto-save hook copied transcripts to a directory that nothing read from, because the journal pipeline already read directly from the raw transcripts. Approximately 15 source files, a shell hook, 20 configuration constants, and 30 documentation references supported infrastructure with no consumers.
### 6.2 Retrieval Model

In a retrieval-based system, the decision to remove the middle layer depends on whether the retrieval function surfaces the relevant context:
The developer asks: "Should we simplify the session storage?" The retrieval system must find and rank the original discussion thread where the three layers were designed, the usage statistics showing zero reads from the middle layer, the journal pipeline documentation showing it reads from raw transcripts directly, and the dependency analysis showing 15 files, a hook, and 30 doc references. If any of these fragments are not retrieved (because they are in old chat history, because the embedding similarity score is low, or because the token budget was consumed by more recent but less relevant context), the model may recommend preserving the middle layer, or may not realize it exists.
Six months later, a new team member asks the same question. The retrieval results will differ: the original discussion has aged out of recency scoring, the usage statistics are no longer in recent history, and the model may re-derive the answer or arrive at a different conclusion.
### 6.3 Cognitive State Model
In the persistence model, the decision is recorded as a structured artifact at write time:
```markdown
## [2026-02-11] Remove .context/sessions/ storage layer

**Status**: Accepted

**Context**: The session/recall/journal system had three overlapping
storage layers. The recall pipeline reads directly from raw transcripts,
making .context/sessions/ a dead-end write sink that nothing reads from.

**Decision**: Remove .context/sessions/ entirely. Two stores remain:
raw transcripts (global, tool-owned) and enriched journal
(project-local).

**Rationale**: Dead-end write sinks waste code surface, maintenance
effort, and user attention. The recall pipeline already proved that
reading directly from raw transcripts is sufficient. Context snapshots
are redundant with git history.

**Consequence**: Deleted internal/cli/session/ (15 files), removed
auto-save hook, removed --auto-save from watch, removed pre-compact
auto-save, removed /ctx-save skill, updated ~45 documentation files.
Four earlier decisions superseded.
```
This artifact is:
- Deterministically included in every subsequent session's delivery view (budget permitting, with title-only fallback if the budget is exceeded)
- Human-readable and reviewable as a diff in the commit that introduced it
- Permanent: it persists in version control regardless of retrieval heuristics
- Causally linked: it explicitly supersedes four earlier decisions, creating an auditable chain
When the new team member asks "Why don't we store session copies?" six months later, the answer is the same artifact, at the same revision, with the same rationale. The reasoning is reconstructible because it was persisted at write time, not discovered at query time.
### 6.4 The Diff When Policy Changes
If a future requirement re-introduces session storage (for example, to support multi-agent session correlation), the change appears as a diff to the decision record:
```diff
- **Status**: Accepted
+ **Status**: Superseded by [2026-08-15] Reintroduce session storage
+ for multi-agent correlation
```
The new decision record references the old one, creating a chain of reasoning visible in `git log`. In the retrieval model, the old decision would simply be ranked lower over time and eventually forgotten.
## 7. Experience Report: A System That Designed Itself
The persistence model described in this paper was developed and tested by using it on its own development. Over 33 days and 389 sessions, the system's context files accumulated a detailed record of decisions made, reversed, and consolidated, providing quantitative and qualitative evidence for the model's properties.
### 7.1 Scale and Structure
The development produced the following authoritative state artifacts:
- 8 consolidated decision records covering 24 original decisions spanning context injection architecture, hook design, task management, security, agent autonomy, and webhook systems
- 18 consolidated learning records covering 75 original observations spanning agent compliance, hook behavior, testing patterns, documentation drift, and tool integration
- A constitution with 13 inviolable rules across 4 categories (security, quality, process, context preservation)
- 389 enriched journal entries providing a complete session-level audit trail
The consolidation ratio (24 decisions compressed to 8 records, 75 learnings compressed to 18) illustrates the curation cost and its return: authoritative state becomes denser and more useful over time as related entries are merged, contradictions are resolved, and superseded decisions are marked.
### 7.2 Architectural Reversals

Three architectural reversals during development provide evidence that the persistence model captures and communicates reasoning effectively:
Reversal 1: The two-tier persistence model: The original design included a middle storage tier for session copies. After 21 days of development, the middle tier was identified as a dead-end write sink (described in Section 6). The decision record captured the full context, and the removal was executed cleanly: 15 source files, a shell hook, and 45 documentation references. The pattern of a "dead-end write sink" was subsequently observed in 7 of 17 systems in our landscape analysis that store raw transcripts alongside structured context.
Reversal 2: The prompt-coach hook: An early design included a hook that analyzed user prompts and offered improvement suggestions. After deployment, the hook produced zero useful tips, its output channel was invisible to users, and it accumulated orphan temporary files. The hook was removed, and the decision record captured the failure mode for future reference.
Reversal 3: The soft-instruction compliance model: The original context injection strategy relied on soft instructions: directives asking the AI agent to read specific files. After measuring compliance across multiple sessions, we found a consistent 75-85% compliance ceiling. The revised strategy injects content directly, bypassing the agent's judgment about whether to comply. The learning record captures the ceiling measurement and the rationale for the architectural change.
Each reversal was captured as a structured decision record with context, rationale, and consequences. In a retrieval-based system, these reversals would exist only in chat history, discoverable only if the retrieval function happens to surface them. In the persistence model, they are permanent, indexable artifacts that inform future decisions.
### 7.3 The Compliance Ceiling

The 75-85% compliance ceiling for soft instructions is the most operationally significant finding from the experience report. It means that any context management strategy relying on agent compliance with instructions ("read this file," "follow this convention," "check this list") has a hard ceiling on reliability.
The root cause is structural: the instruction "don't apply judgment" is itself evaluated by judgment. When an agent receives a directive to read a file, it first assesses whether the directive is relevant to the current task (and that assessment is the judgment the directive was trying to prevent).
The architectural response maps directly to the formal model defined in Section 3.1. Content requiring 100% compliance is included in `authoritative_state` and injected by the deterministic `assemble` function, bypassing the agent entirely. Content where 80% compliance is acceptable is delivered as instructions within the delivery view. The three-tier architecture makes this distinction explicit: authoritative state is injected; delivery views are assembled deterministically; ephemeral state is available but not pushed.
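That split can be stated as a tiny routing function. The function and its threshold are a hypothetical sketch; the 0.85 cutoff is simply the upper end of the observed 75-85% compliance band:

```python
# Hypothetical routing rule derived from the compliance-ceiling finding:
# content that must always influence the agent is injected; content where
# roughly 80% compliance suffices may be delivered as an instruction.

def delivery_mode(required_compliance: float) -> str:
    """required_compliance: fraction of sessions (0.0-1.0) in which the
    content must actually reach the agent."""
    SOFT_INSTRUCTION_CEILING = 0.85   # upper end of the observed 75-85% band
    if required_compliance > SOFT_INSTRUCTION_CEILING:
        return "inject"      # place the content directly in the delivery view
    return "instruct"        # a directive to read it is reliable enough

print(delivery_mode(1.0))   # inject: e.g. constitution rules
print(delivery_mode(0.8))   # instruct: e.g. optional reading suggestions
```

The point of writing it down is that the routing decision becomes an explicit, reviewable policy rather than an implicit hope that the agent will comply.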
### 7.4 Compounding Returns

Over 33 days, we observed a qualitative shift in the development experience. Early sessions (days 1-7) spent significant time re-establishing context: explaining conventions, re-stating constraints, re-deriving past decisions. Later sessions (days 25-33) began with the agent loading curated context and immediately operating within established constraints, because the constraints were in files rather than in chat history.
This compounding effect (where each session's context curation improves all subsequent sessions) is the primary return on the curation investment. The cost is borne once (writing a decision record, capturing a learning, updating the task list); the benefit is collected on every subsequent session load.
The effect is analogous to compound interest in financial systems: the knowledge base grows not linearly with effort but with increasing marginal returns as new knowledge interacts with existing context. A learning captured on day 5 prevents a mistake on day 12, which avoids a debugging session that would otherwise have consumed that day's session, freeing it for productive work that generates new learnings. The growth is not literally exponential (it is bounded by project scope and subject to diminishing returns as the knowledge base matures), but within the observed 33-day window, the returns were consistently accelerating.
### 7.5 Scope and Generalizability
This experience report is self-referential by design: the system was developed using its own persistence model. This circularity strengthens the internal validity of the findings (the model was stress-tested under authentic conditions) but limits external generalizability. The two-week crossover point was observed on a single project of moderate complexity with a small team already familiar with the model's assumptions. Whether the same crossover holds for larger teams, for codebases with different characteristics, or for teams adopting the model without having designed it remains an open empirical question. The quantitative claims in this section should be read as existence proofs (demonstrating that the model can produce compounding returns) rather than as predictions about specific adoption scenarios.
## 8. Situating the Persistence Layer
The persistence layer occupies a specific position in the stack of AI-assisted development:
```
Application Logic
AI Interaction / Agents
Context Retrieval Systems
Cognitive State Persistence Layer
Version Control / Storage
```
Current systems innovate primarily in the retrieval layer (improving how context is discovered, ranked, and delivered at query time). The persistence layer sits beneath retrieval and above version control. Its role is to maintain the authoritative state that retrieval systems may query but do not own. The relationship is complementary: retrieval answers "What in the corpus might be relevant?"; cognitive state answers "What must be true for this system to operate correctly?" A mature system uses both: retrieval for discovery, persistence for authority.
## 9. Applicability and Trade-Offs

### 9.1 When to Use This Model
A cognitive state persistence layer is most appropriate when:
- Reproducibility is a requirement: If a system must be able to answer "Why did this output occur, and can it be produced again?" then deterministic, version-controlled context becomes necessary. This is relevant in regulated environments, safety-critical systems, long-lived infrastructure, and security-sensitive deployments.
- Knowledge must outlive sessions and individuals: Projects with multi-year lifetimes accumulate architectural decisions, domain interpretations, and operational policy. If this knowledge is stored only in chat history, issue trackers, and institutional memory, it decays. The persistence model converts implicit knowledge into branchable, reviewable artifacts.
- Teams require shared cognitive authority: In collaborative environments, correctness depends on a stable answer to "What does the system believe to be true?" When this answer is derived from retrieval heuristics, authority shifts to ranking algorithms. When it is versioned and human-readable, authority remains with the team.
- Offline or air-gapped operation is required: Infrastructure-dependent memory systems cannot operate in classified environments, isolated networks, or disaster-recovery scenarios.
### 9.2 When Not to Use This Model
- Zero-configuration personal workflows: For short-lived or exploratory tasks, the cost of explicit knowledge curation outweighs its benefits. Heuristic retrieval is sufficient when correctness is non-critical, outputs are disposable, and historical reconstruction is unnecessary.
- Maximum automatic recall from large corpora: Vector retrieval systems provide superior performance when the primary task is searching vast, weakly structured information spaces. The persistence model assumes that what matters can be decided and that this decision is valuable to record.
- Fully autonomous agent architectures: Agent runtimes that generate and discard state continuously, optimizing for local goal completion, do not benefit from a model that centers human ratification of knowledge.
The transition does not require full system replacement. An incremental path:
Step 1: Record decisions as versioned artifacts: Instead of allowing conclusions to remain in discussion threads, persist them in reviewable form with context, rationale, and consequences [4]. This alone converts ephemeral reasoning into the cognitive state.
Step 2: Make inclusion deterministic: Define explicit assembly rules. Retrieval may still exist, but it is no longer authoritative.
Step 3: Move policy into cognitive state: When system behavior depends on stable constraints, encode those constraints as versioned knowledge. Behavior becomes reproducible.
Step 4: Optimize assembly, not retrieval: Once the authoritative layer exists, performance improvements come from budgeting, caching, and structural refinement rather than from improving ranking heuristics.
### 9.4 The Curation Cost
The primary objection to this model is the cost of explicit knowledge curation. This cost is real. Writing a structured decision record takes longer than letting a chatbot auto-summarize a conversation. Maintaining a glossary requires discipline. Consolidating 75 learnings into 18 records requires judgment.
The response is not that the cost is negligible but that it is amortized. A decision record written once is loaded hundreds of times. A learning captured today prevents repeated mistakes across all future sessions. The curation cost is paid once; the benefit compounds.
The experience report provides rough order-of-magnitude numbers. Across 389 sessions over 33 days, curation activities (writing decision records, capturing learnings, updating the task list, consolidating entries) averaged approximately 3-5 minutes per session. In early sessions (days 1-7), before curated context existed, re-establishing context consumed approximately 10-15 minutes per session: re-explaining conventions, re-stating architectural constraints, re-deriving decisions that had been made but not persisted. By the final week (days 25-33), the re-explanation overhead had dropped to near zero: the agent loaded curated context and began productive work immediately.
At ~12 sessions per day, the curation cost was roughly 35-60 minutes daily. The re-explanation cost in the first week was roughly 120-180 minutes daily. By the third week, that cost had fallen to under 15 minutes daily while the curation cost remained stable. The crossover (where cumulative curation cost was exceeded by cumulative time saved) occurred around day 10. These figures are approximate and derived from a single project with a small team already familiar with the model; the crossover point will vary with project complexity, team size, and curation discipline.
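The crossover arithmetic above can be sanity-checked with a small back-of-envelope model. The constants below are illustrative assumptions drawn from the reported ranges (45 min/day curation, 150 min/day baseline re-explanation overhead that drops to 10 min/day once curated context exists around the start of week two), not measurements from the experience report:

```go
package main

import "fmt"

// crossoverDay returns the first day on which cumulative time saved
// exceeds cumulative curation cost, or -1 if no crossover occurs
// within a year. All costs are in minutes per day.
func crossoverDay(curation, earlyOverhead, lateOverhead float64, rampDay int) int {
	cumCuration, cumSaved := 0.0, 0.0
	for day := 1; day <= 365; day++ {
		cumCuration += curation
		if day >= rampDay {
			// Savings relative to the no-persistence baseline overhead.
			cumSaved += earlyOverhead - lateOverhead
		}
		if cumSaved > cumCuration {
			return day
		}
	}
	return -1
}

func main() {
	// Assumed: overhead drops from 150 to 10 min/day starting on day 8.
	fmt.Println(crossoverDay(45, 150, 10, 8)) // prints 11
}
```

Under these assumed constants the model lands on day 11, consistent with the observed "around day 10"; shifting any constant within the reported ranges moves the crossover by only a few days.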
Several directions are compatible with the model described here:
- Section-level deterministic budgeting: Current assembly operates at file granularity. Section-level budgeting would allow finer-grained control (including specific decision records while excluding others within the same file) without sacrificing determinism.
- Causal links between decisions: The experience report shows that decisions frequently reference earlier decisions (superseding, extending, or qualifying them). Formal causal links would enable traversal of the decision graph and automatic detection of orphaned or contradictory constraints.
- Content-addressed context caches: Five systems in our landscape analysis independently discovered that content hashing provides cache invalidation, integrity verification, and change detection. Applying content addressing to the assembly output would enable efficient cache reuse when the authoritative state has not changed.
- Conditional context inclusion: Five systems independently suggest that context entries could carry activation conditions (file patterns, task keywords, or explicit triggers) that control whether they are included in a given assembly. This would reduce the per-session budget cost of large knowledge bases without sacrificing determinism.
- Provenance metadata: Linking context entries to the sessions, decisions, or learnings that motivated them would strengthen the audit trail. Optional provenance fields on Markdown entries (session identifier, cause reference, motivation) would be lightweight and compatible with the existing file-based model.
AI-assisted development has treated context as a \"query result\" assembled at the moment of interaction, discarded at the session end. This paper identifies a complementary layer: the persistence of authoritative cognitive state as deterministic, version-controlled artifacts.
The contribution is grounded in three sources of evidence. A landscape analysis of 17 systems reveals five categories of primitives and shows that no existing system provides the combination of human-readability, determinism, zero dependencies, and offline capability that the persistence layer requires. Six design invariants, validated by 56 independent rejection decisions, define the constraints of the design space. An experience report over 389 sessions and 33 days demonstrates compounding returns: later sessions start faster, decisions are not re-derived, and architectural reversals are captured with full context.
The core claim is this: persistent cognitive state enables causal reasoning across time. A system built on this model can explain not only what is true, but why it became true and when it changed.
When context is the state:
- Reasoning is reproducible: the same authoritative state, budget, and policy produce the same delivery view.
- Knowledge is auditable: decisions are traceable to explicit artifacts with context, rationale, and consequences.
- Understanding compounds: each session's curation improves all subsequent sessions.
The choice between retrieval-centric workflows and a persistence layer is not a matter of capability but of time horizon. Retrieval optimizes for relevance at the moment of interaction. Persistence optimizes for the durability of understanding across the lifetime of a project.
🐸🖤 \"Gooood... let the deterministic context flow through the repository...\" - Kermit the Sidious, probably
The 56 rejection decisions referenced in Section 4 were cataloged across all 17 system analyses, grouped by the invariant they would violate. This appendix provides a representative sample (two per invariant) to illustrate the methodology.
Invariant 1: Markdown-on-Filesystem (11 rejections): CrewAI's vector embedding storage was rejected because embeddings are not human-readable, not git-diff-friendly, and require external services. Kindex's knowledge graph as core primitive was rejected because it requires specialized commands to inspect content that could be a text file (kin show <id> vs. cat DECISIONS.md).
Invariant 2: Zero Runtime Dependencies (13 rejections): Letta/MemGPT's PostgreSQL-backed architecture was rejected because it conflicts with local-first, no-database, single-binary operation. Pachyderm's Kubernetes-based distributed architecture was rejected as the antithesis of a single-binary design for a tool that manages text files.
Invariant 3: Deterministic Assembly (6 rejections): LlamaIndex's embedding-based retrieval as the primary selection mechanism was rejected because it destroys determinism, requires an embedding model, and removes human judgment from the selection process. QubicDB's wall-clock-dependent scoring was rejected because it directly conflicts with the \"same inputs produce same output\" property.
Invariant 4: Human Authority (6 rejections): Letta/MemGPT's agent self-modification of memory was rejected as fundamentally opposed to human-curated persistence. Claude Code's unstructured auto-memory (where the agent writes freeform notes) was rejected because structured files with defined schemas produce higher-quality persistent context than unconstrained agent output.
Invariant 5: Local-First / Air-Gap Capable (7 rejections): Sweep's cloud-dependent architecture was rejected as fundamentally incompatible with the local-first, offline-capable model. LangGraph's managed cloud deployment was rejected because cloud dependencies for core functionality violate air-gap capability.
Invariant 6: No Default Telemetry (4 rejections): Continue's telemetry-by-default (PostHog) was rejected because it contradicts the local-first, privacy-respecting trust model. CrewAI's global telemetry on import (Scarf tracking pixel) was rejected because it violates user trust and breaks air-gap capability.
The remaining 9 rejections did not map to a specific invariant but were rejected on other architectural grounds: for example, Aider's full-file-content-in-context approach (which defeats token budgeting), AutoGen's multi-agent orchestration as core primitive (scope creep), and Claude Code's 30-day transcript retention limit (institutional knowledge should have no automatic expiration).
[1] Reproducible Builds Project, "Reproducible Builds: Increasing the Integrity of Software Supply Chains", 2017. https://reproducible-builds.org/docs/definition/
[2] S. McIntosh et al., "The Impact of Build System Evolution on Software Quality", ICSE, 2015. https://doi.org/10.1109/ICSE.2015.70
[3] C. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. https://nlp.stanford.edu/IR-book/
[4] M. Nygard, "Documenting Architecture Decisions", Cognitect Blog, 2011. https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions
[5] L. Torvalds et al., Git Internals - Git Objects (content-addressed storage concepts). https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
[6] Kief Morris, Infrastructure as Code, O'Reilly, 2016.
[7] J. Kreps, "The Log: What every software engineer should know about real-time data's unifying abstraction", 2013. https://engineering.linkedin.com/distributed-systems/log
[8] P. Hunt et al., "ZooKeeper: Wait-free coordination for Internet-scale systems", USENIX ATC, 2010. https://www.usenix.org/legacy/event/atc10/tech/full_papers/Hunt.pdf
For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (- [ ]) syntax for tracking.
Goal: Link every git commit back to the decisions, tasks, learnings, and sessions that motivated it via ctx trace.
Architecture: New internal/trace package provides the core logic (pending context recording, three-source detection, history/override storage, reference resolution). A new internal/cli/trace package wires it into the Cobra CLI as ctx trace. Existing commands (ctx add, ctx complete) gain a one-line trace.Record() side-effect. A ctx trace hook subcommand generates a prepare-commit-msg shell script that delegates to ctx trace collect.
In internal/config/dir/dir.go, add the Trace constant:
// Trace is the subdirectory for commit context tracing within .context/.
Trace = "trace"
Add it after the State constant in the same const block.
Step 2: Create trace package doc
Create internal/trace/doc.go:
// / ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
//  \ Copyright 2026-present Context contributors.
// SPDX-License-Identifier: Apache-2.0

// Package trace provides commit context tracing — linking git commits
// back to the decisions, tasks, learnings, and sessions that motivated them.
package trace
Step 3: Create shared types
Create internal/trace/types.go:
// / ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
//  \ Copyright 2026-present Context contributors.
// SPDX-License-Identifier: Apache-2.0

package trace

import "time"

// PendingEntry is a single pending context reference accumulated
// between commits.
type PendingEntry struct {
	Ref       string    `json:"ref"`
	Timestamp time.Time `json:"timestamp"`
}

// HistoryEntry is a permanent record of a commit's context references.
type HistoryEntry struct {
	Commit    string    `json:"commit"`
	Refs      []string  `json:"refs"`
	Message   string    `json:"message"`
	Timestamp time.Time `json:"timestamp"`
}

// OverrideEntry is a manual context tag added to an existing commit.
type OverrideEntry struct {
	Commit    string    `json:"commit"`
	Refs      []string  `json:"refs"`
	Timestamp time.Time `json:"timestamp"`
}

// ResolvedRef holds a resolved context reference with its display text.
type ResolvedRef struct {
	Raw    string // Original ref string (e.g., "decision:12")
	Type   string // "decision", "learning", "task", "convention", "session", "note"
	Number int    // Entry number (0 for session/note types)
	Title  string // Resolved title or content
	Detail string // Additional detail (rationale, status, etc.)
	Found  bool   // Whether the reference was resolved
}
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run TestWorkingRefs -v
Expected: FAIL — function not defined
Step 3: Implement working.go
Create internal/trace/working.go:
// / ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
//  \ Copyright 2026-present Context contributors.
// SPDX-License-Identifier: Apache-2.0

package trace

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	ctxCfg "github.com/ActiveMemory/ctx/internal/config/ctx"
	"github.com/ActiveMemory/ctx/internal/config/regex"
	"github.com/ActiveMemory/ctx/internal/task"
)

const envSessionID = "CTX_SESSION_ID"

// WorkingRefs detects context references from the current working state.
// This includes in-progress tasks and the active AI session.
//
// Parameters:
//   - contextDir: Path to the .context/ directory
//
// Returns:
//   - []string: Detected references
func WorkingRefs(contextDir string) []string {
	var refs []string

	refs = append(refs, inProgressTaskRefs(contextDir)...)

	if sessionID := os.Getenv(envSessionID); sessionID != "" {
		refs = append(refs, "session:"+sessionID)
	}

	return refs
}

// inProgressTaskRefs reads TASKS.md and returns refs for in-progress
// (pending, non-subtask) tasks.
func inProgressTaskRefs(contextDir string) []string {
	tasksPath := filepath.Join(contextDir, ctxCfg.Task)
	content, err := os.ReadFile(filepath.Clean(tasksPath))
	if err != nil {
		return nil
	}

	var refs []string
	pendingCount := 0
	lines := strings.Split(string(content), "\n")

	for _, line := range lines {
		match := regex.Task.FindStringSubmatch(line)
		if match == nil {
			continue
		}
		if task.Sub(match) {
			continue // skip subtasks
		}
		if task.Pending(match) {
			pendingCount++
			refs = append(refs, fmt.Sprintf("task:%d", pendingCount))
		}
	}

	return refs
}
Step 4: Run tests to verify they pass
Run: cd /Users/parlakisik/projects/github/ctx && go test ./internal/trace/ -run TestWorkingRefs -v
Expected: All PASS
Step 5: Commit
git add internal/trace/working.go internal/trace/working_test.go
git commit -m "feat(trace): add working state detection for in-progress tasks and sessions"
Task 5: Collect — Merge and Deduplicate from All Sources
Step 1: Understand entry numbering for add command
The ctx add command does not return an entry number. Since entries are prepended (newest first), a newly added decision is always entry #1 in the file. We need to count entries after write to determine the new entry's number.
Step 2: Modify add command to record pending context
In internal/cli/add/cmd/root/run.go, add the trace recording after the successful write. The entry number is determined by counting entries in the file after write, and the new entry is always #1 (prepended for decisions/learnings) or the last entry (appended for tasks/conventions).
After writeAdd.Added(cmd, fName) and before return nil, add:
// Best-effort: record pending context for commit tracing.
// Decisions and learnings are prepended (newest = #1).
// Tasks and conventions are appended (newest = last).
if fType == cfgEntry.Decision || fType == cfgEntry.Learning ||
	fType == cfgEntry.Convention {
	_ = trace.Record(fType+":1", state.Dir())
}
Note: We record as entry #1 for prepended types because new entries are always inserted at the top. For tasks, recording happens in the complete command instead, since tasks are tracked by completion, not creation.
Step 3: Modify complete command to record pending context
In internal/cli/task/cmd/complete/run.go, add trace recording after a successful completion.
Step 2: Add trace command descriptions to commands.yaml
In internal/assets/commands/commands.yaml, add the trace command descriptions:
trace:
  long: |-
    Show the context behind git commits.

    ctx trace links commits back to the decisions, tasks, learnings,
    and sessions that motivated them.

    Usage:
      ctx trace <commit>      Show context for a specific commit
      ctx trace --last 5      Show context for last N commits
      ctx trace file <path>   Show context trail for a file
      ctx trace tag <commit>  Manually tag a commit with context
      ctx trace collect       Collect context refs (used by hook)
      ctx trace hook enable   Install prepare-commit-msg hook

    Examples:
      ctx trace abc123
      ctx trace --last 10
      ctx trace file src/auth.go
      ctx trace tag HEAD --note "Hotfix for production outage"
  short: Show context behind git commits
trace.file:
  long: |-
    Show the context trail for a file.

    Combines git log with trailer resolution to show what decisions,
    tasks, and learnings motivated changes to a specific file.

    Supports optional line range with colon syntax:
      ctx trace file src/auth.go:42-60

    Examples:
      ctx trace file src/auth.go
      ctx trace file src/auth.go:42-60
  short: Show context trail for a file
trace.tag:
  long: |-
    Manually tag a commit with context.

    For commits made without the hook, or to add extra context
    after the fact. Tags are stored in .context/trace/overrides.jsonl
    since git trailers cannot be modified without rewriting history.

    Examples:
      ctx trace tag HEAD --note "Hotfix for production outage"
      ctx trace tag abc123 --note "Part of Q1 compliance initiative"
  short: Manually tag a commit with context
trace.collect:
  long: |-
    Collect context references from all sources.

    Gathers pending context, staged file analysis, and working state,
    then outputs a ctx-context trailer line. Used by the
    prepare-commit-msg hook.

    This command is not typically called directly.
  short: Collect context refs for hook
trace.hook:
  long: |-
    Enable or disable the prepare-commit-msg hook for automatic
    context tracing. The hook injects ctx-context trailers into
    commit messages.

    Usage:
      ctx trace hook enable    Install the hook
      ctx trace hook disable   Remove the hook

    Examples:
      ctx trace hook enable
      ctx trace hook disable
  short: Manage prepare-commit-msg hook
// / ctx: https://ctx.ist
// ,'`./ do you remember?
// `.,'\
//  \ Copyright 2026-present Context contributors.
// SPDX-License-Identifier: Apache-2.0

package trace

import (
	"errors"
	"fmt"
)

// CommitNotFound returns an error when a commit hash cannot be found.
//
// Parameters:
//   - hash: The commit hash that was not found
//
// Returns:
//   - error: Descriptive error
func CommitNotFound(hash string) error {
	return fmt.Errorf("commit not found: %s", hash)
}

// NotInGitRepo returns an error when the command is run outside a git repo.
//
// Returns:
//   - error: Descriptive error
func NotInGitRepo() error {
	return errors.New("not in a git repository")
}

// NoteRequired returns an error when --note flag is missing.
//
// Returns:
//   - error: Descriptive error
func NoteRequired() error {
	return errors.New("--note is required")
}
Step 2: Commit
git add internal/err/trace/
git commit -m "feat(trace): add error package for trace operations"
The prepare-commit-msg hook injects the trailer before the commit is finalized, but we also want the commit recorded in history.jsonl after it succeeds. The hook receives the commit message file path; since the commit does not yet exist at prepare-commit-msg time, writing the history entry requires a separate mechanism.

Two options were considered: adding a --record flag to the collect command and invoking it from a post-commit hook, or writing the history entry at prepare-commit-msg time using a temporary marker, with the trailer as the canonical source of the data (ctx trace already reads from trailers as a fallback).

Decision: keep it simple. The hook injects the trailer, and ctx trace reads from the trailer at query time. The history.jsonl file is a performance optimization for a follow-up; for now, history is written only by the ctx trace collect --record <hash> subcommand, which can be called from a post-commit hook.
Run: cd /Users/parlakisik/projects/github/ctx && make test
Expected: All PASS
Step 2: Run linter
Run: cd /Users/parlakisik/projects/github/ctx && make lint
Expected: No errors
Step 3: Run build
Run: cd /Users/parlakisik/projects/github/ctx && make build
Expected: BUILD SUCCESS
Step 4: Manual smoke test
Run these commands to verify the feature works end-to-end:
# Build and install
cd /Users/parlakisik/projects/github/ctx && go build -o /tmp/ctx ./cmd/ctx/

# Test trace --last (should show existing commits with no context)
/tmp/ctx trace --last 5

# Test trace tag
/tmp/ctx trace tag HEAD --note "Test: commit context tracing feature"

# Verify tag was written
cat .context/trace/overrides.jsonl

# Test trace on HEAD (should show the manual tag)
/tmp/ctx trace $(git rev-parse --short HEAD)

# Test hook enable (don't actually enable in this repo)
# /tmp/ctx trace hook enable
Step 5: Final commit (if any fixes needed)
git add -A
git commit -m "fix(trace): final adjustments from smoke testing"