fix: vexor test mocking, site responsiveness, and README consolidation
- Add missing _is_vexor_local_functional mock to installer tests (CI fix)
- Add tests for _get_uv_tool_vexor_bin and _is_vexor_local_functional
- Add mock audit rule to testing.md to prevent unmocked dependency issues
- Move Smart Model Routing from Usage to Under the Hood (site + README)
- Merge Why I Built This and Why This Approach Works into single section
- Fix install command box wrapping on mobile
- Fix NavBar tablet responsiveness (desktop nav at lg: breakpoint)
I'm Max, a senior IT freelancer from Germany. My clients hire me to ship production-quality code — tested, typed, formatted, and reviewed. When something goes into production under my name, quality isn't optional.
Claude Code writes code fast. But without structure, it skips tests, loses context, and produces inconsistent results — especially on complex, established codebases where there are real conventions to follow and real regressions to catch. I tried other frameworks. Most of them add complexity — dozens of agents, elaborate scaffolding, thousands of lines of instruction files — but the output doesn't get better. You just burn more tokens, wait longer, and deal with more things breaking.
So I built Pilot Shell. Instead of adding process on top, it bakes quality into every interaction. Linting, formatting, and type checking run as enforced hooks on every edit. TDD is mandatory, not suggested. Context is preserved across sessions. Every rule exists because I hit a real problem: a bug that slipped through, a regression that shouldn't have happened, a session where Claude cut corners and nobody caught it.
This isn't a vibe coding tool; it's true agentic engineering, made simple. You install it in any existing project, run `pilot`, then `/sync` so it learns your codebase. The guardrails are just there. The result: you can walk away — start a `/spec` task, approve the plan, go grab a coffee. When you come back, the work is tested, verified, formatted, and ready to ship.

---
| Writes code, skips tests | TDD enforced — RED, GREEN, REFACTOR on every feature |
| No quality checks | Hooks auto-lint, format, type-check on every file edit |
| Context degrades mid-task | Hooks preserve and restore state across compaction cycles |
| Every session starts fresh | Persistent memory across sessions via Pilot Shell Console |
| Hope it works | Verifier sub-agents perform code review before marking complete |
| No codebase knowledge | Production-tested rules loaded into every session |
| Generic suggestions | Coding standards activated conditionally by file type |
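The "auto-lint, format, type-check on every file edit" row could be realized as a post-edit hook along these lines — a minimal sketch, not Pilot Shell's actual implementation; the tool commands (`ruff`, `mypy`) and function names are illustrative assumptions:

```python
import subprocess
import sys

# Illustrative checker set; a real hook would read these from project config.
DEFAULT_CHECKS = [
    ("lint", ["ruff", "check"]),
    ("format", ["ruff", "format", "--check"]),
    ("types", ["mypy"]),
]

def run_quality_hooks(path, checks=DEFAULT_CHECKS):
    """Run each checker against the edited file; return names of failed checks."""
    failures = []
    for name, cmd in checks:
        result = subprocess.run([*cmd, path], capture_output=True)
        if result.returncode != 0:
            failures.append(name)
    return failures
```

A blocking hook would then refuse to mark the edit done until `run_quality_hooks` returns an empty list.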
---
### Quick Mode
Just chat. No plan file, no approval gate. All quality hooks and TDD enforcement still apply. Best for small tasks, exploration, and quick questions.
|`session_end.py`| Blocking | Stops the worker daemon when no other Pilot Shell sessions are active. Sends real-time dashboard notification. |
### Context Preservation
Pilot Shell preserves context automatically across compaction boundaries:
**Effective context display:** Claude Code reserves ~16.5% of the context window as a compaction buffer, triggering auto-compaction at ~83.5% raw usage. Pilot Shell rescales this to an **effective 0–100% range** so the status bar fills naturally to 100% right before compaction fires. A `▓` buffer indicator at the end of the bar shows the reserved zone. The context monitor warns at ~80% effective (informational) and ~90%+ effective (caution) — no confusing raw percentages.
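The rescaling described above is simple arithmetic — effective usage is raw usage divided by the ~83.5% compaction threshold. A sketch, with the thresholds taken from the text but function names purely illustrative:

```python
# Claude Code auto-compacts at ~83.5% raw usage; the remaining ~16.5% is the
# reserved compaction buffer. Rescale so the bar reads 0-100% of usable space.
COMPACTION_THRESHOLD = 0.835

def effective_context_pct(raw_fraction: float) -> float:
    """Map raw context usage (0.0-1.0) onto the effective 0-100% scale."""
    return min(raw_fraction / COMPACTION_THRESHOLD, 1.0) * 100

def context_warning(raw_fraction: float):
    """Warn at ~80% effective (informational) and ~90%+ effective (caution)."""
    eff = effective_context_pct(raw_fraction)
    if eff >= 90:
        return "caution"
    if eff >= 80:
        return "informational"
    return None
```

For example, 70% raw usage already reads as roughly 84% effective, which is why the status bar fills to 100% right as compaction fires.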
### Smart Model Routing
Pilot Shell uses the right model for each phase — Opus where reasoning quality matters most, Sonnet where speed and cost matter:

| Phase | Model | Rationale |
| --- | --- | --- |
|**Planning**| Opus | Exploring your codebase, designing architecture, and writing the spec requires deep reasoning. A good plan is the foundation of everything. |
|**Plan Verification**| Opus | Catching gaps, missing edge cases, and requirement mismatches before implementation saves expensive rework. |
|**Implementation**| Sonnet | With a solid plan, writing code is straightforward. Sonnet is fast, cost-effective, and produces high-quality code when guided by a clear spec. |
|**Code Verification**| Opus | Independent code review against the plan requires the same reasoning depth as planning — catching subtle bugs, logic errors, and spec deviations. |
**The insight:** Implementation is the easy part when the plan is good and verification is thorough. Pilot Shell invests reasoning power where it has the highest impact — planning and verification — and uses fast execution where a clear spec makes quality predictable.
**Configurable:** All model assignments are configurable per-component via the Pilot Shell Console settings. Choose between Sonnet 4.6 and Opus 4.6 for the main session, each command, and sub-agents. A global "Extended Context (1M)" toggle enables the 1M token context window across all models simultaneously. **Note:** 1M context models require a Max (20x) or Enterprise subscription — not available to all users.
### Built-in Rules & Standards
Production-tested best practices loaded into **every session**. These aren't suggestions — they're enforced standards. Coding standards activate conditionally by file type.
Details and licensing at [pilot-shell.com](https://pilot-shell.com).
Pilot Shell makes external calls **only for licensing**. Here is the complete list:
That's it — three calls total, each sent at most once (validation re-checks daily). No OS, no architecture, no Python version, no locale, no analytics, no heartbeats. The validation result is cached locally, and Pilot Shell works fully offline for up to 7 days between checks. Beyond these licensing calls, the only external communication is between Claude Code and Anthropic's API — using your own subscription or API key.