Powerful extensions for the Pi coding agent that solve real development workflow problems.
See it in action:
```bash
cd demo
pi
/workflow ../examples/specs/user-api.md
```

Watch Pi build a complete FastAPI user management API with tests, automatically iterating through write → test → review → fix cycles with clean, unbiased code reviews.
The workflow that solves the real problem.
Stop manually prompting "now review", "now test", "now fix" at every step. Stop getting biased reviews because the LLM sees all the implementation details. Let Pi handle the entire development cycle automatically with clean, unbiased code reviews.
```
/workflow spec.md
```

What makes it special:
- Context compaction before review - LLM reviews with fresh eyes, no implementation bias
- Deterministic test validation - Parses actual exit codes, no guessing
- Automated iteration cycle - write → test → review → fix → verify (all automatic)
- State persistence - Handles long tasks, survives restarts
- Flexible input - Spec file, prompt, or editor
The problem it solves:
BEFORE:
You: Write this feature
LLM: [writes code]
You: Now review it (Manual)
LLM: [reviews but context polluted] (Biased - "I just wrote this!")
You: Fix these issues (Manual)
LLM: [fixes]
You: Run tests (Manual)
...endless manual orchestration
AFTER:
You: /workflow spec.md
LLM: [writes → tests → COMPACTS CONTEXT → reviews with clean eyes →
finds real issues → fixes → verifies → done!]
You: [just watched]
Make waiting for Pi more entertaining.
Replaces boring "Working..." with random hilarious messages like:
- "Consulting the void..."
- "Bribing the compiler..."
- "Teaching old code new tricks..."
- "Debugging the matrix..."
- ...and 27 more!
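For a sense of how little machinery this needs, here is a minimal sketch of a random status picker in TypeScript. The message list is abbreviated from the examples above, and the function name is illustrative, not the extension's actual API.

```typescript
// Illustrative sketch only - not the extension's real implementation.
// A handful of the messages listed above; the real extension ships 30.
const STATUSES: string[] = [
  "Consulting the void...",
  "Bribing the compiler...",
  "Teaching old code new tricks...",
  "Debugging the matrix...",
];

// Pick a random message each time the "working" indicator refreshes.
function randomStatus(): string {
  return STATUSES[Math.floor(Math.random() * STATUSES.length)];
}
```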
Zero configuration. Pure entertainment. 30 different messages.

```bash
# Just install - works automatically
git clone https://github.com/owainlewis/pi-extensions.git
cd pi-extensions
./install.sh
```

Or install a single extension:

```bash
./install.sh context-workflow
./install.sh funny-status
```

Or copy manually:

```bash
cp extensions/context-workflow/context-workflow.ts ~/.pi/agent/extensions/
cp extensions/funny-status/funny-status.ts ~/.pi/agent/extensions/
```

Then in Pi:

```
/reload
```
Most "workflow" extensions are just fancy prompts. Context-workflow actually solves real problems:
The key innovation:
```typescript
// After tests pass, BEFORE review:
ctx.compact({
  customInstructions:
    "Remove all implementation details. " +
    "Keep only: spec, file list, summary.",
});

// Now the LLM reviews with FRESH context
// No bias from "I just wrote this"
// Catches real issues
```

```typescript
// Not: "I think tests passed" (unreliable)
// Yes: parse the actual exit code (reliable)
workflow_test_result({ exitCode: 0 }) // Pass
workflow_test_result({ exitCode: 1 }) // Fail
```

No more manual prompting:
```
write → test → review → fix → test → verify
  ↓       ↓       ↓       ↓      ↓       ↓
 auto    auto    auto    auto   auto    auto
```
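The cycle above can be sketched as a small state machine. The stage names and branching below illustrate the idea, not the extension's actual code; `testPassed` and `issuesFound` mirror what `workflow_test_result` and `workflow_review_result` would report.

```typescript
// Sketch of the automated progression, assuming the five stages
// from the README's cycle (illustrative only).
type Stage = "write" | "test" | "review" | "fix" | "verify" | "done";

function nextStage(
  current: Stage,
  opts: { testPassed?: boolean; issuesFound?: boolean } = {}
): Stage {
  switch (current) {
    case "write":  return "test";                              // always test fresh code
    case "test":   return opts.testPassed ? "review" : "fix";  // deterministic: exit code decides
    case "review": return opts.issuesFound ? "fix" : "verify"; // clean-context review decides
    case "fix":    return "test";                              // every fix is re-tested
    case "verify": return "done";
    default:       return "done";
  }
}
```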
Tracks everything across long tasks:
- Current stage
- Iteration count
- Test status
- Review issues list
- Context state (compacted or full)
$ pi
> /workflow "Create a calculator with add/subtract/multiply/divide. Include tests."
Context-Isolated Workflow Started
[Pi automatically:]
Stage 1: Writing implementation (1/10)
- Creates calculator.py
- Creates test_calculator.py
→ Calls workflow_next
Stage 2: Running tests (2/10)
- Runs: pytest tests/
- Exit code: 1 (tests failed)
→ Calls workflow_test_result({ exitCode: 1 })
Stage 4: Fixing issues (3/10)
- Fixes the bug
→ Calls workflow_next
Stage 2: Re-testing (4/10)
- Runs: pytest tests/
- Exit code: 0 (tests passed)
→ Calls workflow_test_result({ exitCode: 0 })
[CONTEXT COMPACTION - Removes all implementation details]
Stage 3: Code review (clean context) (5/10)
- Reviews with fresh eyes
- Finds: missing docstrings, no div-by-zero check
→ Calls workflow_review_result({ issues: [...] })
Stage 4: Fixing issues (6/10)
- Adds docstrings
- Adds div-by-zero error handling
→ Calls workflow_next
Stage 2: Re-testing (7/10)
- Tests still pass
→ workflow_test_result({ exitCode: 0 })
[CONTEXT COMPACTION AGAIN]
Stage 3: Code review (clean context) (8/10)
- Reviews again
- No issues found
→ workflow_review_result({ issues: [] })
Stage 5: Final verification (9/10)
- Final test run
- Everything works
→ workflow_complete
Workflow Complete!
Iterations: 9
Tests: All passing
Review: No issues
You: [just watched it happen]

- Context-Workflow README - Complete documentation
- Funny Status README - Usage and customization
- TUTORIAL.md - Step-by-step walkthrough
- CONTRIBUTING.md - How to contribute
See TUTORIAL.md for a complete walkthrough:
- Installation (2 min)
- First workflow run (5 min)
- Understanding stages (5 min)
- Tips for best results
```
/workflow "Create a REST API for user management with:
- CRUD operations (create, read, update, delete)
- Input validation
- Error handling
- Comprehensive tests"
```

Create spec-auth.md:
```markdown
# Authentication System

## Requirements
- Password hashing with bcrypt
- JWT token generation
- Login/logout endpoints
- Session management
- Rate limiting

## Tests
- Valid/invalid credentials
- Token validation
- Session expiry
- Rate limit enforcement
```

Then:
```
/workflow spec-auth.md
```

Or write the spec in your editor:

```
/workflow
# Editor opens - write your spec
# Save and exit
# Watch it execute
```

Or try the included demo:

```bash
cd demo
/workflow ../examples/specs/user-api.md
```

Builds a complete FastAPI user management API with CRUD operations, validation, tests, and documentation.
- /workflow [spec] - Start workflow
- /workflow:status - Check progress
- /workflow:cancel - Cancel workflow
Watch the footer:
Writing implementation (1/10)
Running tests (2/10)
Code review (clean context) (5/10)
Improving based on review (6/10)
Final verification (9/10)
Complete
No! It has:
- State management (tracks stage, iteration, issues)
- Context compaction (removes bias before review)
- Deterministic validation (parses exit codes)
- Automated progression (no manual steps)
Before review, it compacts the context - removing all implementation details. The LLM only sees:
- Original spec
- List of files
- Brief summary
- The actual code to review
Not:
- Implementation conversation
- Debugging thoughts
- Decision-making process
- "I just wrote this" bias
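As an illustration, the compacted review context might be assembled from just those surviving pieces. The shape and function below are hypothetical, not the extension's actual API.

```typescript
// Hypothetical shape of what survives compaction (illustrative only).
interface CompactedContext {
  spec: string;    // original spec, kept verbatim
  files: string[]; // list of files produced
  summary: string; // brief implementation summary
  code: string;    // the actual code to review
}

// Everything else - conversation, debugging thoughts, decisions - is gone,
// so the reviewer cannot be biased by how the code came to be.
function buildReviewPrompt(ctx: CompactedContext): string {
  return [
    "Review this implementation against the spec. You did not write it.",
    "Spec:\n" + ctx.spec,
    "Files: " + ctx.files.join(", "),
    "Summary: " + ctx.summary,
    "Code:\n" + ctx.code,
  ].join("\n\n");
}
```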
Max 10 iterations. If the cap is reached, the workflow ends automatically and shows you where it got stuck.
Yes! Edit context-workflow.ts to add stages, change logic, or customize behavior.
Auto-detects:
- pytest (Python)
- npm test (JavaScript)
- cargo test (Rust)
- go test (Go)
- mvn test (Java)
- Any command that returns exit codes
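One way such auto-detection could work is mapping marker files in the project root to a test command, as in the sketch below. This is an assumption about the approach; the extension's real detection logic may differ.

```typescript
// Illustrative only: map marker files to a test command.
// More specific ecosystems are checked before generic ones.
function detectTestCommand(rootFiles: string[]): string | null {
  if (rootFiles.includes("Cargo.toml")) return "cargo test";
  if (rootFiles.includes("go.mod")) return "go test ./...";
  if (rootFiles.includes("pom.xml")) return "mvn test";
  if (rootFiles.includes("package.json")) return "npm test";
  if (rootFiles.includes("pytest.ini") || rootFiles.includes("pyproject.toml")) return "pytest";
  return null; // unknown project: fall back to any command that returns exit codes
}
```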
You write prompt
→ LLM writes code
→ You: "now review"
→ LLM reviews (biased - sees everything)
→ You: "fix these issues"
→ LLM fixes
→ You: "run tests"
→ LLM tests
→ You: "fix failures"
→ ...endless back and forth
Problems:
- Manual orchestration
- Polluted context (biased reviews)
- Easy to skip steps
- Time-consuming
- LLM gets lost in long tasks
You: /workflow spec.md
→ Pi does everything automatically
→ With clean reviews
→ Deterministic validation
→ State tracking
Benefits:
- Fully automated
- Clean, unbiased reviews
- Reliable progression
- Fast
- Handles long tasks
- Pi coding agent (Installation guide)
- Node.js (for Pi)
Contributions welcome! See CONTRIBUTING.md for guidelines.
Ideas for extensions:
- Git workflow automation
- Deployment pipelines
- Documentation generation
- Performance profiling
MIT License - see LICENSE for details
- Pi coding agent by @badlogic
- Built to solve real development workflow problems
- Community feedback welcome
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Pi Discord: Join
Made for developers tired of manual workflow orchestration