High-performance CLI for managing AI coding sessions in tmux.
- Fast startup - Native Go binary, ~20x faster than Node.js
- Single binary - No runtime dependencies
- TUI - Interactive terminal UI built with Bubbletea
- Session management - Spawn, list, attach, kill sessions
- Redis integration - Real-time status via heartbeats and promises
cd packages/go
make build
./bin/coders --help
Alternatively, download a prebuilt binary from GitHub Releases.
coders tui
Interactive terminal UI for managing sessions:
- `↑↓`/`jk` - Navigate
- `Enter`/`a` - Attach to session
- `s` - Spawn new session
- `K` - Kill selected session
- `C` - Kill all completed sessions
- `R` - Resume completed session
- `r` - Refresh
- `q` - Quit (switches to orchestrator and kills TUI session if orchestrator exists)
coders spawn claude --task "Fix the login bug"
coders spawn claude --task "Add tests" --cwd ~/projects/myapp
coders spawn claude --model sonnet --attach # Spawn and attach immediately
Create isolated git worktrees for feature development:
coders spawn claude --worktree --task "Add new feature"
This creates:
- A new worktree in `.coders/worktrees/<session-name>`
- A new branch named `session/<session-name>`
- The session runs in the isolated worktree directory
Benefits:
- Work on features without affecting the main working directory
- Each session gets its own branch automatically
- Easy cleanup with `git worktree remove`
- Multiple sessions can work on different branches simultaneously
Run sessions using Ollama instead of Anthropic's API:
# Set environment variables
export CODERS_OLLAMA_BASE_URL="https://ollama.example.com"
export CODERS_OLLAMA_AUTH_TOKEN="your-token"
# Spawn with --ollama flag
coders spawn claude --ollama --model qwen3-coder:30b --task "Fix lint errors"
The --ollama flag maps CODERS_OLLAMA_* env vars to ANTHROPIC_* vars for that session only, so you can run Anthropic and Ollama sessions side by side.
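The per-session mapping can be pictured as a prefix rewrite over the session's environment. This is a sketch under the assumption that the mapping is a straight `CODERS_OLLAMA_*` → `ANTHROPIC_*` rename; `ollamaEnv` is a hypothetical helper, and the real implementation may handle more cases.

```go
package main

import (
	"fmt"
	"strings"
)

// ollamaEnv rewrites CODERS_OLLAMA_* variables to their ANTHROPIC_*
// equivalents for one session's environment, leaving every other
// variable untouched. The parent process's env is never modified.
func ollamaEnv(env []string) []string {
	const prefix = "CODERS_OLLAMA_"
	out := make([]string, 0, len(env))
	for _, kv := range env {
		if strings.HasPrefix(kv, prefix) {
			out = append(out, "ANTHROPIC_"+strings.TrimPrefix(kv, prefix))
			continue
		}
		out = append(out, kv)
	}
	return out
}

func main() {
	env := []string{
		"CODERS_OLLAMA_BASE_URL=https://ollama.example.com",
		"PATH=/usr/bin",
	}
	fmt.Println(ollamaEnv(env))
}
```

Because the rewrite happens on a copy of the environment handed to the spawned session, Anthropic-backed sessions started without `--ollama` keep their original `ANTHROPIC_*` values.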
coders list # Pretty print
coders list --json # JSON output
coders list --status active # Filter by status
coders version
# Build and run TUI
make run
# Build and run list
make list
# Run tests
make test
# Build for all platforms
make build-all
# Watch and rebuild on changes (requires watchexec)
make watch
packages/go/
├── cmd/coders/ # CLI entry points
│ ├── main.go # Root command
│ ├── tui.go # TUI subcommand
│ ├── list.go # List subcommand
│ └── version.go # Version subcommand
├── internal/
│ ├── tui/ # Bubbletea TUI implementation
│ │ ├── model.go # Main model and update logic
│ │ ├── views.go # View rendering
│ │ └── styles.go # Lipgloss styles
│ ├── tmux/ # Tmux integration
│ ├── redis/ # Redis integration
│ └── types/ # Shared types
├── Makefile
└── go.mod
The Go CLI is designed to work alongside the TypeScript Claude plugin:
┌─────────────────────┐ ┌─────────────────────┐
│ Go Binary │◄───│ Claude Code Plugin │
│ (coders) │ │ (TypeScript) │
│ │ │ │
│ • Fast CLI commands │ │ • /coders:spawn │
│ • TUI │ │ • /coders:promise │
│ • Background tasks │ │ │
└──────────┬──────────┘ └─────────────────────┘
│
▼
┌──────┴──────┐
│ tmux │ Redis
│ sessions │ (state)
└─────────────┘
The plugin calls the Go binary for operations, providing:
- Instant command response (Go startup: ~2ms)
- Single binary distribution
- Shared state via Redis
The loop runner automatically processes tasks from a todolist file, spawning a fresh coder session for each task.
# Create a todolist
cat > tasks.txt << 'EOF'
[ ] Add input validation to the API endpoints
[ ] Write unit tests for the auth module
[ ] Update README with API documentation
EOF
# Run the loop
coders loop --todolist tasks.txt --cwd ~/project
The loop:
- Reads uncompleted tasks (`[ ]` format)
- Spawns a coder session for each task
- Waits for the session to publish a completion promise
- Marks the task complete (`[x]`) in the file
- Moves to the next task
- Auto-switches from Claude to Codex if usage limits are hit
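The todolist bookkeeping in the steps above can be sketched as two small operations: find the first unchecked entry, then flip it once its promise arrives. `nextTask` and `markDone` are hypothetical names for illustration; the real loop runner's parser may accept more formats than the strict `[ ] ` prefix assumed here.

```go
package main

import (
	"fmt"
	"strings"
)

// nextTask returns the index and text of the first uncompleted "[ ]"
// entry, or ok=false when every task is done.
func nextTask(lines []string) (i int, task string, ok bool) {
	for i, line := range lines {
		if strings.HasPrefix(line, "[ ] ") {
			return i, strings.TrimPrefix(line, "[ ] "), true
		}
	}
	return -1, "", false
}

// markDone flips entry i from "[ ]" to "[x]" after the session's
// completion promise is received.
func markDone(lines []string, i int) {
	lines[i] = "[x] " + strings.TrimPrefix(lines[i], "[ ] ")
}

func main() {
	todo := []string{
		"[x] Add input validation to the API endpoints",
		"[ ] Write unit tests for the auth module",
	}
	if i, task, ok := nextTask(todo); ok {
		fmt.Println(task) // Write unit tests for the auth module
		markDone(todo, i)
	}
	fmt.Println(todo[1]) // [x] Write unit tests for the auth module
}
```

Writing the `[x]` back to the file after each task is what lets a crashed or interrupted loop resume from where it left off.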
The --wait flag enables recursive task decomposition. A coder can spawn sub-loops and wait for them to complete:
# From within a coder session working on a complex task:
coders loop --wait --todolist subtasks.txt --cwd .
# Blocks until all subtasks complete
# Then the parent coder continues its work
This creates task decomposition trees:
Orchestrator
└── Loop A (feature tasks)
├── Coder A1 (simple task) → completes
├── Coder A2 (complex task)
│ └── Loop A2 --wait (subtasks)
│ ├── Coder A2a → completes
│ └── Coder A2b → completes
│ # A2 continues after sub-loop completes
└── Coder A3 → completes
# Check loop status
coders loop-status
coders loop-status --loop-id loop-1234567890
# View log
tail -f /tmp/coders-loop-loop-1234567890.log
Ralph loops are a popular technique for iterative AI development: a bash `while` loop repeatedly feeds Claude the same prompt until the work is done. Coders loops take a fundamentally different approach that is better suited to complex work.
| Aspect | Ralph Loop | Coders Loop |
|---|---|---|
| Session model | Single session, same prompt repeated | Fresh session per task |
| Context | Accumulates over iterations, eventually hits limits | Clean context for each task |
| Task structure | One monolithic prompt | Multiple discrete tasks from todolist |
| Parallelization | Sequential only | Parallel support (planned) |
| Delegation | Cannot spawn sub-agents | Recursive loops with --wait |
| Tool switching | Manual | Auto-switches on rate limits |
| State | Files only | Redis + files, survives crashes |
| Visibility | None | TUI, dashboard, loop-status |
| Completion | String matching in output | Explicit promise system |
Ralph loops are good for:
- Single, well-defined tasks with clear completion criteria
- Tasks where context accumulation helps (iterative refinement)
- Simple "keep trying until it works" scenarios
Coders loops are better for:
- Multiple related tasks (feature implementation with tests, docs, etc.)
- Complex work requiring task decomposition
- Long-running work where context limits matter
- Work requiring different tools for different subtasks
- Team/orchestration scenarios with visibility needs
- Work that might hit API rate limits
The real power of coders loops comes from recursive decomposition. When a coder encounters a complex task, it can:
- Analyze the task and identify subtasks
- Write a subtask todolist
- Spawn a sub-loop with `--wait`
- Each subtask gets a fresh coder with full context
- Sub-loop completes, parent coder continues
- Parent coder integrates results and completes its own task
This is impossible with Ralph loops, which are limited to a single session repeatedly executing the same prompt. Coders loops enable true hierarchical task decomposition with clean context boundaries at each level.