A Rust coding-agent harness with a full-screen TUI, headless CLI, JSON-RPC serve mode, reusable worker threads, and structured traces.
| Crate | Description |
|---|---|
| `ai` | provider adapters, model catalog, auth, streaming primitives |
| `agent` | generic agent runtime, context compaction, stats, orchestration |
| `coding-agent` | `tau` binary: TUI/CLI/serve modes, tools, permissions, sessions, traces |

`ai` and `agent` stay generic; `coding-agent` is the built-in coding harness on top.
- Full-screen TUI with chat, sidebar, slash commands, thread inspection, and inline edit/create diffs
- Headless `--prompt` mode for scripting and benchmarks
- `serve` mode for JSON-RPC orchestration over stdio
- Built-in tool suite for file edits, shell, search, web access, planning, and orchestration
- Reusable in-process threads, episodes, shared documents, and a persistent Python REPL tool
- Tool permissions (`allow`/`deny`/`ask`) plus `--yolo` auto-approve mode
- Session persistence, resume support, and always-on structured trace capture
- Skill discovery from project-local and user-global `.tau/skills/` directories
| Category | Tools |
|---|---|
| Filesystem + shell | `bash`, `file_read`, `file_edit`, `file_write`, `glob`, `grep` |
| Web | `web_fetch`, `web_search` |
| Planning + delegation | `subagent`, `todo` |
| Orchestration | `thread`, `query`, `document`, `log`, `from_id`, `py_repl` |
`thread`, `query`, `document`, `log`, `from_id`, and `py_repl` are backed by shared in-process orchestration state, so threads can reuse prior episodes and coordinate through virtual documents.
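As a loose illustration of that flow (the tool-call schemas below are hypothetical; the field names and the `thread-17` id are invented for this sketch, not taken from tau), an agent might spawn a thread and later query it by id:

```json
{"tool": "thread", "input": {"prompt": "Audit error handling in src/"}}
{"tool": "query", "input": {"from_id": "thread-17", "question": "Which files were flagged?"}}
```

The second call reuses the first thread's episode rather than starting from a cold context.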
Requires a Rust toolchain (1.75+).
# Clone and install
git clone https://github.com/tnguyen21/tau.git
cd tau
cargo install --path coding-agent
# Or install directly from GitHub without cloning
cargo install --git https://github.com/tnguyen21/tau.git coding-agent

This puts `tau` on your `$PATH`.
Tagged releases publish a static x86_64-unknown-linux-musl binary:
curl -fsSL \
https://github.com/tnguyen21/tau/releases/latest/download/tau-x86_64-unknown-linux-musl \
-o /usr/local/bin/tau
chmod +x /usr/local/bin/tau

Override the release source with `TAU_BINARY_VERSION`, `TAU_BINARY_REPO`, or `TAU_BINARY_URL` when needed. See Release and container install.
# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
# OpenAI-family models
export OPENAI_API_KEY=sk-...
# or use Codex OAuth
codex login
# Interactive TUI
tau
# Choose a model
tau --model claude-sonnet-4-6
# One-shot / headless mode
tau --prompt "Summarize this repo"
# Restrict tools
tau --tools file_read,grep,glob
# List models
tau models --provider anthropic
# Run as a JSON-RPC backend
tau serve --cwd .

Tau supports:
- Anthropic Messages API
- OpenAI Responses API
- OpenAI-compatible chat backends, including OpenRouter and other compatible providers in the built-in model catalog
Auth comes from provider-specific environment variables such as `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `OPENROUTER_API_KEY`, `GROQ_API_KEY`, `TOGETHER_API_KEY`, and `DEEPSEEK_API_KEY`.
For OpenAI-family models, tau can also fall back to Codex OAuth from `~/.codex/auth.json` when `OPENAI_API_KEY` is not set.
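In serve mode, tau exchanges JSON-RPC messages over stdio. As a purely illustrative sketch (the `prompt` method and its parameter names are assumptions, not taken from this README; the real surface lives in `coding-agent/src/rpc/`), a client might write a request like:

```json
{"jsonrpc": "2.0", "id": 1, "method": "prompt", "params": {"text": "Summarize this repo"}}
```

The `jsonrpc`, `id`, `method`, and `params` envelope fields are standard JSON-RPC 2.0; only tau's method names and framing are server-specific.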
cargo test
cargo bench

- `cargo test` is offline by default; live provider tests require explicit opt-in
- Criterion benches cover core runtime pieces such as SSE parsing, serde, and agent construction
- Broader harness evals and microbenchmarks live under `benchmarks/` (see Benchmarking strategy)
The Python benchmark suite targets Python 3.12+ and uv.
Tau reads global config from `~/.tau/config.toml`:
model = "claude-sonnet-4-6"
thinking = "medium"
tools = ["file_read", "file_edit", "file_write", "glob", "grep", "bash"]
skills = true
[permissions]
bash = "ask"
file_edit = "ask"
file_write = "ask"
web_search = "allow"
[models]
search = "claude-haiku-4-5"
subagent = "claude-haiku-4-5"
reasoning = "claude-opus-4-6"

Model slots let orchestration tools route work to different models for cheap search, deeper reasoning, or subagents.
Tau auto-discovers skills from:
- project-local `.tau/skills/` directories (walking up from the current directory toward the git root)
- user-global `~/.tau/skills/`
In the TUI, invoke skills with `/skill:<name>`. You can also load explicit skill files with repeated `--skill PATH` flags.
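A minimal sketch of setting up a user-global skill. Only the `~/.tau/skills/` path, the `--skill` flag, and the `/skill:<name>` syntax come from this README; the `review.md` filename, the plain-Markdown format, and the skill's contents are assumptions for illustration:

```shell
# Create a user-global skill (filename and format are assumptions)
mkdir -p "$HOME/.tau/skills"
cat > "$HOME/.tau/skills/review.md" <<'EOF'
Review the staged diff and flag risky changes before suggesting a commit.
EOF

# Load it explicitly in a headless run:
#   tau --prompt "review my changes" --skill ~/.tau/skills/review.md
# Or invoke it from the TUI: /skill:review
```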
- Interactive TUI runs create sessions by default in `~/.tau/sessions/`
- Resume with `--resume` or `--session <id>`
- Use `--no-session` for ephemeral runs
- Traces are written to `~/.tau/traces/<session_id>/` by default as `run.json` and `trace.jsonl`
- Override trace output with `--trace-output <dir>`
For trace inspection, see `tools/tau-trace` and Trace analysis.
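Because `trace.jsonl` is one JSON event per line, ordinary line tools work on it without a dedicated viewer. A small sketch against a mocked trace (the `event` and `tool` field names are placeholders, not tau's actual schema):

```shell
# Mock two trace events in a temp file (field names are placeholders)
TRACE=$(mktemp)
printf '%s\n' \
  '{"event":"tool_call","tool":"bash"}' \
  '{"event":"tool_call","tool":"grep"}' > "$TRACE"

# JSONL means plain grep/wc can answer quick questions:
grep -c '"tool_call"' "$TRACE"   # prints 2
```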
ai/
src/
providers/ # Anthropic, OpenAI Responses, OpenAI-compatible chat
catalog.rs # built-in model catalog
models.rs # model registry helpers
codex_auth.rs # Codex OAuth / ChatGPT backend auth
stream.rs, types.rs # streaming + shared types
agent/
src/
agent.rs, loop_.rs # core agent runtime
context.rs # mechanical context compaction
orchestrator.rs # shared thread/document state
thread.rs # thread + episode types
stats.rs # runtime statistics
coding-agent/
src/
main.rs # TUI + headless CLI entrypoint
serve.rs # JSON-RPC stdio server
cli.rs, config.rs # CLI parsing + config loading
permissions.rs # allow/deny/ask tool policy layer
session.rs # JSONL session persistence
trace.rs # run.json + trace.jsonl output
skills.rs # slash-command skill loading
rpc/ # serve-mode transport + handlers
tools/ # built-in tools and orchestration tools
tui/ # panes, chat UI, sidebar, thread modal
tools/
tau-trace/ # TUI viewer for tau trace files
benchmarks/ # eval adapters, microbenchmarks, fixtures
- Architecture overview
- Benchmarking strategy
- Orchestration design
- Context management
- Trace analysis
- Release and container install
- Feature comparison
- Benchmarks landscape
- Harness lit review
Tau is still a hackable research harness. Near-term work is centered on:
- Daily-driver polish for the TUI, permissions UX, skills, and orchestration workflows
- Better trace observability and benchmark coverage
- RL trajectory generation and harness-aware post-training experiments
MIT