Autonomous Rust coding agent powered by local LLMs via rig-core. Generates multi-module Rust projects from natural language descriptions, fixes compilation errors with tooled LLM agents, and maintains a persistent local documentation database via LanceDB embeddings.
- Rust 1.75+ (`rustup update stable`)
- LM Studio running at `http://localhost:1234` with two models loaded:
  - A code model (e.g., `gpt-oss-20b-mlx`, `qwen2.5-coder:7b`)
  - An embedding model (e.g., `text-embedding-nomic-embed-text-v2-moe`)
- Or Ollama running at `http://localhost:11434` with a model pulled.

Build RustCoder:

```sh
cd rustyralph
cargo build --release
```

The binary is at `target/release/rustcoder`.
```sh
rustcoder --lm-studio scaffold "A terminal spreadsheet with ratatui, formula support, and CSV import"
```

This will:
- Decompose the description into modules
- Generate stubs with public API contracts
- Have the LLM implement each module
- Infer Cargo.toml from the generated source
- Run an iterative fix loop until it compiles
Output goes to `~/Desktop/rustcoder_experiments/scaffold_<timestamp>/`.
```sh
cd /path/to/broken/project
rustcoder --lm-studio fix
```

The LLM agent reads compiler errors one at a time (highest priority first), edits files, and re-checks until the build is clean or the max iteration count is hit.
```sh
# Implement a feature in an existing project
rustcoder --lm-studio implement "add retry logic to the http client"

# Run tests until they pass
rustcoder --lm-studio test

# Ask a question about the codebase
rustcoder --lm-studio ask "how does the formula parser work?"

# Search indexed crate docs (uses embeddings)
rustcoder --lm-studio docs-search ratatui "table widget rendering"

# Fast semantic analysis (no cargo check, uses rust-analyzer libs)
rustcoder analyze
rustcoder analyze --file src/main.rs

# Check that the LLM is reachable
rustcoder --lm-studio ping

# See what context the LLM gets for this project
rustcoder context
```

| Flag | Default | Description |
|---|---|---|
| `--lm-studio` | off (uses Ollama) | Use LM Studio instead of Ollama |
| `--lm-studio-url` | `http://localhost:1234/v1` | LM Studio endpoint |
| `--model` | `qwen2.5-coder:7b` | Model name |
| `--context-window` | 8192 | Model context window in tokens |
| `--max-iterations` | 5 | Max fix/test loop iterations |
| `--embed-model` | `text-embedding-nomic-embed-text-v2-moe` | Embedding model for docs RAG |
| `-p, --project` | current directory | Project directory to operate on |
| `-v, --verbose` | off | Show full prompts and LLM responses |
- Download LM Studio
- Load a code model (e.g., `mlx-community/gpt-oss-20b-mlx`)
- Load an embedding model (e.g., `nomic-ai/nomic-embed-text-v2-moe`)
- Start the server (default port 1234)
- Always pass `--lm-studio` to rustcoder
```sh
ollama serve
ollama pull qwen2.5-coder:7b
rustcoder --model qwen2.5-coder:7b fix
```

Note: Ollama does not serve embeddings on the same endpoint, so docs RAG features will still try to reach LM Studio at `localhost:1234` for embeddings.
- Phase 1: Decompose - LLM breaks the description into modules with a dependency graph and public API stubs
- Phase 2: Structure - Creates the project directory, `src/main.rs` with module stubs, and `WORKING_MEMORY.md`
- Phase 3: Implement - LLM implements each module in dependency order. Each module gets: its API contract, dependency APIs, working memory notes from previously implemented modules, and contextual Rust gotcha hints
- Phase 4: Cargo.toml - Deterministic: scans the generated source for `use` statements, resolves crate versions from crates.io, writes `Cargo.toml`
- Phase 4.5: Pre-index docs - Downloads and indexes all dependency crates into the local docs RAG (`~/.rustcoder/docs_rag/`)
- Phase 5: Fix loop - `cargo check` → extract highest-priority error → extract surrounding function via tree-sitter → LLM fixes with tools (file edit, read, cargo check, docs search) → repeat
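The Phase 4 scan boils down to collecting the root path segment of each external `use` statement. Here is a minimal, pure-std sketch of that idea — a hypothetical illustration, not RustCoder's actual implementation (the real inference also handles renames, `extern crate`, and macro paths):

```rust
use std::collections::BTreeSet;

/// Toy version of the Phase 4 scan: collect the first path segment of each
/// `use` statement, skipping std and crate-local roots. Sketch only; not
/// RustCoder's actual code.
fn external_crates(source: &str) -> BTreeSet<String> {
    let local = ["std", "core", "alloc", "crate", "self", "super"];
    source
        .lines()
        .filter_map(|line| line.trim().strip_prefix("use "))
        .filter_map(|rest| {
            // Root segment ends at `::`, `;`, or whitespace.
            let root = rest
                .split(|c: char| c == ':' || c == ';' || c.is_whitespace())
                .next()?
                .to_string();
            (!root.is_empty() && !local.contains(&root.as_str())).then_some(root)
        })
        .collect()
}

fn main() {
    let src = "use clap::Parser;\nuse std::fs;\nuse ratatui::widgets::Table;\n";
    let crates: Vec<String> = external_crates(src).into_iter().collect();
    assert_eq!(crates, vec!["clap".to_string(), "ratatui".to_string()]);
    println!("ok");
}
```

Each collected name would then be looked up on crates.io to pick a version, which is what makes the phase deterministic rather than LLM-driven.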
Errors are triaged by priority:
1. Syntax errors (missing `;`, unmatched braces)
2. Import/module errors (`unresolved import`, `module not found`)
3. Type errors (`mismatched types`, `expected X found Y`)
4. Trait/impl errors (`method not found`, `trait bound not satisfied`)
5. Borrow/lifetime errors (`cannot borrow`, `lifetime mismatch`)
Each iteration targets the highest-priority error. After the LLM applies a fix, `cargo check` runs again to see if that specific error was resolved. Output looks like:
```
Targeting error [1/12]: E0432 at src/main.rs:5 (priority 2: import/module)
...
✅ E0432 resolved (11 errors remaining)
```
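The loop above can be simulated in a few lines. This is a skeletal sketch with hypothetical names (`Diag`, `fix_loop`, and the `fix` closure stand in for the real `cargo check` parse and LLM tool calls):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Diag {
    code: String,
    priority: u8,
}

/// Simulate the fix loop: each iteration sorts the diagnostics, targets the
/// single highest-priority one, asks `fix` to resolve it, then "re-checks"
/// by dropping it from the list. Sketch only; the real loop shells out to
/// `cargo check` and an LLM agent with file-editing tools.
fn fix_loop(mut diags: Vec<Diag>, max_iterations: usize, fix: impl Fn(&Diag) -> bool) -> Vec<Diag> {
    for _ in 0..max_iterations {
        diags.sort_by_key(|d| d.priority);
        let Some(target) = diags.first().cloned() else { break };
        if fix(&target) {
            diags.retain(|d| d != &target);
        }
    }
    diags // whatever is left when clean or out of iterations
}

fn main() {
    let diags = vec![
        Diag { code: "E0502".into(), priority: 5 },
        Diag { code: "E0432".into(), priority: 2 },
    ];
    // With an always-successful fixer, two iterations clear both errors.
    let remaining = fix_loop(diags, 5, |_| true);
    assert!(remaining.is_empty());
    println!("ok");
}
```

The `max_iterations` bound corresponds to the `--max-iterations` flag: if the fixer keeps failing on the same error, the loop still terminates.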
All LLM tool calls are logged so you can see what the agent is doing:
```
🔧 read_file(src/main.rs)
🔧 patch_file(src/main.rs)
🔧 cargo_check()
🔍 search_crate_docs(ratatui, "Widget trait render")
🔍 → 3 results
```
All code lives in `src/`:

| File | Purpose |
|---|---|
| `main.rs` | CLI (clap), command dispatch, provider setup |
| `rig_agent.rs` | Agent struct, LLM orchestration, scaffold/fix/implement |
| `rig_tools.rs` | Tool impls: FileEditor, PatchFile, ReadFile, CargoCheck, CodeSearch, CratesIo |
| `docs_rag.rs` | DocsRag + SearchDocsTool: LanceDB crate docs with embeddings |
| `scaffold.rs` | Module decomposition, stub gen, Cargo.toml inference, working memory |
| `tree_sitter_extract.rs` | Function extraction and error priority via tree-sitter |
| `analyzer.rs` | `ra_ap_*` semantic analysis wrapper |
| `context.rs` | Project file enumeration |
| `pipeline.rs` | Multi-phase implementation workflow (alternative to scaffold) |
| `gotchas.rs` | Rust gotchas knowledge base for LLM guidance |
| `experiment_db.rs` | SQLite logging of prompts/responses for debugging |
| Path | Purpose |
|---|---|
| `~/.rustcoder/docs_rag/` | LanceDB vector store for crate documentation |
| `~/Desktop/rustcoder_experiments/` | Default output for scaffold/pipeline projects |
| `WORKING_MEMORY.md` (in generated projects) | Inter-module notes from implementation phase |
| Component | Crate | Purpose |
|---|---|---|
| LLM Orchestration | rig-core 0.29 | Agent framework with typed tools |
| Semantic Analysis | ra_ap_* 0.0.317 | rust-analyzer libraries |
| Vector Storage | lancedb 0.23 | Persistent docs RAG |
| Embeddings | arrow 56 | Arrow arrays for LanceDB |
| Code Parsing | tree-sitter 0.24 | Function extraction from source |
| CLI | clap 4 | Argument parsing |
| HTTP | reqwest 0.12 | crates.io API, LM Studio embeddings |
| Experiment DB | rusqlite 0.32 | Prompt/response logging |