RustCoder

Autonomous Rust coding agent powered by local LLMs via rig-core. Generates multi-module Rust projects from natural language descriptions, fixes compilation errors with tooled LLM agents, and maintains a persistent local documentation database via LanceDB embeddings.

Quick Start

Prerequisites

  1. Rust 1.75+ (rustup update stable)

  2. LM Studio running at http://localhost:1234 with two models loaded:

    • A code model (e.g., gpt-oss-20b-mlx, qwen2.5-coder:7b)
    • An embedding model (e.g., text-embedding-nomic-embed-text-v2-moe)

    Or Ollama running at http://localhost:11434 with a model pulled.

  3. Build RustCoder:

    cd rustyralph
    cargo build --release

    The binary is at target/release/rustcoder.

Generate a project (scaffold)

rustcoder --lm-studio scaffold "A terminal spreadsheet with ratatui, formula support, and CSV import"

This will:

  1. Decompose the description into modules
  2. Generate stubs with public API contracts
  3. Have the LLM implement each module
  4. Infer Cargo.toml from the generated source
  5. Run an iterative fix loop until it compiles

Output goes to ~/Desktop/rustcoder_experiments/scaffold_<timestamp>/.

Fix a broken project

cd /path/to/broken/project
rustcoder --lm-studio fix

The LLM agent reads compiler errors one at a time (highest priority first), edits files, and re-checks until the build is clean or the maximum iteration count is reached.

Other commands

# Implement a feature in an existing project
rustcoder --lm-studio implement "add retry logic to the http client"

# Run tests until they pass
rustcoder --lm-studio test

# Ask a question about the codebase
rustcoder --lm-studio ask "how does the formula parser work?"

# Search indexed crate docs (uses embeddings)
rustcoder --lm-studio docs-search ratatui "table widget rendering"

# Fast semantic analysis (no cargo check, uses rust-analyzer libs)
rustcoder analyze
rustcoder analyze --file src/main.rs

# Check that the LLM is reachable
rustcoder --lm-studio ping

# See what context the LLM gets for this project
rustcoder context

Global Options

| Flag | Default | Description |
|------|---------|-------------|
| `--lm-studio` | off (uses Ollama) | Use LM Studio instead of Ollama |
| `--lm-studio-url` | `http://localhost:1234/v1` | LM Studio endpoint |
| `--model` | `qwen2.5-coder:7b` | Model name |
| `--context-window` | `8192` | Model context window in tokens |
| `--max-iterations` | `5` | Max fix/test loop iterations |
| `--embed-model` | `text-embedding-nomic-embed-text-v2-moe` | Embedding model for docs RAG |
| `-p, --project` | current directory | Project directory to operate on |
| `-v, --verbose` | off | Show full prompts and LLM responses |

LLM Provider Setup

LM Studio (recommended)

  1. Download LM Studio
  2. Load a code model (e.g., mlx-community/gpt-oss-20b-mlx)
  3. Load an embedding model (e.g., nomic-ai/nomic-embed-text-v2-moe)
  4. Start the server (default port 1234)
  5. Always pass --lm-studio to rustcoder

Ollama

ollama serve
ollama pull qwen2.5-coder:7b
rustcoder --model qwen2.5-coder:7b fix

Note: Ollama does not serve embeddings on the same endpoint, so docs RAG features will still try to reach LM Studio at localhost:1234 for embeddings.

How Scaffold Works

  1. Phase 1: Decompose - LLM breaks the description into modules with dependency graph and public API stubs
  2. Phase 2: Structure - Creates project dir, src/main.rs with module stubs, WORKING_MEMORY.md
  3. Phase 3: Implement - LLM implements each module in dependency order. Each module gets: its API contract, dependency APIs, working memory notes from previously implemented modules, and contextual Rust gotcha hints
  4. Phase 4: Cargo.toml - Deterministic: scans generated source for use statements, resolves crate versions from crates.io, writes Cargo.toml
  5. Phase 4.5: Pre-index docs - Downloads and indexes all dependency crates into local docs RAG (~/.rustcoder/docs_rag/)
  6. Phase 5: Fix loop - cargo check → extract highest-priority error → extract surrounding function via tree-sitter → LLM fixes with tools (file edit, read, cargo check, docs search) → repeat
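Phase 4's dependency inference could look roughly like this. This is a minimal sketch under stated assumptions: `infer_crates` is a hypothetical name, the scan here is a naive line-based pass rather than a real parser, and the actual implementation also resolves versions from crates.io before writing Cargo.toml.

```rust
// Hypothetical sketch of Phase 4: scan generated source for top-level
// `use` statements and collect the external crate roots they reference.
use std::collections::BTreeSet;

fn infer_crates(source: &str) -> BTreeSet<String> {
    // Path roots that never correspond to a crates.io dependency.
    const BUILTIN: &[&str] = &["std", "core", "alloc", "crate", "self", "super"];
    source
        .lines()
        .filter_map(|line| line.trim().strip_prefix("use "))
        // The root segment is everything before the first `::`, `;`, or space.
        .filter_map(|rest| rest.split([':', ';', ' ']).next())
        .map(str::trim)
        .filter(|root| !root.is_empty() && !BUILTIN.contains(root))
        .map(str::to_string)
        .collect()
}

fn main() {
    let src = "use std::io;\nuse ratatui::widgets::Table;\nuse serde_json::Value;";
    let crates = infer_crates(src);
    // Only external crates survive; `std` is filtered out.
    assert_eq!(
        crates.into_iter().collect::<Vec<_>>(),
        vec!["ratatui".to_string(), "serde_json".to_string()]
    );
    println!("ok");
}
```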

How the Fix Loop Works

Errors are triaged by priority:

  1. Syntax errors (missing ;, unmatched braces)
  2. Import/module errors (unresolved import, module not found)
  3. Type errors (mismatched types, expected X found Y)
  4. Trait/impl errors (method not found, trait bound not satisfied)
  5. Borrow/lifetime errors (cannot borrow, lifetime mismatch)
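As a rough sketch of the bucketing above (assuming standard rustc error codes; `error_priority` is a hypothetical name, the code lists are examples rather than an exhaustive classification, and syntax errors typically surface without an E-code and would be detected from message text instead):

```rust
// Illustrative mapping from rustc error codes to the triage buckets above.
fn error_priority(code: &str) -> u8 {
    match code {
        "E0432" | "E0433" | "E0583" => 2, // unresolved import / undeclared crate / missing module file
        "E0308" => 3,                     // mismatched types
        "E0599" | "E0277" => 4,           // no method found / trait bound not satisfied
        "E0502" | "E0505" | "E0106" => 5, // borrow conflicts / missing lifetime
        _ => 6,                           // everything else after the known buckets
    }
}

fn main() {
    // Lower number = fixed first: imports before types before borrows.
    assert!(error_priority("E0432") < error_priority("E0308"));
    assert!(error_priority("E0308") < error_priority("E0502"));
    println!("ok");
}
```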

Each iteration targets the highest-priority error. After the LLM applies a fix, cargo check runs again to see if that specific error was resolved. Output looks like:

  Targeting error [1/12]: E0432 at src/main.rs:5 (priority 2: import/module)
  ...
  ✅ E0432 resolved (11 errors remaining)
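The per-iteration strategy can be mocked end to end. Everything here is illustrative: `CompilerError` and the hard-coded error list stand in for parsed `cargo check` output, and the real loop invokes the LLM agent with tools instead of the stubbed "fix" below.

```rust
// Minimal runnable sketch of the fix-loop iteration strategy, with mocked
// compiler errors; these types are illustrative, not RustCoder's actual ones.
#[derive(Debug, Clone)]
struct CompilerError {
    code: &'static str,
    priority: u8, // 1 = syntax … 5 = borrow/lifetime, per the triage list
}

fn main() {
    let mut errors = vec![
        CompilerError { code: "E0308", priority: 3 },
        CompilerError { code: "E0432", priority: 2 },
        CompilerError { code: "E0502", priority: 5 },
    ];
    let max_iterations = 5;
    for iteration in 0..max_iterations {
        if errors.is_empty() {
            println!("clean after {iteration} iterations");
            break;
        }
        // Target the highest-priority (lowest number) error first.
        errors.sort_by_key(|e| e.priority);
        let target = errors.remove(0); // mock "fix": assume the agent resolved it
        println!(
            "Targeting {} (priority {}), {} errors remaining",
            target.code, target.priority, errors.len()
        );
    }
}
```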

Tool Call Visibility

All LLM tool calls are logged so you can see what the agent is doing:

    🔧 read_file(src/main.rs)
    🔧 patch_file(src/main.rs)
    🔧 cargo_check()
    🔍 search_crate_docs(ratatui, "Widget trait render")
    🔍 → 3 results

Project Structure

src/
  main.rs              CLI (clap), command dispatch, provider setup
  rig_agent.rs         Agent struct, LLM orchestration, scaffold/fix/implement
  rig_tools.rs         Tool impls: FileEditor, PatchFile, ReadFile, CargoCheck, CodeSearch, CratesIo
  docs_rag.rs          DocsRag + SearchDocsTool: LanceDB crate docs with embeddings
  scaffold.rs          Module decomposition, stub gen, Cargo.toml inference, working memory
  tree_sitter_extract.rs  Function extraction and error priority via tree-sitter
  analyzer.rs          ra_ap_* semantic analysis wrapper
  context.rs           Project file enumeration
  pipeline.rs          Multi-phase implementation workflow (alternative to scaffold)
  gotchas.rs           Rust gotchas knowledge base for LLM guidance
  experiment_db.rs     SQLite logging of prompts/responses for debugging

Persistent State

| Path | Purpose |
|------|---------|
| `~/.rustcoder/docs_rag/` | LanceDB vector store for crate documentation |
| `~/Desktop/rustcoder_experiments/` | Default output for scaffold/pipeline projects |
| `WORKING_MEMORY.md` (in generated projects) | Inter-module notes from the implementation phase |

Key Dependencies

| Component | Crate | Purpose |
|-----------|-------|---------|
| LLM orchestration | `rig-core` 0.29 | Agent framework with typed tools |
| Semantic analysis | `ra_ap_*` 0.0.317 | rust-analyzer libraries |
| Vector storage | `lancedb` 0.23 | Persistent docs RAG |
| Embeddings | `arrow` 56 | Arrow arrays for LanceDB |
| Code parsing | `tree-sitter` 0.24 | Function extraction from source |
| CLI | `clap` 4 | Argument parsing |
| HTTP | `reqwest` 0.12 | crates.io API, LM Studio embeddings |
| Experiment DB | `rusqlite` 0.32 | Prompt/response logging |
