RustFox is a Telegram AI assistant written in Rust. It connects to Telegram as a bot, uses OpenRouter for LLM inference (default model: `qwen/qwen3-235b-a22b`), provides built-in sandboxed tools (file I/O, command execution), and supports MCP (Model Context Protocol) servers for extensible tool integration. It implements an agentic loop that iterates tool calls until a final text response is produced (max iterations configurable, default 25).
```bash
# Build (debug)
cargo build
# Build (release)
cargo build --release
# Run (uses ./config.toml by default)
cargo run
# Run with custom config path
cargo run -- /path/to/config.toml
# Check without building
cargo check
# Format code
cargo fmt
# Lint
cargo clippy
```

Copy `config.example.toml` to `config.toml` and fill in credentials. The `config.toml` file is gitignored and must never be committed. Required fields:
- `telegram.bot_token` - Telegram Bot API token
- `telegram.allowed_user_ids` - Whitelist of Telegram user IDs
- `openrouter.api_key` - OpenRouter API key
- `sandbox.allowed_directory` - Directory for sandboxed file/command operations
```
src/
├── main.rs      # Entry point: logging init, config loading, MCP setup, bot launch
├── config.rs    # TOML config parsing (Config, TelegramConfig, OpenRouterConfig, SandboxConfig, McpServerConfig)
├── llm.rs       # OpenRouter API client (ChatMessage, ToolCall, ToolDefinition, LlmClient)
├── tools.rs     # Built-in tool definitions and execution with sandbox path validation
├── mcp.rs       # MCP client manager (McpManager, McpConnection) for external tool servers
└── bot.rs       # Telegram bot handler: message routing, agentic loop, conversation state
```
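A minimal sketch of how these config types could look in `config.rs`, based on the required fields listed above; the exact struct layouts and any extra options are assumptions:

```rust
use serde::Deserialize;
use std::collections::HashMap;

#[derive(Debug, Deserialize)]
pub struct Config {
    pub telegram: TelegramConfig,
    pub openrouter: OpenRouterConfig,
    pub sandbox: SandboxConfig,
    #[serde(default)]
    pub mcp_servers: Vec<McpServerConfig>,
}

#[derive(Debug, Deserialize)]
pub struct TelegramConfig {
    pub bot_token: String,
    pub allowed_user_ids: Vec<u64>,
}

#[derive(Debug, Deserialize)]
pub struct OpenRouterConfig {
    pub api_key: String,
    // The model is optional in config.toml; a serde default fills it in.
    #[serde(default = "default_model")]
    pub model: String,
}

fn default_model() -> String {
    "qwen/qwen3-235b-a22b".to_string()
}

#[derive(Debug, Deserialize)]
pub struct SandboxConfig {
    pub allowed_directory: String,
}

#[derive(Debug, Deserialize)]
pub struct McpServerConfig {
    pub name: String,
    pub command: String,
    #[serde(default)]
    pub args: Vec<String>,
    pub env: Option<HashMap<String, String>>,
}
```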
- User sends a Telegram message
- `bot.rs` filters by `allowed_user_ids` and routes commands (`/start`, `/clear`, `/tools`)
- Non-command messages enter `process_with_llm()`, which runs the agentic loop (sketched below)
- `llm.rs` sends the conversation history plus tool definitions to OpenRouter
- If the LLM returns tool calls, `execute_tool()` dispatches to built-in tools or MCP tools
- Tool results are appended to the conversation and the loop repeats (up to the configured max iterations, default 25)
- The final text response is split into <=4000 char chunks and sent back via Telegram
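In outline, the loop looks roughly like this; every name and signature here is illustrative, not the literal code in `bot.rs`:

```rust
// Illustrative sketch of the agentic loop; the real implementation differs.
async fn process_with_llm(
    state: &AppState,
    conversation: &mut Vec<ChatMessage>,
) -> anyhow::Result<String> {
    for _ in 0..state.config.max_iterations {
        // Send history + tool definitions (built-in and MCP) to OpenRouter.
        let reply = state.llm.chat(conversation, &all_tool_definitions(state)).await?;
        match reply.tool_calls {
            Some(calls) if !calls.is_empty() => {
                for call in calls {
                    // Tool failures become strings fed back to the model, not crashes.
                    let result = execute_tool(state, &call)
                        .await
                        .unwrap_or_else(|e| format!("Error: {e}"));
                    conversation.push(ChatMessage::tool_result(&call.id, result));
                }
            }
            // No tool calls: the model produced its final text answer.
            _ => return Ok(reply.content.unwrap_or_default()),
        }
    }
    anyhow::bail!("reached max iterations without a final response")
}
```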
- AppState (`bot.rs`): Shared state holding `LlmClient`, `Config`, `McpManager`, and a per-user `Conversation` map behind a `Mutex` (sketched below)
- LlmClient (`llm.rs`): Stateless HTTP client for OpenRouter's `/chat/completions` endpoint with tool-calling support
- McpManager (`mcp.rs`): Manages stdio-based MCP server child processes. Tools are namespaced as `mcp_{server_name}_{tool_name}`
- Sandbox validation (`tools.rs`): All file/command operations are restricted to the configured sandbox directory via path canonicalization
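A sketch of that shared-state shape (field names are assumptions):

```rust
use crate::config::Config;
use crate::llm::LlmClient;
use crate::mcp::McpManager;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex;

// Shared across handlers via teloxide's dependency injection (dptree::deps!).
pub struct AppState {
    pub config: Config,
    pub llm: LlmClient,
    pub mcp: McpManager,
    // tokio Mutex (not std) because the lock is held across .await points.
    pub conversations: Mutex<HashMap<u64, Conversation>>,
}

pub type SharedState = Arc<AppState>;
```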
- Edition: 2021
- Async runtime: Tokio with `full` features
- Error handling: `anyhow::Result` throughout, with `.context()`/`.with_context()` for error messages
- Logging: `tracing` crate with `tracing-subscriber` (env filter: `RUST_LOG`, default `info,rustfox=debug`); see the sketch after this list
- Serialization: `serde` derive macros with `#[serde(skip_serializing_if = "Option::is_none")]` for optional fields
- Shared state: `Arc<AppState>` passed via teloxide's dependency injection (`dptree::deps!`)
- Concurrency: `tokio::sync::Mutex` for the per-user conversation map (not `std::sync::Mutex`)
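For instance, the logging init in `main.rs` plausibly looks like this (a sketch; assumes `tracing-subscriber` with the `env-filter` feature enabled):

```rust
use tracing_subscriber::EnvFilter;

fn init_logging() {
    // Honor RUST_LOG when set; otherwise fall back to the documented default.
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info,rustfox=debug"));
    tracing_subscriber::fmt().with_env_filter(filter).init();
}
```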
- Module names are single words (`bot`, `config`, `llm`, `mcp`, `tools`)
- Struct fields use `snake_case`
- JSON field renames use `#[serde(rename = "type")]` where the Rust field name differs from the API field (example below)
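Both serde conventions in one hypothetical struct (not the literal definition in `llm.rs`):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct ToolCallChunk {
    // The API field is "type", a Rust keyword, so the Rust field is renamed.
    #[serde(rename = "type")]
    pub kind: String,
    // Omitted from the serialized JSON entirely when None.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub arguments: Option<String>,
}
```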
- Use `anyhow::bail!()` for early returns with error messages
- Use `.context("message")` on `Result` chains for context propagation (example below)
- MCP connection failures are logged but do not abort startup (`connect_all` catches errors)
- Tool execution errors return error strings to the LLM rather than crashing
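The first two conventions in practice (an illustrative function, not code from the repo):

```rust
use anyhow::{bail, Context, Result};

fn load_config_text(path: &str) -> Result<String> {
    let raw = std::fs::read_to_string(path)
        .with_context(|| format!("failed to read config at {path}"))?;
    if raw.trim().is_empty() {
        bail!("config file {path} is empty");
    }
    Ok(raw)
}
```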
- All file and command operations go through `validate_sandbox_path()`, which canonicalizes both the sandbox root and the requested path, then verifies that the requested path starts with the sandbox root (sketched below)
- The bot only responds to user IDs in `allowed_user_ids`
- `config.toml` (containing secrets) is gitignored
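A minimal sketch of the check `validate_sandbox_path()` performs, per the description above; the real function's signature and error handling may differ:

```rust
use anyhow::{bail, Context, Result};
use std::path::{Path, PathBuf};

fn validate_sandbox_path(sandbox_root: &Path, requested: &Path) -> Result<PathBuf> {
    // Canonicalize both sides so `..` segments and symlinks cannot escape the root.
    let root = sandbox_root
        .canonicalize()
        .context("sandbox root is missing or inaccessible")?;
    // Note: canonicalize() requires the path to exist; writes to new files
    // would need extra handling in the real implementation.
    let resolved = requested
        .canonicalize()
        .context("requested path is missing or inaccessible")?;
    if !resolved.starts_with(&root) {
        bail!("path {} escapes the sandbox", resolved.display());
    }
    Ok(resolved)
}
```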
| Crate | Purpose |
|---|---|
| `tokio` | Async runtime |
| `teloxide` | Telegram bot framework |
| `reqwest` | HTTP client for OpenRouter API |
| `serde` / `serde_json` | Serialization |
| `toml` | Config file parsing |
| `rmcp` | Official MCP Rust SDK (stdio transport) |
| `tracing` / `tracing-subscriber` | Structured logging |
| `anyhow` | Error handling |
| `futures` | Async utilities |
CI runs on every push to `main` and on pull requests targeting `main`. The pipeline is defined in `.github/workflows/ci.yml` and runs five jobs (four in parallel, plus a release build gated on them):
| Job | Command | Purpose |
|---|---|---|
| Check | `cargo check` | Fast compilation check |
| Format | `cargo fmt --all -- --check` | Enforces consistent formatting |
| Clippy | `cargo clippy -- -D warnings` | Lint; all warnings are errors |
| Test | `cargo test` | Runs all unit and integration tests |
| Build | `cargo build --release` | Release build (runs after all other jobs pass) |
All jobs use `dtolnay/rust-toolchain@stable` and `Swatinem/rust-cache@v2` for caching. Before opening a PR, ensure `cargo fmt`, `cargo clippy -- -D warnings`, and `cargo test` pass locally.
No automated tests exist yet. When adding tests:
- Place unit tests in `#[cfg(test)] mod tests` blocks within each source file
- Put integration tests in a top-level `tests/` directory
- The sandbox path validation logic in `tools.rs` and the message splitting in `bot.rs` are good candidates for unit tests (see the sketch after this list)
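For example, first tests for the sandbox check could look like this (assumes the `validate_sandbox_path` sketch above and the `tempfile` crate as a dev-dependency):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_escape_via_parent_dir() {
        let sandbox = tempfile::tempdir().unwrap();
        // `..` canonicalizes to the sandbox's parent, which must be rejected.
        let escape = sandbox.path().join("..");
        assert!(validate_sandbox_path(sandbox.path(), &escape).is_err());
    }

    #[test]
    fn accepts_path_inside_sandbox() {
        let sandbox = tempfile::tempdir().unwrap();
        let inside = sandbox.path().join("file.txt");
        std::fs::write(&inside, "x").unwrap();
        assert!(validate_sandbox_path(sandbox.path(), &inside).is_ok());
    }
}
```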
- Add a `ToolDefinition` entry in `builtin_tool_definitions()` in `src/tools.rs` (see the sketch after this list)
- Add a match arm in `execute_builtin_tool()` in `src/tools.rs`
- Use `validate_sandbox_path()` if the tool accesses the filesystem
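A sketch of both steps for a hypothetical `count_lines` tool; the real `ToolDefinition` shape and dispatch signature may differ, and `validate_sandbox_path` is the sketch from the security section above:

```rust
use serde_json::{json, Value};
use std::path::Path;

// Hypothetical mirror of the ToolDefinition in src/llm.rs.
pub struct ToolDefinition {
    pub name: String,
    pub description: String,
    pub parameters: Value,
}

// Step 1: add an entry like this in builtin_tool_definitions().
pub fn count_lines_definition() -> ToolDefinition {
    ToolDefinition {
        name: "count_lines".into(),
        description: "Count the lines in a file inside the sandbox".into(),
        parameters: json!({
            "type": "object",
            "properties": { "path": { "type": "string" } },
            "required": ["path"]
        }),
    }
}

// Step 2: the body of the new match arm in execute_builtin_tool().
// Errors are returned as strings so the LLM can react instead of crashing the bot.
pub fn execute_count_lines(sandbox_root: &Path, args: &Value) -> String {
    let path = args["path"].as_str().unwrap_or_default();
    match validate_sandbox_path(sandbox_root, Path::new(path))
        .and_then(|p| std::fs::read_to_string(p).map_err(Into::into))
    {
        Ok(text) => text.lines().count().to_string(),
        Err(e) => format!("Error: {e}"),
    }
}
```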
- Add a new `if text == "/command"` block in `handle_message()` in `src/bot.rs`, before the LLM processing section (example below)
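For example, a hypothetical `/ping` command (the surrounding handler signature in `bot.rs` is assumed):

```rust
// Inside handle_message(), before the LLM processing section.
if text == "/ping" {
    bot.send_message(msg.chat.id, "pong").await?;
    return Ok(());
}
```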
Update `default_model()` in `src/config.rs`. Users can also override the model in their `config.toml`.
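The change is a one-line edit; for example, switching to the Claude model mentioned in the skills section below (a hypothetical choice):

```rust
// src/config.rs
fn default_model() -> String {
    "anthropic/claude-sonnet-4-6".to_string() // was "qwen/qwen3-235b-a22b"
}
```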
Add an `[[mcp_servers]]` block to `config.toml` with `name`, `command`, `args`, and optional `env` fields. See `config.example.toml` for examples.
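For instance, an entry for the `fetch` server mentioned below might look like this (the command, args, and env values are placeholders; copy real ones from `config.example.toml`):

```toml
[[mcp_servers]]
name = "fetch"
command = "uvx"
args = ["mcp-server-fetch"]

[mcp_servers.env]            # optional
EXAMPLE_TOKEN = "replace-me"
```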
Bot skills are natural-language instructions loaded at startup and injected into the LLM's system prompt. Each skill must be in its own folder following the Claude agent skills format:
```
skills/
  skill-name/
    SKILL.md              # Required: YAML frontmatter + instruction body
    supporting-file.*     # Optional: templates, examples, reference docs
```
`SKILL.md` frontmatter:

```yaml
---
name: skill-name                   # lowercase letters, numbers, hyphens only
description: Brief description of what this skill does
tags: [tag1, tag2]                 # optional: for organization
---
```

- Create `skills/<skill-name>/SKILL.md` with frontmatter and instruction body
- The skill is auto-loaded at startup; no code changes needed
- Configure the skills directory in `config.toml`: `[skills] directory = "skills"`
All skills are represented in the system prompt by metadata only (name + description). Instruction skills (no `model` in frontmatter) have their full content loaded by the agent via `read_skill_file(skill_name="...", relative_path="SKILL.md")` when relevant. Subagent skills (`model` set) are invoked via `invoke_subagent(skill="name", prompt="...")`. The orchestration skill teaches the agent when to call which subagent and when to override the model (e.g. `model="anthropic/claude-sonnet-4-6"` for `thread-writer-hk`).
Subagent tool whitelist: for subagent skills, the frontmatter `tools:` list must use the exact tool names as seen by the agent. MCP tools are named `mcp_{server_name}_{tool_name}` (e.g. `mcp_google-workspace_query_gmail_emails`). These names are logged at startup when MCP servers connect (`MCP server 'X' provides N tools`). A mismatch (e.g. declaring `search_gmail_messages` when the server exposes `query_gmail_emails`) causes the subagent to have no access to that tool.
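For example, a subagent skill's frontmatter might look like this (the `model` and `tools` values are illustrative; copy tool names from the startup logs):

```yaml
---
name: news-fetcher
description: Fetches recent AI news from Gmail alerts
model: qwen/qwen3-235b-a22b   # presence of `model` marks this as a subagent skill
tools:
  - mcp_google-workspace_query_gmail_emails   # exact name as logged at startup
---
```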
Daily News to Threads flow: the `daily-news-to-threads` orchestration skill (instruction) directs the main agent to: (1) call the `news-fetcher` subagent (default model) to get AI news from Gmail Google 快訊 (Google Alerts), (2) call the `thread-writer-hk` subagent with a model override to write a HK-style Threads thread with verified links, and (3) post the thread via the Threads MCP and report success. Requires the Gmail (`google-workspace`), `fetch`, and `threads` MCP servers in the config.
- `config.toml` - Contains API keys and tokens
- `.env` - Environment variables
- `/target/` - Build artifacts