User-facing guide: quick start, configuration, API usage, deployment. See architecture.en.md for layers and request flow.
## Quick Start

- Install cursor-agent (see Installing cursor-agent).
- Build: `cargo build --release`.
- Run: `cargo run` (or run the built binary). Default port: 3001.
- Optional: create or edit `~/.cursor-brain/config.json` (see Configuration).
- Call the API, for example:

  ```sh
  curl -X POST http://localhost:3001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model":"auto","messages":[{"role":"user","content":"Hello"}]}'
  ```
## Configuration

- Source: `~/.cursor-brain/config.json` only (no environment variables). On first run, if the file is missing, a default config file is written there.
- Location: Windows `%USERPROFILE%\.cursor-brain\config.json`; Linux/macOS `~/.cursor-brain/config.json`.
| Key | Description | Default |
|---|---|---|
| `port` | HTTP server port | `3001` |
| `bind_address` | Listen address (e.g. `0.0.0.0`, `127.0.0.1`) | `0.0.0.0` |
| `cursor_path` | Path to the cursor-agent executable | auto-detect |
| `request_timeout_sec` | Per-request timeout (seconds) | `300` |
| `session_cache_max` | Session cache capacity | `1000` |
| `session_header_name` | Header carrying an external session id | `x-session-id` |
| `default_model` | Default model when the request omits `model` | (none) |
| `fallback_model` | Fallback when no content | (none) |
| `minimal_workspace_dir` | Workspace for the agent (no project MCP) | `~/.cursor-brain/workspace` |
| `sandbox` | `enabled` or `disabled` | `enabled` |
| `forward_thinking` | `off`, `content`, or `reasoning_content` | `content` |
Example `config.json`:

```json
{
  "port": 3001,
  "bind_address": "0.0.0.0",
  "request_timeout_sec": 300
}
```

## API

| Method | Path | Description |
|---|---|---|
| POST | /v1/chat/completions | Chat completion (streaming or non-streaming). |
| GET | /v1/models | List models (from cursor-agent). |
| GET | /v1/models/:id | Get model by id. |
| GET | /v1/health | Health and versions (cursor_agent_version, cursor_brain_version). |
| GET | /v1/version | cursor-agent version. |
| GET | /v1/agent/about, /v1/agent/status, /v1/agent/sessions | Agent subcommands. |
| POST | /v1/agent/chats | Create empty chat. |
| GET | /v1/metrics | JSON metrics (requests_total, cursor_calls_ok, etc.). |
| POST | /v1/embeddings | 501 Not Implemented. |
| POST | /v1/completions | 501 Not Implemented. |
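The read-only endpoints above can be exercised with `curl`. A minimal sketch, assuming the server runs locally on the default port; the health field name follows the table above, but the exact payload shape is an assumption:

```sh
# List available models (proxied from cursor-agent).
curl -s http://localhost:3001/v1/models

# Check health and pull out the agent version (field name per the table;
# top-level placement is an assumption).
curl -s http://localhost:3001/v1/health | jq -r '.cursor_agent_version'
```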
- Session: send `X-Session-Id` (or the header name from config) to reuse or create a cursor session.
- Streaming: set `"stream": true` in the request body.
- Thinking: see openai-protocol.md for `content` vs `reasoning_content` and `forward_thinking`.
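Combining the session and streaming options, a streaming request pinned to a session might look like this (the session id value is made up for illustration; `-N` disables curl's output buffering so streamed chunks appear as they arrive):

```sh
curl -N -X POST http://localhost:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Session-Id: demo-session-1" \
  -d '{"model":"auto","stream":true,"messages":[{"role":"user","content":"Hello"}]}'
```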
## Errors

All 4xx/5xx responses use the shape `{ "error": { "message", "code", "type" } }`. An `X-Request-Id` response header (UUID) is included for correlation.
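A sketch of pulling the error message and correlation id out of a failed call, assuming only the documented error shape (`-D` dumps response headers to a file; `/v1/completions` is documented to return 501):

```sh
curl -s -D /tmp/cb-headers.txt -X POST http://localhost:3001/v1/completions \
  -H "Content-Type: application/json" -d '{}' \
  | jq -r '.error.message'
grep -i '^x-request-id:' /tmp/cb-headers.txt
```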
## PID File

- Path: `~/.cursor-brain/cursor-brain.pid`.
- Behavior: written after bind (create or truncate); removed on normal exit and on panic.
- Use: single-instance check, monitoring, process managers (e.g. systemd).
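The single-instance check can be sketched as a small shell test; `kill -0` probes whether a process exists without actually sending it a signal:

```sh
PID_FILE="$HOME/.cursor-brain/cursor-brain.pid"
if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
  echo "cursor-brain is running (pid $(cat "$PID_FILE"))"
else
  echo "cursor-brain is not running"
fi
```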
## Minimal Workspace

- Default: `~/.cursor-brain/workspace`. Created on startup if missing.
- Purpose: passed as `--workspace` to cursor-agent; an empty directory avoids project-level MCP loading.
## Ironclaw Integration

- Start cursor-brain (e.g. `cargo run`).
- Add the provider to `~/.ironclaw/providers.json`: merge the object from provider-definition.json into the array.
- In the Ironclaw LLM setup, choose Cursor Brain and pick a model.
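The merge step can be done by hand, or sketched with `jq` under the assumption that `providers.json` holds a JSON array and provider-definition.json a single object (back up the file first; the temp-file path is arbitrary):

```sh
jq -s '.[0] + [.[1]]' ~/.ironclaw/providers.json provider-definition.json \
  > /tmp/providers.merged.json \
  && mv /tmp/providers.merged.json ~/.ironclaw/providers.json
```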
## Installing cursor-agent

cursor-brain does not install or upgrade cursor-agent. Install it yourself:

- Linux / macOS: `curl https://cursor.com/install -fsSL | bash`
- Windows: follow the Cursor documentation.

Ensure `agent` is on `PATH`, or set `cursor_path` in the config.
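A quick way to confirm the prerequisite, assuming the binary name `agent` as stated above:

```sh
if command -v agent >/dev/null 2>&1; then
  echo "agent found at $(command -v agent)"
else
  echo "agent not on PATH; set cursor_path in ~/.cursor-brain/config.json"
fi
```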
## Related Documents

- DESIGN.md — design decisions, defaults, platform support.
- openai-protocol.md — OpenAI alignment, `content` vs `reasoning_content`.
- architecture.en.md — component layers and request flow.