| Requirement | Version | Notes |
|---|---|---|
| Python | 3.12+ | `python3 --version` |
| uv | latest | astral.sh/uv |
| Git | any | |
| Docker | optional | Required for mesh mode |
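The table above can be checked in one go. A minimal sketch (the `need` helper is just an illustrative name; it only tests that each tool is on `PATH`, plus the Python version floor):

```shell
# Quick sanity check for the requirements table. Prints one line per tool.
need() { command -v "$1" >/dev/null 2>&1; }

need python3 && python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 12) else 1)' \
  && echo "python3: OK" || echo "python3: need 3.12+"
need uv     && echo "uv: OK"     || echo "uv: install from astral.sh/uv"
need git    && echo "git: OK"    || echo "git: missing"
need docker && echo "docker: OK" || echo "docker: missing (optional, mesh mode only)"
```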
```shell
git clone https://github.com/monaccode/astromesh.git
cd astromesh
uv sync --extra all
make dev-single
# or: astromeshctl init --dev && astromeshd --config ./config --log-level debug
```

Verify the node is running:

```shell
curl http://localhost:8000/health
```

Using the pre-built image? Skip `make dev-mesh` and use the ready-made compose recipes in `recipes/` instead — no source checkout needed. See the Maia Developer Guide for details.
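When scripting the setup, it helps to poll the health endpoint instead of sleeping blindly. A small sketch (`wait_http` is an illustrative helper name, not part of the CLI; the `/health` URL comes from the quickstart above):

```shell
# wait_http URL TRIES — poll URL once per second until it answers with a
# 2xx status, or TRIES attempts are exhausted. Returns 0 on success.
wait_http() {
  url=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage: wait_http http://localhost:8000/health 30 && echo "node is up"
```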
```shell
make dev-mesh
```

This starts a 3-node cluster (from source):
- Gateway (port 8000) — receives API requests, routes to workers
- Worker — executes agent pipelines and tool calls
- Inference — hosts local models (Ollama)
Nodes discover each other via the gossip protocol. All three form a mesh automatically.
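The three roles map naturally onto a compose file. A hypothetical sketch for experimentation only (the image tag, service layout, and `ASTROMESH_ROLE` variable are illustrative assumptions, not the contents of the shipped `recipes/`):

```yaml
# Hypothetical compose sketch of the 3-node mesh; see recipes/ for the real thing.
services:
  gateway:
    image: astromesh:latest        # assumed image tag
    environment:
      ASTROMESH_ROLE: gateway      # assumed role variable
    ports:
      - "8000:8000"
  worker:
    image: astromesh:latest
    environment:
      ASTROMESH_ROLE: worker
  inference:
    image: astromesh:latest
    environment:
      ASTROMESH_ROLE: inference
```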
Verify the mesh by running an agent:
```shell
curl http://localhost:8000/v1/agents/support-agent/run \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello"}'
```

Stop the mesh:

```shell
make dev-stop
```

For guided setup, run the configuration wizard:

```shell
astromeshctl init --dev
```

The wizard walks through:
- Role selection — gateway, worker, inference, or standalone
- Provider configuration — API keys for OpenAI/Anthropic/etc., or local Ollama endpoint
- Config generation — writes YAML files to `./config/`
For CI pipelines, skip the prompts:
```shell
astromeshctl init --dev --non-interactive
```

Create a file at `config/agents/hello.agent.yaml`:
```yaml
apiVersion: astromesh/v1
kind: Agent
metadata:
  name: hello-agent
spec:
  identity:
    description: A minimal test agent
  model:
    primary:
      provider: openai
      model: gpt-4o-mini
  prompts:
    system: |
      You are a helpful assistant. Keep responses brief.
  orchestration:
    pattern: react
    max_iterations: 3
    timeout: 30
```

Call it:
```shell
curl http://localhost:8000/v1/agents/hello-agent/run \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "What is 2+2?"}'
```

See `config/agents/support-agent.agent.yaml` for a full example with tools, memory, and guardrails.
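The same call works for any agent, so a small wrapper can save typing. A sketch (`run_agent` is an illustrative helper, not part of the CLI; the query is interpolated without JSON escaping, so keep it to simple strings):

```shell
# run_agent NAME QUERY — POST a query to a named agent on the local gateway.
run_agent() {
  name=$1; query=$2
  curl -fsS "http://localhost:8000/v1/agents/$name/run" \
    -X POST \
    -H "Content-Type: application/json" \
    -d "{\"query\": \"$query\"}"
}

# Usage: run_agent hello-agent "What is 2+2?"
```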
| Target | Description |
|---|---|
| `make help` | Show all targets |
| `make dev-single` | Run single node from source |
| `make dev-mesh` | Start 3-node Docker mesh |
| `make dev-stop` | Stop Docker mesh |
| `make dev-logs` | Tail mesh logs |
| `make test` | Run tests |
| `make test-cov` | Tests with coverage |
| `make lint` | Lint with ruff |
| `make fmt` | Format with ruff |
| `make build-deb` | Build .deb package |
| `make build-rust` | Build Rust native extensions |
For deploying on servers:
```shell
curl -fsSL https://monaccode.github.io/astromesh/get-astromesh.sh | bash
```

This installs the `astromeshd` daemon and the `astromeshctl` CLI, then runs `astromeshctl init` to configure the node.
Native Rust extensions provide 5-50x speedup on CPU-bound paths (embedding ops, message parsing).
```shell
make build-rust
```

This requires `maturin` and a Rust toolchain. Without them, pure-Python fallbacks are used automatically. Set `ASTROMESH_FORCE_PYTHON=1` to disable the native extensions at runtime.
**Port 8000 already in use**

```shell
# Find what's using the port
lsof -i :8000

# Or pick a different port
astromeshd --config ./config --port 8001
```

**Ollama not running**
The inference node expects Ollama at http://localhost:11434. Start it:
```shell
ollama serve
```

Or point to a remote instance via `OLLAMA_HOST` in your environment.
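To check reachability from a script (including a remote `OLLAMA_HOST`), you can probe Ollama's model-listing endpoint. A sketch (`/api/tags` is Ollama's standard listing route; `ollama_up` is just an illustrative name):

```shell
# ollama_up — return 0 if the Ollama server answers at OLLAMA_HOST
# (default http://localhost:11434), using the /api/tags listing endpoint.
ollama_up() {
  host=${OLLAMA_HOST:-http://localhost:11434}
  curl -fsS "$host/api/tags" >/dev/null 2>&1
}

ollama_up && echo "Ollama reachable" || echo "Ollama not reachable; try: ollama serve"
```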
**Python version mismatch**
Astromesh requires Python 3.12+. Check your version:
```shell
python3 --version
```

If you have multiple versions installed, point uv at the right one:

```shell
uv python pin 3.12
uv sync --extra all
```

**Docker mesh nodes failing to connect**
Ensure Docker networking is healthy:
```shell
docker network ls
make dev-stop && make dev-mesh
```

**Tests failing with import errors**
Reinstall dependencies:
```shell
uv sync --extra all
uv run pytest -v
```