Container + Wasm orchestrator with AI ops
Fills the gap between Coolify and Kubernetes.
Documentation • Quick Start • Features • Contributing
Orca is a single-binary orchestrator for teams that have outgrown one server but don't need Kubernetes. It runs containers and WebAssembly modules as first-class workloads, with built-in reverse proxy, auto-TLS, secrets management, health checks, and an AI operations assistant. Deploy with TOML configs that fit on one screen — no YAML empires.
Docker Compose ──> Coolify ──> Orca ──> Kubernetes
   (1 node)        (1 node)   (2-20)     (20-10k)
```shell
cargo install mallorca
```

```shell
# Option A: systemd (recommended — handles port binding automatically)
orca install-service
sudo systemctl start orca

# Option B: manual (requires setcap after each install/update)
sudo setcap 'cap_net_bind_service=+ep' $(which orca)
orca server --daemon
```

Add worker nodes:
```shell
# On the worker node:
orca install-service --leader <master-ip>:6880
sudo systemctl start orca-agent
```

Create a service in `services/web/service.toml` and deploy:
```toml
[[service]]
name = "web"
image = "nginx:alpine"
replicas = 2
port = 80
domain = "example.com"
health = "/"
```

```shell
orca deploy && orca status
```

- WebSocket streaming -- agents connect to the master over a persistent bidirectional WebSocket, replacing HTTP heartbeat polling.
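Because `[[service]]` is a TOML array of tables, one file can declare several services. A hedged sketch of a second entry, reusing only the keys shown above (the `api` service and its image are made-up placeholders):

```toml
# Hypothetical second service alongside "web"; keys mirror the example above.
[[service]]
name = "api"
image = "ghcr.io/example/api:latest"
replicas = 3
port = 8080
domain = "api.example.com"
health = "/healthz"
```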
- Agent proxy hot-adds routes + TLS certs on container deploy -- no proxy restart needed.
- Reconcile on reconnect -- after a network partition, agents automatically converge to the desired state.
- Infra webhook -- a git push to your orca-infra repo triggers `git pull` + redeploy. Full GitOps.
- Single-service deploys -- `orca deploy <service-name>` and `orca redeploy <service>` for force pull + restart.
- CLI auto-connects -- on agent nodes, all commands work without `--api`.
- Smart reconciler -- compares unresolved env templates so OAuth token refreshes don't cause unnecessary restarts.
- Webhook persistence -- webhooks survive restarts (`~/.orca/webhooks.json`).
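The smart-reconciler idea -- diff the env *templates* rather than their resolved values -- can be sketched in a few lines (illustrative only; the function name and template syntax are assumptions, not Orca's internals):

```rust
// Illustrative sketch: restart only when the unresolved template changes,
// so a refreshed OAuth token (a resolved *value*) never triggers a restart.
fn needs_restart(desired_template: &str, deployed_template: &str) -> bool {
    desired_template != deployed_template
}

fn main() {
    // Same template on both sides: the token value may have rotated
    // underneath, but the spec is unchanged -> no restart.
    assert!(!needs_restart("TOKEN=${OAUTH_TOKEN}", "TOKEN=${OAUTH_TOKEN}"));
    // The template itself changed -> restart is warranted.
    assert!(needs_restart("TOKEN=${OAUTH_TOKEN}", "TOKEN=${NEW_TOKEN}"));
    println!("ok");
}
```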
See CHANGELOG.md for the full history.
One static executable acts as the agent, control plane, CLI, and reverse proxy. `scp` it to a server and you have a production-ready orchestrator with auto-TLS, secrets, health checks, and Prometheus metrics.
Run Docker containers and WebAssembly modules side by side. Containers for existing images and databases (~3s cold start). Wasm for edge functions and API handlers (~5ms cold start, ~1-5MB memory).
Raft consensus via openraft with embedded redb storage — no etcd. Bin-packing scheduler with GPU awareness. Nodes can span multiple cloud providers via NetBird WireGuard mesh.
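A bin-packing scheduler of the kind described can be sketched as best-fit over free memory, with GPU as a hard constraint (a toy model under assumed semantics, not Orca's actual scheduler):

```rust
struct Node {
    name: &'static str,
    free_mem_mb: u64,
    has_gpu: bool,
}

/// Best-fit bin packing: among nodes that satisfy the hard constraints
/// (enough free memory, a GPU if required), pick the node with the
/// least leftover memory after placement.
fn place(nodes: &[Node], need_mb: u64, need_gpu: bool) -> Option<&Node> {
    nodes
        .iter()
        .filter(|n| n.free_mem_mb >= need_mb && (!need_gpu || n.has_gpu))
        .min_by_key(|n| n.free_mem_mb - need_mb)
}

fn main() {
    let nodes = [
        Node { name: "n1", free_mem_mb: 8192, has_gpu: false },
        Node { name: "n2", free_mem_mb: 2048, has_gpu: false },
        Node { name: "n3", free_mem_mb: 4096, has_gpu: true },
    ];
    // A 1 GiB container with no GPU need lands on the tightest fit: n2.
    assert_eq!(place(&nodes, 1024, false).unwrap().name, "n2");
    // A GPU workload must land on n3 even though n2 fits on memory alone.
    assert_eq!(place(&nodes, 1024, true).unwrap().name, "n3");
    println!("ok");
}
```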
Watchdog restarts crashed containers in ~30s. Health checks with configurable thresholds. Stale route cleanup. Agent reconnection with exponential backoff. Services survive server restarts.
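The reconnect behaviour above is plain capped exponential backoff; a minimal sketch (the 1 s base and 30 s ceiling are illustrative constants, not Orca's actual values):

```rust
use std::time::Duration;

/// Capped exponential backoff for agent reconnection: 1s, 2s, 4s, ...
/// doubling each attempt up to a 30s ceiling.
fn backoff(attempt: u32) -> Duration {
    let cap = 30u64;
    let secs = 2u64.saturating_pow(attempt).min(cap);
    Duration::from_secs(secs)
}

fn main() {
    assert_eq!(backoff(0), Duration::from_secs(1));
    assert_eq!(backoff(3), Duration::from_secs(8));
    assert_eq!(backoff(10), Duration::from_secs(30)); // capped at the ceiling
    println!("ok");
}
```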
`orca ask "why is the API slow?"` — diagnoses issues using cluster context. Works with any OpenAI-compatible API (Ollama, LiteLLM, vLLM, OpenAI). Conversational alerts, config generation, and optional auto-remediation.
TOML config that fits on one screen. TUI dashboard with k9s-style navigation. Git push deploy via webhooks. One-click database creation. RBAC with admin/deployer/viewer roles.
```
┌─────────────────────────────────────┐
│        CLI / TUI / API              │
└──────────────┬──────────────────────┘
               │
┌──────────────▼──────────────────────┐
│          Control Plane              │
│  Raft consensus (openraft + redb)   │
│  Scheduler (bin-packing + GPU)      │
│  API server (axum)                  │
│  Health checker + AI monitor        │
└──────────────┬──────────────────────┘
               │ WebSocket
    ┌──────────┼──────────┐
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Node 1 │ │ Node 2 │ │ Node 3 │
│ Docker │ │ Docker │ │ Docker │
│ Wasm   │ │ Wasm   │ │ Wasm   │
│ Proxy  │ │ Proxy  │ │ Proxy  │
└────────┘ └────────┘ └────────┘
```
8 Rust crates | ~28k lines | ~120 tests | all files under 250 lines
Full documentation at mighty840.github.io/orca:
- Getting Started — install, first cluster, first deploy
- Configuration — cluster.toml and service.toml reference
- CLI Reference — every command with examples
- REST API — full endpoint reference
- Architecture — crate map, runtime trait, design principles
We welcome contributions! See CONTRIBUTING.md for setup instructions and guidelines.
Key areas where help is wanted:
- ACME/Let's Encrypt automation
- Nixpacks integration for auto-detect builds
- Service templates (WordPress, Supabase, etc.)
- Preview environments (PR-based deploys)
AGPL-3.0. See LICENSE.