An open-source control plane for governed agent systems.
OpenHive helps teams create, run, and govern agent systems with a clear path from local preview to production-facing operations. A Keeper agent manages each project, Scout assistants operate inside group chats, and Pipeline jobs run scheduled workflows in the background.
The runtime is business-agnostic: behavior comes from templates, skills, plugins, bundles, and policies instead of hard-coded vertical logic. OpenHive is designed around secure execution, approval boundaries, auditability, trusted extensions, and fleet-style operations for many agents.
The current repository ships a self-hosted platform preview with a packaged generic starter template and a baseline Feishu integration, but that preview setup is only one workload on top of a general platform. The intended platform direction is broader, including:
- Secure Internal Agents — coding, research, docs, and internal ops
- Research And Monitoring Workflows — recurring analysis, alerts, and reports
- Agent Fleets For Customer And Operations — many related agents under one control plane
The current 0.9.x External Preview target is a self-hosted platform
preview: one repo, one database, one dashboard, and a source-based local
setup that gets you from clone to login to your first project without extra
infrastructure. Feishu is the current baseline messaging integration, not the
product's defining scope.
The supported preview install mode is `preview_local`:
- PostgreSQL via `docker compose`
- backend from source with `uv`
- dashboard from source with `npm`
See docs/installation-modes.md for the install
mode matrix and the ownership boundaries for future installer flows.
Hive (platform gateway)
├── Keeper — one per project, the project's PM agent (Claude Sonnet)
├── Scout — one per group chat, the group assistant (Claude Haiku)
└── Pipeline — background jobs for classify / alert / report workflows
| Name | Role |
|---|---|
| Hive | Platform gateway — routes messages, manages credentials, runs the scheduler |
| Keeper | Project manager agent; creates Scouts, deploys pipeline jobs, processes feedback |
| Scout | Group chat assistant; queries data, collects feedback, runs Skills |
| Pipeline | Background execution path for scheduled analysis, alerts, reports, and other automation |
| Skill | Stateless capability unit executed as a subprocess (stdin/stdout JSON) |
| Plugin | Extension that adds platform capabilities such as connectors, policy packs, or deploy targets |
┌──────────────────── Hive Gateway (always on, LLM-free) ─────────────────────┐
│ Feishu WebSocket (lark-oapi SDK) → Message Router → Auth Checker │
│ Gateway Bot (project creation, no LLM) │
│ Dashboard Auth (username/password) · Credential Proxy · APScheduler │
└──────────────────────────┬──────────────────────────────────────────────────┘
│ wake on demand
┌────────────┼────────────┐
▼ ▼ ▼
HiveAgent HiveAgent HiveAgent ← unified runtime
Keeper Scout Scout ← config differentiates roles
(Sonnet) (Haiku) (Haiku)
↕ stdout/stdin JSON subprocess
┌─────────────────────────────────────┐
│ Skills / Pipeline-Skills │
│ pre-processor → classifier → │
│ router → handler → post-processor │
└─────────────────────────────────────┘
┌──────────────────── Web Dashboard (Next.js) ────────────────────────────────┐
│ /projects — overview, keeper status, pending work-item counts │
│ /projects/[id] — detail: agents, trend chart, scouts, changes, work items │
│ /audit — full change history with inline diff viewer │
│ /admin — user management, invite codes │
└─────────────────────────────────────────────────────────────────────────────┘
Core design principles:
- One runtime, N configs — `HiveAgent` is the only agent class; Keeper/Scout/Queen are instances configured differently.
- Scope isolation — all DB operations go through `ScopedDB`, which enforces `project_id`/`group_id` boundaries automatically.
- Skills are subprocesses — each Skill runs as a sandboxed Python script; no direct DB access, no shared state.
- Gateway is LLM-free — the platform entry point uses rule matching only; no LLM tokens are spent on routing.
- Secrets are gateway-managed — OpenHive already centralizes secret resolution, encrypted channel storage, and relay-backed sandbox model access, but the in-process `LocalAgentPool` path is not yet a complete gateway-only secret boundary for agent LLM calls.
- Governance before sprawl — approvals, audit, rollout, rollback, and trusted extensions matter more than raw agent count.
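The scope-isolation principle can be illustrated with a minimal sketch. All names and the in-memory "table" below are hypothetical; the real `ScopedDB` wraps async SQLAlchemy sessions rather than a list of dicts:

```python
# Sketch of scope isolation: every query is forced to carry the project_id
# the wrapper was constructed with. Illustrative only, not the actual
# hive ScopedDB implementation.
class ScopeError(Exception):
    pass

class ScopedDB:
    def __init__(self, rows, project_id):
        self._rows = rows              # stand-in for a real table
        self.project_id = project_id

    def query(self, **filters):
        # The caller can never widen the scope: project_id is injected,
        # and an attempt to override it is rejected outright.
        if filters.pop("project_id", self.project_id) != self.project_id:
            raise ScopeError("cross-project access denied")
        return [r for r in self._rows
                if r["project_id"] == self.project_id
                and all(r.get(k) == v for k, v in filters.items())]

rows = [{"project_id": "p1", "kind": "feedback"},
        {"project_id": "p2", "kind": "feedback"}]
db = ScopedDB(rows, project_id="p1")
print(len(db.query(kind="feedback")))  # 1 (only p1 rows are visible)
```

The point of the pattern is that agent code never passes `project_id` explicitly, so it cannot pass the wrong one.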
- Unified agent loop — a single `HiveAgent` class handles all roles via `AgentConfig` injection
- Feishu (Lark) integration — official `lark-oapi` SDK with WebSocket long connection, DM and group chat
- Interactive card notifications — approval cards plus plugin-owned operational cards with action buttons
- Card action handling — PM clicks "mark handled" / "approve" / "reject" directly inside Feishu
- Multi-channel Feishu support — multiple Feishu apps per project; one channel designated as Keeper DM
- Template-driven project creation — `GatewayBot` offers pre-built templates (e.g. Starter Workspace); PM picks a number or selects Custom; `GET /api/templates` exposes the same list in the Dashboard
- Multi-turn project creation — `GatewayBot` guides PMs through template → name → keywords → confirm without LLM
- Invite-code admission — whitelist + one-time invite codes for PM onboarding
- Layered memory — MEMORY.md + role files + time-decayed date logs + GLOBAL.md budget system
- Memory compaction — LLM-powered MEMORY.md compression with automatic backup/restore
- Tool transparency — critical Keeper operations auto-notify the PM before and after execution
- Config-driven scheduler — cron schedules live in `config.yaml`; `HEARTBEAT.md` is PM-editable without code changes
- Skill pipeline — serial subprocess execution with `stage_results` chaining and timeout guards
- Feedback loop — Scout writes to `feedback_queue` → Keeper analyses → `ChangeProposal` → PM approves via card
- Skill version isolation — each project pins its own skill version; upgrades are opt-in and non-disruptive
- Prompt shadow testing — Pipeline runs both production and shadow prompts in parallel, diffs outputs, stores results for PM review via Dashboard
- Soft delete — projects, groups, and agents are logically deleted (`deleted_at`) rather than hard-removed, preserving audit trails
- Rate limiting — sliding-window per-user rate limiter on Gateway and Dashboard API
- Docker pipeline runtime — pipeline-skills run as Docker containers with drain-on-restart support
- Web Dashboard — Next.js app with project overview, trend charts (ECharts), audit log with diff viewer, admin panel, prompt-test review
- Dashboard auth — username/password login with scrypt-hashed passwords; HMAC-signed session cookies; admin bootstrapped from env vars
- Multiple LLM providers — Anthropic Claude, OpenAI, DeepSeek, Qwen, Ollama (OpenAI-compatible)
- Supported path — `preview_local`: local Docker PostgreSQL plus source-run backend and dashboard
- Core operator workflow — log in to the Dashboard, create a project, inspect sessions/runs/traces, manage channels and Scouts
- Optional integration — Feishu bot messaging and channel wiring when app credentials are provided
- Experimental runtime surface — the governed workspace-task sandbox exists in backend and K8s baseline form, and you can run it locally on `http://127.0.0.1:8091` with `make run-sandbox`, but it is still an experimental operator workflow rather than a preview-default requirement
- Current trust-boundary limitation — `preview_local` still runs Keeper and Scout via the in-process `LocalAgentPool`, so gateway-only vendor-secret residency is fully enforced today on relay-backed sandbox paths, not yet on the default in-process agent model path
- Explicitly deferred — fully hardened sandbox productization, provider-neutral sandbox backends beyond the current `codex` path, multi-IM expansion, and v2 container orchestration work
See docs/preview-release-checklist.md for the release gate and the intentionally deferred backlog.
See docs/installation-modes.md for why the current preview path is not the same as a future packaged docker_quickstart.
| Layer | Technology |
|---|---|
| Backend language | Python 3.12+ |
| API server | FastAPI + uvicorn |
| LLM | Anthropic Claude (Sonnet 4.6 / Haiku 4.5) |
| Database | PostgreSQL 16 + asyncpg |
| ORM | SQLAlchemy 2.0 (async) |
| Feishu SDK | lark-oapi (WebSocket long connection) |
| Scheduler | APScheduler |
| Container | Docker (pipelines) |
| Config | Pydantic Settings + YAML |
| Logging | structlog |
| Package manager | uv |
| Frontend | Next.js 16 + React 19 + TypeScript |
| UI styles | Tailwind CSS v4 |
| Server state | TanStack Query v5 |
| Charts | ECharts 6 |
| Frontend tests | Vitest + Testing Library |
See docs/getting-started.md for the full setup guide with troubleshooting.
That guide describes the supported preview_local path, not a packaged docker_quickstart.
See docs/env-ownership.md for the planned managed .env boundary used by future installer flows.
- Python 3.12+, Node.js 18+
- uv (`pip install uv`)
- Docker (pgvector-backed PostgreSQL + pipelines)
- (Optional) A Feishu Open Platform app — only needed for bot messaging
```shell
git clone https://github.com/terrywangcode/openhive.git
cd openhive
```

```shell
cd server
uv sync --all-extras
uv run openhive setup
```

`openhive setup` is now the recommended preview-local entry point. It creates the managed `.env`, writes `.openhive/install-state.json`, boots local Docker PostgreSQL, runs migrations, and prints the next backend/dashboard commands.
The current preview_local flow uses the fixed local ports 5432 (PostgreSQL),
8080 (API), and 3000 (dashboard dev server); custom port flags are not part
of the supported installer contract yet.
The installer CLI resolves language in this order:
- `--lang`
- persisted install preference in `.openhive/install-state.json`
- `OPENHIVE_LANG`
- system locale
- English fallback
After setup:

```shell
cd ..
make run
```

```shell
cd web
npm install
npm run dev
```

Read-only installer support commands:

```shell
cd server
uv run openhive status
uv run openhive doctor
```

Lifecycle follow-up commands for an existing `preview_local` install:

```shell
cd server
uv run openhive update --yes
uv run openhive uninstall --yes
uv run openhive purge --confirm "PURGE OPENHIVE"
```

- `openhive update` keeps operator-owned repo files intact, refreshes installer metadata, and runs pending migrations unless you pass `--skip-migrate`
- `openhive uninstall` stops managed preview-local services and marks the install inactive while preserving project data and config backups
- `openhive purge` is the destructive path; it removes only managed preview-local resources after typed confirmation and does not touch external PostgreSQL resources
- deferred install modes fail closed for these lifecycle commands until their runtime ownership model is implemented
```shell
cp .env.example .env  # then edit with your values
```

Future installer flows should only manage the documented OpenHive-owned subset and preserve unrelated operator keys; see docs/env-ownership.md.
```shell
docker compose up -d postgres
```

The bundled docker-compose.yml uses pgvector/pgvector:pg16, so the local database supports the vector extension required by hybrid memory search.
```shell
cd server
uv sync --all-extras
uv run alembic upgrade head
```

```shell
make run                            # starts http://localhost:8080
curl http://localhost:8080/healthz  # {"status":"ok"}
```

If you want Keeper-driven dev-task workflows locally, start the sandbox in a second terminal:

```shell
make run-sandbox                    # starts http://localhost:8091
curl http://localhost:8091/healthz  # {"status":"ok","role":"sandbox"}
```

This sandbox lane exists today, but it is still an experimental operator path inside `preview_local` rather than a preview-default requirement.
```shell
cd web && npm install && npm run dev  # http://localhost:3000
```

API calls are proxied to `:8080` via Next.js rewrites, and the production build is designed to succeed without fetching remote fonts.
Open http://localhost:3000 — you'll be redirected to the login page.
Log in with the admin credentials set in .env (HIVE_ADMIN_USERNAME / HIVE_ADMIN_PASSWORD).
Once logged in, go to Admin to create PM user accounts (username + password).
Open Projects in the Dashboard and click New Project.
- Choose a template
- Set the project name and keywords
- Save the project and verify it appears in the list
If you also want live Feishu routing, go to the project Settings page and add a Feishu channel after the core dashboard flow is working.
```shell
make test            # backend unit tests (no server required)
cd web && npm test   # frontend component tests
make test-e2e        # API smoke tests (requires a live backend server)
```

The repository map now lives in docs/project-layout.md. Use it when you need a quick directory-level guide to:

- backend runtime and gateway code under `server/hive/`
- dashboard pages and components under `web/src/`
- shared API contracts under `packages/shared/`
- public docs under `docs/`; maintainer-only private notes and KB live in a separate companion repo outside this public checkout
Admin creates PM account in Dashboard (/admin)
↓
PM logs into Dashboard with username/password
↓
PM opens Feishu → sends message to Hive Bot
↓
AuthChecker: is PM's Feishu ID bound to a platform user?
No → "Enter your invite code"
Yes ↓
GatewayBot: does PM have a project?
No → multi-turn create flow (template → name → keywords → confirm)
Yes → route to Keeper
↓
Keeper initialised → sends intro DM to PM
Group message arrives
↓ pre-processor (normalise, strip @mentions)
↓ intent-classifier-hybrid (rules → LLM fallback)
↓ router (select handler from intent)
↓ handler (resolved from installed skill capabilities)
↓ post-processor (format reply)
→ Scout sends reply to group
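The serial stage chain above can be sketched as plain functions chained through `stage_results`. This is illustrative only: in the real pipeline each stage runs as a subprocess with timeout guards, and the stage names are hypothetical:

```python
# Each stage receives the raw message plus the accumulated stage_results
# and returns its own result dict, stored under the stage's name.
def pre_processor(message, stage_results):
    return {"text": message.replace("@Scout", "").strip()}

def intent_classifier(message, stage_results):
    # Rule-based stand-in for the rules-then-LLM hybrid classifier.
    text = stage_results["pre-processor"]["text"]
    return {"intent": "query" if "?" in text else "chat"}

def router(message, stage_results):
    intent = stage_results["intent-classifier"]["intent"]
    return {"handler": "query-handler" if intent == "query" else "chat-handler"}

def run_pipeline(message):
    stages = [("pre-processor", pre_processor),
              ("intent-classifier", intent_classifier),
              ("router", router)]
    stage_results = {}
    for name, stage in stages:
        stage_results[name] = stage(message, stage_results)
    return stage_results

out = run_pipeline("@Scout what changed yesterday?")
print(out["router"]["handler"])  # query-handler
```

Because every stage sees the full `stage_results` map, later stages can use any earlier output without bespoke plumbing.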
Scout.write_feedback() → feedback_queue table
↓ (cron: 18:00 daily)
Keeper wakes up → query_data (unprocessed feedback)
↓
Configured evolution plugin handles feedback events
→ ChangeProposal list (project-scoped adjustments)
↓
Keeper sends approval card to PM → PM clicks Approve/Reject in Feishu
↓ WebSocket card_action event → handle_change_approval()
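The card-action leg of this loop is a small state transition. A hypothetical sketch, where the name `handle_change_approval` mirrors the flow above but the body is illustrative:

```python
from dataclasses import dataclass

@dataclass
class ChangeProposal:
    id: int
    description: str
    status: str = "pending"  # pending → approved | rejected

def handle_change_approval(proposal: ChangeProposal, action: str) -> ChangeProposal:
    # Only pending proposals may transition; repeated card clicks are ignored,
    # which keeps duplicate Feishu card_action events idempotent.
    if proposal.status != "pending":
        return proposal
    if action not in ("approve", "reject"):
        raise ValueError(f"unknown card action: {action}")
    proposal.status = "approved" if action == "approve" else "rejected"
    return proposal

p = ChangeProposal(id=1, description="Adjust threshold")
print(handle_change_approval(p, "approve").status)  # approved
print(handle_change_approval(p, "reject").status)   # approved (already settled)
```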
Each Skill is a standalone Python script communicating over stdin/stdout:
```python
# skills/my-skill/scripts/run.py
import json, sys

def main():
    data = json.loads(sys.stdin.read())
    # data: {"message": str, "context": dict, "config": dict, "stage_results": dict}
    result = {"reply": f"Processed: {data['message']}", "error": None}
    print(json.dumps(result))

if __name__ == "__main__":
    main()
```

Add a SKILL.md alongside it:
```yaml
---
name: my-skill
version: 1.0.0
description: My custom skill
skill_type: handler
trigger_intent: "query"
compatible_group_types:
  - internal
  - client
---
```

Install via Keeper:
```
PM: "Install my-skill into the internal group"
Keeper: Skill 'my-skill' installed into group oc_xxx.
```
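You can also exercise the stdin/stdout contract by hand. The harness below writes the example skill body to a temp file purely so the snippet is self-contained; it is not the platform's actual skill runner:

```python
import json
import os
import subprocess
import sys
import tempfile
import textwrap

# Same logic as the run.py shown above, inlined so this runs standalone.
SKILL = textwrap.dedent("""
    import json, sys
    data = json.loads(sys.stdin.read())
    print(json.dumps({"reply": f"Processed: {data['message']}", "error": None}))
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(SKILL)
    path = f.name

# Feed the documented payload shape over stdin and read JSON back from stdout.
payload = {"message": "hello", "context": {}, "config": {}, "stage_results": {}}
proc = subprocess.run([sys.executable, path], input=json.dumps(payload),
                      capture_output=True, text=True, timeout=30)
print(json.loads(proc.stdout)["reply"])  # Processed: hello
os.unlink(path)
```

The `timeout=` argument plays the same role as the pipeline's timeout guard: a hung skill is killed rather than blocking the stage chain.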
```python
from hive.plugins.base import EvolutionPlugin, ChangeProposal

class MyEvolutionPlugin(EvolutionPlugin):
    async def on_feedback_event(self, project_id, feedbacks):
        return [ChangeProposal(
            change_type="config_update",
            description="Adjust threshold",
            diff_content="threshold: 0.7 → 0.65",
            risk_level="low",
            auto_apply=False,
        )]

    async def run_shadow_test(self, project_id, proposal):
        return {"passed": True, "report": "Looks good."}
```

| Variable | Description | Default |
|---|---|---|
| `DATABASE_URL` | PostgreSQL async URL | `postgresql+asyncpg://hive:password@localhost:5432/hive` |
| `DB_PASSWORD` | Password used by docker compose for the local PostgreSQL container | `password` |
| `ANTHROPIC_API_KEY` | Anthropic API key | — |
| `DASHBOARD_SESSION_SECRET` | HMAC key for Dashboard session tokens | `change-me-in-production` |
| `HIVE_ADMIN_USERNAME` | Bootstrap admin username | `admin` |
| `HIVE_ADMIN_PASSWORD` | Bootstrap admin password (required for auto-creation) | — |
| `HIVE_ADMIN_FEISHU_USER_ID` | Optional Feishu ID for admin bot binding | — |
| `HIVE_WORKSPACE` | Project data root (`server` for source setup, `/data` in the compose gateway) | `server` |
| `HIVE_SANDBOX_URL` | Optional local sandbox API base URL for Keeper dev-task workflows | `http://127.0.0.1:8091` |
| `HIVE_INTERNAL_SECRET` | Shared internal secret used by preview-local relay and sandbox flows | — |
| `HIVE_MAX_ACTIVE_AGENTS` | Max concurrent agents | `20` |
| `FEISHU_APP_ID` | Feishu app ID (optional — for bot messaging) | — |
| `FEISHU_APP_SECRET` | Feishu app secret | — |
| `FEISHU_VERIFICATION_TOKEN` | Feishu verification token | — |
| `FEISHU_ENCRYPT_KEY` | Feishu encrypt key (for WebSocket event subscription) | — |
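A stdlib sketch of how a few of these variables might be read and defaulted. The real config layer is built on Pydantic Settings; the class and helper below are illustrative stand-ins, though the variable names and defaults match the table:

```python
import os
from dataclasses import dataclass, field

def _env(name: str, default: str) -> str:
    return os.environ.get(name, default)

@dataclass
class HiveSettings:
    # Names and defaults mirror the table above.
    database_url: str = field(default_factory=lambda: _env(
        "DATABASE_URL", "postgresql+asyncpg://hive:password@localhost:5432/hive"))
    admin_username: str = field(default_factory=lambda: _env(
        "HIVE_ADMIN_USERNAME", "admin"))
    max_active_agents: int = field(default_factory=lambda: int(_env(
        "HIVE_MAX_ACTIVE_AGENTS", "20")))

    def __post_init__(self):
        # Fail fast at startup rather than at first agent spawn.
        if self.max_active_agents < 1:
            raise ValueError("HIVE_MAX_ACTIVE_AGENTS must be >= 1")

os.environ["HIVE_MAX_ACTIVE_AGENTS"] = "5"  # simulate an operator override
settings = HiveSettings()
print(settings.max_active_agents)  # 5
```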
```yaml
project:
  id: my_project
  name: "My Project"
  status: active
  pm_user_id: "alice"
keeper_identity:
  name: "Bee"
  vibe: "professional, concise, data-driven"
schedules:
  heartbeat:
    cron: "0 * * * *"
    prompt_file: HEARTBEAT.md  # PM-editable checklist
  process_feedback:
    cron: "0 18 * * *"
    prompt: "Process the feedback queue."
  pipeline:
    cron: "0 * * * *"
    run_pipeline: true
    prompt: "Check pipeline results."
```

`pm_user_id` is the PM's platform username. The Feishu account used for Keeper pairing is stored separately in `project_pm_bindings.feishu_user_id`.
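The `cron` fields above use standard five-field crontab syntax. A small validator sketch for catching typos before the scheduler sees them; illustrative only, and deliberately limited to `*` and plain numeric fields (the runtime hands the strings to APScheduler, which accepts the full syntax):

```python
# Field order and valid ranges: minute, hour, day-of-month, month, day-of-week.
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def validate_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != 5:
        return False
    for value, (lo, hi) in zip(fields, FIELD_RANGES):
        if value == "*":
            continue
        # This sketch only handles "*" and plain numbers, not ranges or steps.
        if not value.isdigit() or not lo <= int(value) <= hi:
            return False
    return True

schedules = {"heartbeat": "0 * * * *", "process_feedback": "0 18 * * *"}
print(all(validate_cron(c) for c in schedules.values()))  # True
print(validate_cron("0 25 * * *"))                        # False (hour out of range)
```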
```shell
make run        # Start the backend (hot-reload, :8080)
make dev-web    # Start the Dashboard dev server (:3000)
make test       # Backend unit tests (no server required)
make test-e2e   # API smoke tests (requires a live backend server)
make lint       # ruff (backend) + eslint (frontend)
make format     # ruff format
make migrate    # alembic upgrade head
make db         # Start pgvector-backed PostgreSQL via docker compose
make db-stop    # Stop PostgreSQL
make install    # uv sync --all-extras + npm install
make build-web  # Production build of the Dashboard
```

See CONTRIBUTING.md.
For private vulnerability reporting and supported security scope, see SECURITY.md.
For shipped-version maintenance and hotfixes, also see docs/release-workflow.md.
All code, comments, commit messages, and documentation must be in English. User-facing messages sent to Feishu (runtime) may be in Chinese.
MIT — see LICENSE.