OpenHive


An open-source control plane for governed agent systems.


OpenHive helps teams create, run, and govern agent systems with a clear path from local preview to production-facing operations. A Keeper agent manages each project, Scout assistants operate inside group chats, and Pipeline jobs run scheduled workflows in the background.

The runtime is business-agnostic: behavior comes from templates, skills, plugins, bundles, and policies instead of hard-coded vertical logic. OpenHive is designed around secure execution, approval boundaries, auditability, trusted extensions, and fleet-style operations for many agents.

The current repository ships a self-hosted platform preview with a packaged generic starter template and a baseline Feishu integration, but that preview setup is only one workload on top of a general platform. The intended platform direction is broader, including:

  • Secure Internal Agents — coding, research, docs, and internal ops
  • Research And Monitoring Workflows — recurring analysis, alerts, and reports
  • Agent Fleets For Customer And Operations — many related agents under one control plane

The current 0.9.x External Preview target is a self-hosted platform preview: one repo, one database, one dashboard, and a source-based local setup that gets you from clone to login to your first project without extra infrastructure. Feishu is the current baseline messaging integration, not the product's defining scope.

The supported preview install mode is preview_local:

  • PostgreSQL via docker compose
  • backend from source with uv
  • dashboard from source with npm

See docs/installation-modes.md for the install mode matrix and the ownership boundaries for future installer flows.


Key Concepts

Hive (platform gateway)
 ├── Keeper   — one per project, the project's PM agent (Claude Sonnet)
 ├── Scout    — one per group chat, the group assistant (Claude Haiku)
 └── Pipeline — background jobs for classify / alert / report workflows

Name      Role
Hive      Platform gateway — routes messages, manages credentials, runs the scheduler
Keeper    Project manager agent; creates Scouts, deploys pipeline jobs, processes feedback
Scout     Group chat assistant; queries data, collects feedback, runs Skills
Pipeline  Background execution path for scheduled analysis, alerts, reports, and other automation
Skill     Stateless capability unit executed as a subprocess (stdin/stdout JSON)
Plugin    Extension that adds platform capabilities such as connectors, policy packs, or deploy targets

Architecture

┌──────────────────── Hive Gateway (always on, LLM-free) ─────────────────────┐
│  Feishu WebSocket (lark-oapi SDK)  →  Message Router  →  Auth Checker       │
│  Gateway Bot (project creation, no LLM)                                      │
│  Dashboard Auth (username/password)  ·  Credential Proxy  ·  APScheduler    │
└──────────────────────────┬──────────────────────────────────────────────────┘
                           │  wake on demand
              ┌────────────┼────────────┐
              ▼            ▼            ▼
        HiveAgent      HiveAgent    HiveAgent     ← unified runtime
        Keeper         Scout        Scout         ← config differentiates roles
        (Sonnet)       (Haiku)      (Haiku)

              ↕ stdout/stdin JSON subprocess
        ┌─────────────────────────────────────┐
        │  Skills / Pipeline-Skills           │
        │  pre-processor → classifier →       │
        │  router → handler → post-processor  │
        └─────────────────────────────────────┘

┌──────────────────── Web Dashboard (Next.js) ────────────────────────────────┐
│  /projects      — overview, keeper status, pending work-item counts          │
│  /projects/[id] — detail: agents, trend chart, scouts, changes, work items  │
│  /audit         — full change history with inline diff viewer                │
│  /admin         — user management, invite codes                              │
└─────────────────────────────────────────────────────────────────────────────┘

Core design principles:

  • One runtime, N configs — HiveAgent is the only agent class; Keeper/Scout/Queen are instances configured differently.
  • Scope isolation — all DB operations go through ScopedDB, which enforces project_id/group_id boundaries automatically.
  • Skills are subprocesses — each Skill runs as a sandboxed Python script; no direct DB access, no shared state.
  • Gateway is LLM-free — the platform entry point uses rule matching only; no LLM tokens are spent on routing.
  • Secrets are gateway-managed — OpenHive already centralizes secret resolution, encrypted channel storage, and relay-backed sandbox model access, but the in-process LocalAgentPool path is not yet a complete gateway-only secret boundary for agent LLM calls.
  • Governance before sprawl — approvals, audit, rollout, rollback, and trusted extensions matter more than raw agent count.
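
The scope-isolation principle can be pictured with a small stand-in (a hypothetical sketch; the real ScopedDB wraps SQLAlchemy sessions and its API will differ, but the enforcement idea is the same: callers can never forget or bypass the project_id filter):

```python
# Hypothetical stand-in for ScopedDB; names and structure are assumptions.
class ScopedDB:
    def __init__(self, db, project_id):
        self._db = db
        self._project_id = project_id

    def query(self, table, **filters):
        filters["project_id"] = self._project_id  # injected unconditionally
        return self._db.query(table, **filters)

class FakeDB:
    """Trivial in-memory table store used only for this sketch."""
    def __init__(self, rows):
        self._rows = rows

    def query(self, table, **filters):
        return [r for r in self._rows[table]
                if all(r.get(k) == v for k, v in filters.items())]

db = FakeDB({"agents": [{"id": 1, "project_id": "p1"},
                        {"id": 2, "project_id": "p2"}]})
scoped = ScopedDB(db, "p1")
print(scoped.query("agents"))  # [{'id': 1, 'project_id': 'p1'}]
```

Because the filter is injected inside the wrapper, a bug in calling code can leak at most its own project's rows.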

Features

  • Unified agent loop — a single HiveAgent class handles all roles via AgentConfig injection
  • Feishu (Lark) integration — official lark-oapi SDK with WebSocket long connection, DM and group chat
  • Interactive card notifications — approval cards plus plugin-owned operational cards with action buttons
  • Card action handling — PM clicks "mark handled" / "approve" / "reject" directly inside Feishu
  • Multi-channel Feishu support — multiple Feishu apps per project; one channel designated as Keeper DM
  • Template-driven project creation — GatewayBot offers pre-built templates (e.g. Starter Workspace); PM picks a number or selects Custom; GET /api/templates exposes the same list in the Dashboard
  • Multi-turn project creation — GatewayBot guides PMs through template → name → keywords → confirm without LLM
  • Invite-code admission — whitelist + one-time invite codes for PM onboarding
  • Layered memory — MEMORY.md + role files + time-decayed date logs + GLOBAL.md budget system
  • Memory compaction — LLM-powered MEMORY.md compression with automatic backup/restore
  • Tool transparency — critical Keeper operations auto-notify the PM before and after execution
  • Config-driven scheduler — cron schedules live in config.yaml; HEARTBEAT.md is PM-editable without code changes
  • Skill pipeline — serial subprocess execution with stage_results chaining and timeout guards
  • Feedback loop — Scout writes to feedback_queue → Keeper analyses → ChangeProposal → PM approves via card
  • Skill version isolation — each project pins its own skill version; upgrades are opt-in and non-disruptive
  • Prompt shadow testing — Pipeline runs both production and shadow prompts in parallel, diffs outputs, stores results for PM review via Dashboard
  • Soft delete — projects, groups, and agents are logically deleted (deleted_at) rather than hard-removed, preserving audit trails
  • Rate limiting — sliding-window per-user rate limiter on Gateway and Dashboard API
  • Docker pipeline runtime — pipeline-skills run as Docker containers with drain-on-restart support
  • Web Dashboard — Next.js app with project overview, trend charts (ECharts), audit log with diff viewer, admin panel, prompt-test review
  • Dashboard auth — username/password login with scrypt-hashed passwords; HMAC-signed session cookies; admin bootstrapped from env vars
  • Multiple LLM providers — Anthropic Claude, OpenAI, DeepSeek, Qwen, Ollama (OpenAI-compatible)
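
The Dashboard auth ingredients (scrypt password hashes, HMAC-signed session tokens) can be sketched with the Python standard library alone. This is illustrative only: the cost parameters and token layout are assumptions, not OpenHive's actual implementation.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me-in-production"  # DASHBOARD_SESSION_SECRET in .env

def hash_password(password: str, salt: bytes) -> bytes:
    # scrypt cost parameters here are typical interactive-login settings,
    # not necessarily what OpenHive uses.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def sign_session(payload: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_session(token: str):
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or malformed token
    return json.loads(base64.urlsafe_b64decode(body))

token = sign_session({"user": "admin"})
print(verify_session(token))        # {'user': 'admin'}
print(verify_session(token + "x"))  # None (signature mismatch)
```

Signing rather than encrypting keeps the payload inspectable while still making forgery infeasible without the server secret.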

Preview Scope

  • Supported path — preview_local: local Docker PostgreSQL plus source-run backend and dashboard
  • Core operator workflow — log in to the Dashboard, create a project, inspect sessions/runs/traces, manage channels and Scouts
  • Optional integration — Feishu bot messaging and channel wiring when app credentials are provided
  • Experimental runtime surface — the governed workspace-task sandbox exists in backend and K8s baseline form, and you can run it locally on http://127.0.0.1:8091 with make run-sandbox, but it is still an experimental operator workflow rather than a preview-default requirement
  • Current trust-boundary limitation — preview_local still runs Keeper and Scout via the in-process LocalAgentPool, so gateway-only vendor-secret residency is fully enforced today only on relay-backed sandbox paths, not yet on the default in-process agent model path
  • Explicitly deferred — fully hardened sandbox productization, provider-neutral sandbox backends beyond the current codex path, multi-IM expansion, and v2 container orchestration work

See docs/preview-release-checklist.md for the release gate and the intentionally deferred backlog. See docs/installation-modes.md for why the current preview path is not the same as a future packaged docker_quickstart.


Tech Stack

Layer             Technology
Backend language  Python 3.12+
API server        FastAPI + uvicorn
LLM               Anthropic Claude (Sonnet 4.6 / Haiku 4.5)
Database          PostgreSQL 16 + asyncpg
ORM               SQLAlchemy 2.0 (async)
Feishu SDK        lark-oapi (WebSocket long connection)
Scheduler         APScheduler
Container         Docker (pipelines)
Config            Pydantic Settings + YAML
Logging           structlog
Package manager   uv
Frontend          Next.js 16 + React 19 + TypeScript
UI styles         Tailwind CSS v4
Server state      TanStack Query v5
Charts            ECharts 6
Frontend tests    Vitest + Testing Library

Quick Start

See docs/getting-started.md for the full setup guide with troubleshooting. That guide describes the supported preview_local path, not a packaged docker_quickstart. See docs/env-ownership.md for the planned managed .env boundary used by future installer flows.

Prerequisites

  • Python 3.12+, Node.js 18+
  • uv (pip install uv)
  • Docker (pgvector-backed PostgreSQL + pipelines)
  • (Optional) A Feishu Open Platform app — only needed for bot messaging

1. Clone and configure

git clone https://github.com/terrywangcode/openhive.git
cd openhive
cd server
uv sync --all-extras
uv run openhive setup

openhive setup is now the recommended preview-local entry point. It creates the managed .env, writes .openhive/install-state.json, boots local Docker PostgreSQL, runs migrations, and prints the next backend/dashboard commands. The current preview_local flow uses the fixed local ports 5432 (PostgreSQL), 8080 (API), and 3000 (dashboard dev server); custom port flags are not part of the supported installer contract yet.

The installer CLI resolves language in this order:

  1. --lang
  2. persisted install preference in .openhive/install-state.json
  3. OPENHIVE_LANG
  4. system locale
  5. English fallback
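
The precedence above could be sketched roughly like this (illustrative; the function name and the "lang" field in install-state.json are assumptions, not the installer's actual code):

```python
import json
import locale
import os
from pathlib import Path

def resolve_lang(cli_lang=None, state_file=".openhive/install-state.json"):
    """Resolve the installer UI language per the documented precedence."""
    if cli_lang:                              # 1. --lang
        return cli_lang
    state = Path(state_file)
    if state.exists():                        # 2. persisted install preference
        pref = json.loads(state.read_text()).get("lang")
        if pref:
            return pref
    env = os.environ.get("OPENHIVE_LANG")
    if env:                                   # 3. OPENHIVE_LANG
        return env
    loc = locale.getlocale()[0]               # 4. system locale, e.g. "en_US"
    if loc:
        return loc.split("_")[0]
    return "en"                               # 5. English fallback

print(resolve_lang(cli_lang="zh"))  # zh
```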

After setup:

cd ..
make run
cd web
npm install
npm run dev

Read-only installer support commands:

cd server
uv run openhive status
uv run openhive doctor

Lifecycle follow-up commands for an existing preview_local install:

cd server
uv run openhive update --yes
uv run openhive uninstall --yes
uv run openhive purge --confirm "PURGE OPENHIVE"

  • openhive update keeps operator-owned repo files intact, refreshes installer metadata, and runs pending migrations unless you pass --skip-migrate
  • openhive uninstall stops managed preview-local services and marks the install inactive while preserving project data and config backups
  • openhive purge is the destructive path; it removes only managed local preview-local resources after typed confirmation and does not touch external PostgreSQL resources
  • deferred install modes fail closed for these lifecycle commands until their runtime ownership model is implemented

1A. Manual contributor fallback

cp .env.example .env   # then edit with your values

Future installer flows should only manage the documented OpenHive-owned subset and preserve unrelated operator keys; see docs/env-ownership.md.

2. Start PostgreSQL manually

docker compose up -d postgres

The bundled docker-compose.yml uses pgvector/pgvector:pg16, so the local database supports the vector extension required by hybrid memory search.

3. Install and migrate manually

cd server
uv sync --all-extras
uv run alembic upgrade head

4. Run the backend

make run        # starts http://localhost:8080
curl http://localhost:8080/healthz   # {"status":"ok"}

5. Optional: run the sandbox

If you want Keeper-driven dev-task workflows locally, start the sandbox in a second terminal:

make run-sandbox   # starts http://localhost:8091
curl http://localhost:8091/healthz   # {"status":"ok","role":"sandbox"}

This sandbox lane exists today, but it is still an experimental operator path inside preview_local rather than a preview-default requirement.

6. Run the Dashboard

cd web && npm install && npm run dev   # http://localhost:3000

API calls are proxied to :8080 via Next.js rewrites, and the production build is designed to succeed without fetching remote fonts.

7. Log in to the Dashboard

Open http://localhost:3000 — you'll be redirected to the login page. Log in with the admin credentials set in .env (HIVE_ADMIN_USERNAME / HIVE_ADMIN_PASSWORD).

Once logged in, go to Admin to create PM user accounts (username + password).

8. Create your first project

Open Projects in the Dashboard and click New Project.

  • Choose a template
  • Set the project name and keywords
  • Save the project and verify it appears in the list

If you also want live Feishu routing, go to the project Settings page and add a Feishu channel after the core dashboard flow is working.

9. Run tests

make test          # backend unit tests (no server required)
cd web && npm test # frontend component tests
make test-e2e      # API smoke tests (requires a live backend server)

Project Layout

The repository map now lives in docs/project-layout.md.

Use it when you need a quick directory-level guide to:

  • backend runtime and gateway code under server/hive/
  • dashboard pages and components under web/src/
  • shared API contracts under packages/shared/
  • public docs under docs/; maintainer-only private notes and KB live in a separate companion repo outside this public checkout

How It Works

PM onboarding

Admin creates PM account in Dashboard (/admin)
   ↓
PM logs into Dashboard with username/password
   ↓
PM opens Feishu → sends message to Hive Bot
   ↓
AuthChecker: is PM's Feishu ID bound to a platform user?
   No  → "Enter your invite code"
   Yes ↓
GatewayBot: does PM have a project?
   No  → multi-turn create flow (template → name → keywords → confirm)
   Yes → route to Keeper
   ↓
Keeper initialised → sends intro DM to PM
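
Because the Gateway is LLM-free, the branch points above reduce to plain rule matching; a minimal illustrative sketch (function and return names are assumptions):

```python
# LLM-free gateway routing: pure conditionals, no tokens spent.
def route(sender_bound: bool, has_project: bool) -> str:
    if not sender_bound:
        return "invite_code_prompt"   # AuthChecker: unknown Feishu ID
    if not has_project:
        return "gateway_create_flow"  # GatewayBot multi-turn creation
    return "keeper"                   # hand off to the project's Keeper

print(route(sender_bound=True, has_project=False))  # gateway_create_flow
```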

Group chat message flow

Group message arrives
   ↓ pre-processor (normalise, strip @mentions)
   ↓ intent-classifier-hybrid (rules → LLM fallback)
   ↓ router (select handler from intent)
   ↓ handler (resolved from installed skill capabilities)
   ↓ post-processor (format reply)
   → Scout sends reply to group
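
The stage chaining above can be sketched in-process (illustrative only; the real runner executes each stage as a subprocess and threads stage_results through the JSON envelope, but the chaining contract looks like this):

```python
# Each stage receives the full envelope, including all prior stage outputs.
def pre_processor(data):
    return {"normalized": data["message"].replace("@bot", "").strip()}

def intent_classifier(data):
    msg = data["stage_results"]["pre-processor"]["normalized"]
    return {"intent": "query" if "?" in msg else "chat"}

def run_stages(message, stages):
    data = {"message": message, "stage_results": {}}
    for name, stage in stages:
        data["stage_results"][name] = stage(data)  # serial, order matters
    return data["stage_results"]

results = run_stages("@bot what is the status?",
                     [("pre-processor", pre_processor),
                      ("intent-classifier", intent_classifier)])
print(results["intent-classifier"]["intent"])  # query
```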

Feedback loop

Scout.write_feedback() → feedback_queue table
   ↓ (cron: 18:00 daily)
Keeper wakes up → query_data (unprocessed feedback)
   ↓
Configured evolution plugin handles feedback events
   → ChangeProposal list (project-scoped adjustments)
   ↓
Keeper sends approval card to PM → PM clicks Approve/Reject in Feishu
   ↓ WebSocket card_action event → handle_change_approval()

Writing a Custom Skill

Each Skill is a standalone Python script communicating over stdin/stdout:

# skills/my-skill/scripts/run.py
import json, sys

def main():
    data = json.loads(sys.stdin.read())
    # data: {"message": str, "context": dict, "config": dict, "stage_results": dict}
    result = {"reply": f"Processed: {data['message']}", "error": None}
    print(json.dumps(result))

if __name__ == "__main__":
    main()
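
You can exercise the stdin/stdout contract without the platform. The snippet below recreates the script and pipes the envelope to it (a self-contained smoke test; python3 on PATH is assumed):

```shell
mkdir -p skills/my-skill/scripts
cat > skills/my-skill/scripts/run.py <<'EOF'
import json, sys

def main():
    data = json.loads(sys.stdin.read())
    result = {"reply": f"Processed: {data['message']}", "error": None}
    print(json.dumps(result))

if __name__ == "__main__":
    main()
EOF

echo '{"message": "hello", "context": {}, "config": {}, "stage_results": {}}' \
  | python3 skills/my-skill/scripts/run.py
# {"reply": "Processed: hello", "error": null}
```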

Add a SKILL.md alongside it:

---
name: my-skill
version: 1.0.0
description: My custom skill
skill_type: handler
trigger_intent: "query"
compatible_group_types:
  - internal
  - client
---

Install via Keeper:

PM: "Install my-skill into the internal group"
Keeper: Skill 'my-skill' installed into group oc_xxx.


Writing a Custom Plugin

from hive.plugins.base import EvolutionPlugin, ChangeProposal

class MyEvolutionPlugin(EvolutionPlugin):
    async def on_feedback_event(self, project_id, feedbacks):
        return [ChangeProposal(
            change_type="config_update",
            description="Adjust threshold",
            diff_content="threshold: 0.7 → 0.65",
            risk_level="low",
            auto_apply=False,
        )]

    async def run_shadow_test(self, project_id, proposal):
        return {"passed": True, "report": "Looks good."}

Configuration Reference

Environment variables (.env)

DATABASE_URL — PostgreSQL async URL (default: postgresql+asyncpg://hive:password@localhost:5432/hive)
DB_PASSWORD — Password used by docker compose for the local PostgreSQL container (default: password)
ANTHROPIC_API_KEY — Anthropic API key
DASHBOARD_SESSION_SECRET — HMAC key for Dashboard session tokens (default: change-me-in-production)
HIVE_ADMIN_USERNAME — Bootstrap admin username (default: admin)
HIVE_ADMIN_PASSWORD — Bootstrap admin password (required for auto-creation)
HIVE_ADMIN_FEISHU_USER_ID — Optional Feishu ID for admin bot binding
HIVE_WORKSPACE — Project data root: server for source setup, /data in the compose gateway (default: server)
HIVE_SANDBOX_URL — Optional local sandbox API base URL for Keeper dev-task workflows (default: http://127.0.0.1:8091)
HIVE_INTERNAL_SECRET — Shared internal secret used by preview-local relay and sandbox flows
HIVE_MAX_ACTIVE_AGENTS — Max concurrent agents (default: 20)
FEISHU_APP_ID — Feishu app ID (optional — for bot messaging)
FEISHU_APP_SECRET — Feishu app secret
FEISHU_VERIFICATION_TOKEN — Feishu verification token
FEISHU_ENCRYPT_KEY — Feishu encrypt key (for WebSocket event subscription)
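
A minimal .env for a Feishu-free local preview might look like this (all values are placeholders; substitute your own):

```shell
DATABASE_URL=postgresql+asyncpg://hive:password@localhost:5432/hive
DB_PASSWORD=password
ANTHROPIC_API_KEY=sk-ant-your-key-here
DASHBOARD_SESSION_SECRET=replace-with-a-long-random-string
HIVE_ADMIN_USERNAME=admin
HIVE_ADMIN_PASSWORD=replace-with-a-strong-password
```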

Project config (projects/{id}/config.yaml)

project:
  id: my_project
  name: "My Project"
  status: active

pm_user_id: "alice"

keeper_identity:
  name: "Bee"
  vibe: "professional, concise, data-driven"

schedules:
  heartbeat:
    cron: "0 * * * *"
    prompt_file: HEARTBEAT.md     # PM-editable checklist
  process_feedback:
    cron: "0 18 * * *"
    prompt: "Process the feedback queue."
  pipeline:
    cron: "0 * * * *"
    run_pipeline: true
    prompt: "Check pipeline results."

pm_user_id is the PM's platform username. The Feishu account used for Keeper pairing is stored separately in project_pm_bindings.feishu_user_id.
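
A rough sketch of how such a schedules block maps to registered jobs (FakeScheduler stands in for APScheduler here; the real loader's behavior is an assumption):

```python
# Mirrors the config.yaml sample above as a plain dict.
config = {"schedules": {
    "heartbeat":        {"cron": "0 * * * *",  "prompt_file": "HEARTBEAT.md"},
    "process_feedback": {"cron": "0 18 * * *", "prompt": "Process the feedback queue."},
}}

class FakeScheduler:
    """Stand-in for APScheduler used only for this sketch."""
    def __init__(self):
        self.jobs = []
    def add_cron_job(self, name, cron, payload):
        self.jobs.append((name, cron, payload))

sched = FakeScheduler()
for name, spec in config["schedules"].items():
    # A schedule can carry an inline prompt or a PM-editable prompt file.
    payload = spec.get("prompt") or spec.get("prompt_file")
    sched.add_cron_job(name, spec["cron"], payload)

print(sched.jobs[0])  # ('heartbeat', '0 * * * *', 'HEARTBEAT.md')
```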


Make Targets

make run          # Start the backend (hot-reload, :8080)
make dev-web      # Start the Dashboard dev server (:3000)
make test         # Backend unit tests (no server required)
make test-e2e     # API smoke tests (requires a live backend server)
make lint         # ruff (backend) + eslint (frontend)
make format       # ruff format
make migrate      # alembic upgrade head
make db           # Start pgvector-backed PostgreSQL via docker compose
make db-stop      # Stop PostgreSQL
make install      # uv sync --all-extras + npm install
make build-web    # Production build of the Dashboard

Contributing

See CONTRIBUTING.md.

For private vulnerability reporting and supported security scope, see SECURITY.md.

For shipped-version maintenance and hotfixes, also see docs/release-workflow.md.

All code, comments, commit messages, and documentation must be in English. User-facing messages sent to Feishu (runtime) may be in Chinese.


License

MIT — see LICENSE.
