An Agent Development Environment (ADE) for building, running, and improving autonomous coding agents.
Status: In active development. Phase 0 complete (ACP server, iterative refinement, task pipeline). See ROADMAP.md for what's next.
Crow is NOT just an ACP server, NOT just an IDE. It's a complete environment where:
- Humans plan in a journal (Logseq-inspired knowledge base)
- Humans + agents prime the environment together (pair programming)
- Autonomous agents work in the primed environment (read journal → write code → document decisions)
- Humans review in the journal and provide feedback
- Knowledge accumulates and agents get better over time
```bash
cd crow
uv sync
```

Run the ACP server:

```bash
python -m crow.agent.acp_server
```

The server listens on stdin/stdout for JSON-RPC messages following the ACP protocol.
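Because the server speaks JSON-RPC over stdio, any ACP-capable client can drive it by spawning the process and exchanging messages. The sketch below is illustrative only: the `initialize` method name, the `protocolVersion` field, and the newline-delimited framing are assumptions based on the ACP spec rather than Crow's code.

```python
import json
import subprocess

# Minimal sketch of an ACP client driving the Crow server over stdio.
# Assumptions: newline-delimited JSON-RPC framing and an "initialize"
# handshake; check the ACP spec / Crow's server for the exact contract.
proc = subprocess.Popen(
    ["python", "-m", "crow.agent.acp_server"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",            # assumed handshake method
    "params": {"protocolVersion": 1},  # illustrative params
}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

response = json.loads(proc.stdout.readline())  # read the JSON-RPC reply
print(response)
```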
Run the task pipeline against a plan file:

```bash
python task_pipeline.py --plan-file PLAN.md
```

This will:
- Split PLAN.md into tasks
- Run each task through iterative refinement (planning → implementation → critique)
- Track progress and results
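Conceptually, the pipeline is a loop over tasks in which every task passes through the same refinement stages. The sketch below is a simplified illustration under stated assumptions: `split_plan` and `run_stage` are hypothetical helpers, not Crow's actual functions, and a real stage runner would prompt the agent instead of returning a stub.

```python
from pathlib import Path

def split_plan(plan_text: str) -> list[str]:
    """Hypothetical splitter: treat each top-level bullet in PLAN.md as one task."""
    return [line[2:].strip() for line in plan_text.splitlines() if line.startswith("- ")]

def run_stage(stage: str, task: str, context: dict) -> dict:
    """Hypothetical stage runner: the real pipeline would call the agent here."""
    return {"stage": stage, "task": task, "output": "..."}

def run_pipeline(plan_file: str) -> list[dict]:
    """Run every task through planning -> implementation -> critique -> documentation."""
    results = []
    for task in split_plan(Path(plan_file).read_text()):
        context: dict = {"task": task}
        for stage in ("planning", "implementation", "critique", "documentation"):
            context[stage] = run_stage(stage, task, context)
        results.append(context)  # track progress and results per task
    return results
```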
- DESIGN.md - Vision, architecture, and design decisions
- CURRENT_STATE.md - Analysis of current code and what needs fixing
- ROADMAP.md - Development phases and timeline
- AGENTS.md - Project-specific knowledge for agents
- REFACTOR_PLAN.md - Original refactor plan (superseded by ROADMAP.md)
- ✅ ACP Server - Streaming ACP server wrapping OpenHands SDK
- ✅ Iterative Refinement - Planning → Implementation → Critique → Documentation loop
- ✅ Task Pipeline - Split PLAN.md into tasks, run sequentially
- ✅ MCP Integration - playwright, zai-vision, fetch, web_search
- ✅ Session Management - Multiple concurrent sessions with persistence
- ✅ Slash Commands - /help, /clear, /status
- 🚧 Restructure - Moving files from root to `src/crow/`
- 📋 Jinja2 Templates - Replace hardcoded prompts with templates (a sketch follows this list)
- 📋 Environment Priming - Human + agent pair programming before autonomous phase
- 📋 Project Management - `/projects/` directory, git repos, journals
- 📋 Journal Page - Logseq-inspired knowledge base
- 📋 Web UI - CodeBlitz/Monaco integration
- 📋 Feedback Loops - Capture human feedback, feed to agents
- 📋 Telemetry - Self-hosted Laminar/Langfuse
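To make the planned Jinja2 Templates item concrete, here is a minimal sketch of how a hardcoded stage prompt could become a template. The template text and the `task` / `journal_context` variables are illustrative assumptions, not Crow's actual prompts.

```python
from jinja2 import Template

# Illustrative only: real templates would likely live as files under
# src/crow/ and be loaded via a Jinja2 Environment rather than inlined.
PLANNING_PROMPT = Template(
    "You are planning the following task.\n"
    "Relevant journal context:\n{{ journal_context }}\n\n"
    "Task: {{ task }}\n"
    "Write a step-by-step plan before touching any code."
)

print(PLANNING_PROMPT.render(
    task="Add session persistence to the ACP server",
    journal_context="- Sessions are keyed by id\n- State lives under src/crow/",
))
```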
```
Crow
├── ACP Server (src/crow/agent/)
│   └── Streaming ACP protocol implementation
├── Orchestration (src/crow/orchestration/)
│   ├── Environment priming
│   ├── Task splitting
│   ├── Iterative refinement
│   └── Task pipeline
├── Web UI (Future)
│   ├── CodeBlitz/Monaco editor
│   ├── Journal page
│   ├── Project browser
│   └── Terminal
└── Projects (/projects/)
    └── Each project = git repo + journal
```
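The "each project = git repo + journal" convention could be discovered roughly as sketched below; the `/projects` path handling, the `journal/` directory name, and the `Project` dataclass are assumptions for illustration, not Crow's actual layout code.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Project:
    """Illustrative model of the git repo + journal convention."""
    root: Path      # project directory under /projects/
    journal: Path   # journal pages (directory name is an assumption)

def discover_projects(projects_dir: str = "/projects") -> list[Project]:
    """Treat every git repository directly under /projects/ as a Crow project."""
    base = Path(projects_dir)
    if not base.is_dir():
        return []
    return [
        Project(root=p, journal=p / "journal")
        for p in sorted(base.iterdir())
        if (p / ".git").exists()
    ]
```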
Current AI coding tools:
- ❌ Drop agents into empty workspaces (no context)
- ❌ Lose agent decisions in markdown files ("lost like tears in rain")
- ❌ No feedback loop (human review not captured)
- ❌ No knowledge accumulation
Our solution:
- ✅ Environment priming - Human + agent set up context first
- ✅ Journal - All decisions documented and linked
- ✅ Feedback loops - Human review captured and fed back
- ✅ Knowledge accumulation - Agents get better over time
This is a personal project, but feedback and ideas are welcome!
License: MIT
- Agent Client Protocol - The protocol Crow's agent server implements
- OpenHands SDK - The agent SDK the ACP server wraps
- Model Context Protocol - Tool integration (playwright, zai-vision, fetch, web_search)
- Trae Solo - Autonomous development inspiration
- Google Antigravity - Agent-first IDE inspiration
- Logseq - Knowledge management inspiration
- CodeBlitz - Web IDE foundation
"The agent is the primary developer, humans are the critics/product managers."
Modified with Crow ADE
