AOF is an Apache 2.0 open source Rust framework for building agentic applications.
Repository: https://github.com/agenticdevops/aof
License: Apache 2.0
Type: Pure Rust library crates + CLI (NO desktop/Tauri)
```
/Users/gshah/work/opsflow-sh/
├── aof/        # THIS REPO - Open source framework
├── kubepilot/  # Closed source K8s desktop (imports AOF crates)
└── opspilot/   # Closed source enterprise (imports AOF crates)
```
- `aof-core` - Core traits, types, interfaces
- `aof-llm` - LLM provider abstraction
- `aof-mcp` - MCP client
- `aof-runtime` - Agent execution
- `aof-memory` - State management
- `aof-triggers` - Event triggers
- `aofctl` - CLI binary
KubePilot and OpsPilot import AOF crates:
```toml
aof-core = { path = "../../aof/aof/crates/aof-core" }
aof-llm = { path = "../../aof/aof/crates/aof-llm" }
```

Documentation: https://docs.aof.sh
Installation: curl -sSL https://docs.aof.sh/install.sh | bash
The release process is fully automated via GitHub Actions. DO NOT create releases manually.
```bash
# 1. Create and push a version tag (triggers automated build)
git tag -a v0.1.14 -m "Release v0.1.14: Brief description"
git push origin v0.1.14

# 2. Monitor: https://github.com/agenticdevops/aof/actions
# 3. Verify: https://github.com/agenticdevops/aof/releases
```

The workflow will automatically:
- Build binaries for Linux, macOS (Intel & Apple Silicon), Windows
- Calculate SHA256 checksums
- Create GitHub Release with formatted release notes
- Include installation instructions
The workflow creates consistent release notes with:
- Installation instructions (curl | bash)
- Manual download links
- Checksum verification commands
- Getting started guide
Use semantic versioning: vMAJOR.MINOR.PATCH
- MAJOR: Breaking changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes
Full details: See RELEASE_PROCESS.md
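As an illustration of the tag format (the helper below is hypothetical, not part of the repo), a `vMAJOR.MINOR.PATCH` tag can be parsed like this:

```rust
// Hypothetical helper, for illustration only: parse a vMAJOR.MINOR.PATCH
// release tag such as "v0.1.14" into its numeric components.
fn parse_semver(tag: &str) -> Option<(u32, u32, u32)> {
    let v = tag.strip_prefix('v')?; // release tags carry a leading "v"
    let mut parts = v.splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

fn main() {
    assert_eq!(parse_semver("v0.1.14"), Some((0, 1, 14)));
    assert_eq!(parse_semver("0.1.14"), None); // missing "v" prefix is rejected
}
```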
ABSOLUTE RULES:
- ALL operations MUST be concurrent/parallel in a single message
- NEVER save working files, text/markdown files, or tests to the root folder
- ALWAYS organize files in appropriate subdirectories
- USE CLAUDE CODE'S TASK TOOL for spawning agents concurrently, not just MCP
MANDATORY PATTERNS:
- TodoWrite: ALWAYS batch ALL todos in ONE call (5-10+ todos minimum)
- Task tool (Claude Code): ALWAYS spawn ALL agents in ONE message with full instructions
- File operations: ALWAYS batch ALL reads/writes/edits in ONE message
- Bash commands: ALWAYS batch ALL terminal operations in ONE message
- Memory operations: ALWAYS batch ALL memory store/retrieve in ONE message
Claude Code's Task tool is the PRIMARY way to spawn agents:
// ✅ CORRECT: Use Claude Code's Task tool for parallel agent execution
[Single Message]:
Task("Research agent", "Analyze requirements and patterns...", "researcher")
Task("Coder agent", "Implement core features...", "coder")
Task("Tester agent", "Create comprehensive tests...", "tester")
Task("Reviewer agent", "Review code quality...", "reviewer")
Task("Architect agent", "Design system architecture...", "system-architect")

MCP tools are ONLY for coordination setup:
- `mcp__claude-flow__swarm_init` - Initialize coordination topology
- `mcp__claude-flow__agent_spawn` - Define agent types for coordination
- `mcp__claude-flow__task_orchestrate` - Orchestrate high-level workflows
NEVER save to root folder. Use these directories:
- `/src` - Source code files
- `/tests` - Test files
- `/docs` - Documentation and markdown files
- `/config` - Configuration files
- `/scripts` - Utility scripts
- `/examples` - Example code
This project uses SPARC (Specification, Pseudocode, Architecture, Refinement, Completion) methodology with Claude-Flow orchestration for systematic Test-Driven Development.
- `npx claude-flow sparc modes` - List available modes
- `npx claude-flow sparc run <mode> "<task>"` - Execute specific mode
- `npx claude-flow sparc tdd "<feature>"` - Run complete TDD workflow
- `npx claude-flow sparc info <mode>` - Get mode details
- `npx claude-flow sparc batch <modes> "<task>"` - Parallel execution
- `npx claude-flow sparc pipeline "<task>"` - Full pipeline processing
- `npx claude-flow sparc concurrent <mode> "<tasks-file>"` - Multi-task processing
Quick validation (as needed):
```bash
# 1. Optional: Fast pre-compile checks (5 seconds) - catches 80% of errors
./scripts/test-pre-compile.sh

# 2. Build:
cargo build --release

# 3. Optional: End-to-end validation:
./scripts/test-agent.sh
```

When to use pre-compile tests:
- ✅ Before major changes
- ✅ When debugging issues
- ✅ When testing new features
- ✅ 9x faster than full build (5s vs 45s)
- ✅ Catches syntax, unit tests, patterns in one go
Complete Build Process:
- `./scripts/test-pre-compile.sh` - Fast validation
- `cargo check --all-features` - Syntax validation
- `cargo test --lib` - Unit tests
- `cargo build --release` - Full release build
- `./scripts/test-agent.sh` - End-to-end validation
Rust-Specific Commands:
- `cargo check` - Quick syntax check (no build)
- `cargo test --lib` - Unit tests only
- `cargo build --release` - Optimized release build
- `cargo clippy --all-targets` - Static analysis
- Specification - Requirements analysis (`sparc run spec-pseudocode`)
- Pseudocode - Algorithm design (`sparc run spec-pseudocode`)
- Architecture - System design (`sparc run architect`)
- Refinement - TDD implementation (`sparc tdd`)
- Completion - Integration (`sparc run integration`)
- Modular Design: Files under 500 lines
- Environment Safety: Never hardcode secrets
- Test-First: Write tests before implementation
- Clean Architecture: Separate concerns
- Documentation: Keep updated
- Helpful Error Messages: Use `serde_path_to_error` for YAML/JSON parsing to show exact field paths on errors
Always use serde_path_to_error when deserializing user-provided config files. This gives precise error locations instead of vague "didn't match" errors.
```rust
// Bad: Generic error messages
let config: Config = serde_yaml::from_str(&content)?;
// Error: "data did not match any variant of untagged enum"

// Good: Precise field path errors
let deserializer = serde_yaml::Deserializer::from_str(&content);
let config: Config = serde_path_to_error::deserialize(deserializer)
    .map_err(|e| anyhow!("Field: {}\nError: {}", e.path(), e.inner()))?;
// Error: "Field: spec.memory\nError: invalid type: map, expected string"
```

Add to Cargo.toml:

```toml
serde_path_to_error = "0.1"
```

Optional pre-compile validation (5 seconds):
./scripts/test-pre-compile.sh

This validates:
- ✅ Syntax errors
- ✅ Unit tests
- ✅ Clippy static analysis
- ✅ MCP initialization patterns
- ✅ Common error patterns
- ✅ Configuration consistency
| Scenario | Command | Time |
|---|---|---|
| Quick check before building | `./scripts/test-pre-compile.sh` | 5s |
| Unit tests only | `cargo test --lib` | 10s |
| MCP initialization | `cargo test --lib mcp_initialization` | 5s |
| Tool executor | `cargo test --lib tool_executor` | 5s |
| Full release build | `cargo build --release` | 45s |
| Full validation | `./scripts/test-agent.sh` | 10s |
| All tests | `cargo test --all` | 30s |
The codebase includes an Error Knowledge Base (RAG) that:
- Tracks recurring errors
- Stores solutions for known problems
- Helps agents learn from past mistakes
- Prevents the same error from happening twice
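As a rough self-contained sketch of the idea (the real `ErrorKnowledgeBase` in `aof-core` may differ in API and storage; the struct below is hypothetical), such a knowledge base might look like:

```rust
use std::collections::HashMap;

// Hypothetical sketch only; the real ErrorKnowledgeBase in aof-core
// may use a different API and persistent storage.
#[derive(Default)]
struct KnowledgeBase {
    // recorded error message -> known solution
    entries: HashMap<String, String>,
}

impl KnowledgeBase {
    fn record(&mut self, error: &str, solution: &str) {
        self.entries.insert(error.to_string(), solution.to_string());
    }

    // Return solutions whose recorded error contains every keyword.
    fn find_similar(&self, keywords: &[&str]) -> Vec<&str> {
        self.entries
            .iter()
            .filter(|(err, _)| keywords.iter().all(|k| err.contains(*k)))
            .map(|(_, sol)| sol.as_str())
            .collect()
    }
}

fn main() {
    let mut kb = KnowledgeBase::default();
    kb.record("MCP initialize timeout", "increase handshake timeout");
    assert_eq!(
        kb.find_similar(&["MCP", "initialize"]),
        vec!["increase handshake timeout"]
    );
    assert!(kb.find_similar(&["unknown"]).is_empty());
}
```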
Access it in code:

```rust
use aof_core::ErrorKnowledgeBase;

let kb = ErrorKnowledgeBase::new();
let similar_errors = kb.find_similar("MCP", &["initialize"]);
let stats = kb.stats();
```

Core: coder, reviewer, tester, planner, researcher
Swarm coordination: hierarchical-coordinator, mesh-coordinator, adaptive-coordinator, collective-intelligence-coordinator, swarm-memory-manager
Consensus & distributed: byzantine-coordinator, raft-manager, gossip-coordinator, consensus-builder, crdt-synchronizer, quorum-manager, security-manager
Performance & optimization: perf-analyzer, performance-benchmarker, task-orchestrator, memory-coordinator, smart-agent
GitHub & repository: github-modes, pr-manager, code-review-swarm, issue-tracker, release-manager, workflow-automation, project-board-sync, repo-architect, multi-repo-swarm
SPARC methodology: sparc-coord, sparc-coder, specification, pseudocode, architecture, refinement
Specialized development: backend-dev, mobile-dev, ml-developer, cicd-engineer, api-docs, system-architect, code-analyzer, base-template-generator
Testing & validation: tdd-london-swarm, production-validator
Migration & planning: migration-planner, swarm-init
Claude Code handles ALL execution:
- Task tool: Spawn and run agents concurrently for actual work
- File operations (Read, Write, Edit, MultiEdit, Glob, Grep)
- Code generation and programming
- Bash commands and system operations
- Implementation work
- Project navigation and analysis
- TodoWrite and task management
- Git operations
- Package management
- Testing and debugging
MCP tools are ONLY for coordination:
- Swarm initialization (topology setup)
- Agent type definitions (coordination patterns)
- Task orchestration (high-level planning)
- Memory management
- Neural features
- Performance tracking
- GitHub integration
KEY: MCP coordinates the strategy, Claude Code's Task tool executes with real agents.
```bash
# Add MCP servers (Claude Flow required, others optional)
claude mcp add claude-flow npx claude-flow@alpha mcp start
claude mcp add ruv-swarm npx ruv-swarm mcp start           # Optional: Enhanced coordination
claude mcp add flow-nexus npx flow-nexus@latest mcp start  # Optional: Cloud features
```

Coordination: `swarm_init`, `agent_spawn`, `task_orchestrate`
Monitoring: `swarm_status`, `agent_list`, `agent_metrics`, `task_status`, `task_results`
Memory & neural: `memory_usage`, `neural_status`, `neural_train`, `neural_patterns`
GitHub: `github_swarm`, `repo_analyze`, `pr_enhance`, `issue_triage`, `code_review`
System: `benchmark_run`, `features_detect`, `swarm_monitor`
Flow-Nexus extends MCP capabilities with 70+ cloud-based orchestration tools:
Key MCP Tool Categories:
- Swarm & Agents: `swarm_init`, `swarm_scale`, `agent_spawn`, `task_orchestrate`
- Sandboxes: `sandbox_create`, `sandbox_execute`, `sandbox_upload` (cloud execution)
- Templates: `template_list`, `template_deploy` (pre-built project templates)
- Neural AI: `neural_train`, `neural_patterns`, `seraphina_chat` (AI assistant)
- GitHub: `github_repo_analyze`, `github_pr_manage` (repository management)
- Real-time: `execution_stream_subscribe`, `realtime_subscribe` (live monitoring)
- Storage: `storage_upload`, `storage_list` (cloud file management)
Authentication Required:
- Register: `mcp__flow-nexus__user_register` or `npx flow-nexus@latest register`
- Login: `mcp__flow-nexus__user_login` or `npx flow-nexus@latest login`
- Access 70+ specialized MCP tools for advanced orchestration
- Optional: Use MCP tools to set up coordination topology
- REQUIRED: Use Claude Code's Task tool to spawn agents that do actual work
- REQUIRED: Each agent runs hooks for coordination
- REQUIRED: Batch all operations in single messages
// Single message with all agent spawning via Claude Code's Task tool
[Parallel Agent Execution]:
Task("Backend Developer", "Build REST API with Express. Use hooks for coordination.", "backend-dev")
Task("Frontend Developer", "Create React UI. Coordinate with backend via memory.", "coder")
Task("Database Architect", "Design PostgreSQL schema. Store schema in memory.", "code-analyzer")
Task("Test Engineer", "Write Jest tests. Check memory for API contracts.", "tester")
Task("DevOps Engineer", "Setup Docker and CI/CD. Document in memory.", "cicd-engineer")
Task("Security Auditor", "Review authentication. Report findings via hooks.", "reviewer")
// All todos batched together
TodoWrite { todos: [...8-10 todos...] }
// All file operations together
Write "backend/server.js"
Write "frontend/App.jsx"
Write "database/schema.sql"

1️⃣ BEFORE Work:
npx claude-flow@alpha hooks pre-task --description "[task]"
npx claude-flow@alpha hooks session-restore --session-id "swarm-[id]"

2️⃣ DURING Work:
npx claude-flow@alpha hooks post-edit --file "[file]" --memory-key "swarm/[agent]/[step]"
npx claude-flow@alpha hooks notify --message "[what was done]"

3️⃣ AFTER Work:
npx claude-flow@alpha hooks post-task --task-id "[task]"
npx claude-flow@alpha hooks session-end --export-metrics true

// Step 1: MCP tools set up coordination (optional, for complex tasks)
[Single Message - Coordination Setup]:
mcp__claude-flow__swarm_init { topology: "mesh", maxAgents: 6 }
mcp__claude-flow__agent_spawn { type: "researcher" }
mcp__claude-flow__agent_spawn { type: "coder" }
mcp__claude-flow__agent_spawn { type: "tester" }
// Step 2: Claude Code Task tool spawns ACTUAL agents that do the work
[Single Message - Parallel Agent Execution]:
// Claude Code's Task tool spawns real agents concurrently
Task("Research agent", "Analyze API requirements and best practices. Check memory for prior decisions.", "researcher")
Task("Coder agent", "Implement REST endpoints with authentication. Coordinate via hooks.", "coder")
Task("Database agent", "Design and implement database schema. Store decisions in memory.", "code-analyzer")
Task("Tester agent", "Create comprehensive test suite with 90% coverage.", "tester")
Task("Reviewer agent", "Review code quality and security. Document findings.", "reviewer")
// Batch ALL todos in ONE call
TodoWrite { todos: [
{id: "1", content: "Research API patterns", status: "in_progress", priority: "high"},
{id: "2", content: "Design database schema", status: "in_progress", priority: "high"},
{id: "3", content: "Implement authentication", status: "pending", priority: "high"},
{id: "4", content: "Build REST endpoints", status: "pending", priority: "high"},
{id: "5", content: "Write unit tests", status: "pending", priority: "medium"},
{id: "6", content: "Integration tests", status: "pending", priority: "medium"},
{id: "7", content: "API documentation", status: "pending", priority: "low"},
{id: "8", content: "Performance optimization", status: "pending", priority: "low"}
]}
// Parallel file operations
Bash "mkdir -p app/{src,tests,docs,config}"
Write "app/package.json"
Write "app/src/server.js"
Write "app/tests/server.test.js"
Write "app/docs/API.md"

❌ WRONG: One operation per message:

Message 1: mcp__claude-flow__swarm_init
Message 2: Task("agent 1")
Message 3: TodoWrite { todos: [single todo] }
Message 4: Write "file.js"
// This breaks parallel coordination!

Performance benefits:
- 84.8% SWE-Bench solve rate
- 32.3% token reduction
- 2.8-4.4x speed improvement
- 27+ neural models
Pre-operation hooks:
- Auto-assign agents by file type
- Validate commands for safety
- Prepare resources automatically
- Optimize topology by complexity
- Cache searches

Post-operation hooks:
- Auto-format code
- Train neural patterns
- Update memory
- Analyze performance
- Track token usage

Session hooks:
- Generate summaries
- Persist state
- Track metrics
- Restore context
- Export workflows
- Automatic Topology Selection
- Parallel Execution (2.8-4.4x speed)
- Neural Training
- Bottleneck Analysis
- Smart Auto-Spawning
- Self-Healing Workflows
- Cross-Session Memory
- GitHub Integration
- Start with basic swarm init
- Scale agents gradually
- Use memory for context
- Monitor progress regularly
- Train patterns from success
- Enable hooks automation
- Use GitHub tools first
- Documentation: https://github.com/ruvnet/claude-flow
- Issues: https://github.com/ruvnet/claude-flow/issues
- Flow-Nexus Platform: https://flow-nexus.ruv.io (registration required for cloud features)
Remember: Claude Flow coordinates, Claude Code creates!
Do what has been asked; nothing more, nothing less. NEVER create files unless they're absolutely necessary for achieving your goal. ALWAYS prefer editing an existing file to creating a new one. NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User. Never save working files, text/mds and tests to the root folder.
- Strictly follow kubectl-style implementation for aofctl. For example, use "aofctl run agent" instead of "aofctl agent run". If you find anything non-compliant, correct it.
- For every feature added, add/update docs/ so that we keep track of every single feature the product has and how it works.
- When you make changes, first update the internal docs, then implement, verify the implementation matches the docs, then also update the user docs with concepts, examples, resource spec, tutorials, etc.
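The kubectl-style ordering rule can be sketched as follows (a hypothetical dispatcher for illustration, not the actual aofctl source):

```rust
// Hypothetical dispatcher, NOT the actual aofctl source: illustrates
// kubectl-style verb-first ordering ("aofctl run agent my-agent").
fn dispatch(args: &[&str]) -> Result<String, String> {
    match args {
        ["run", resource, name] => Ok(format!("running {resource} {name}")),
        ["get", resource] => Ok(format!("listing {resource}")),
        // Reject resource-first ordering such as "aofctl agent run".
        [resource, verb, ..] if matches!(*verb, "run" | "get") => Err(format!(
            "use `aofctl {verb} {resource}`, not `aofctl {resource} {verb}`"
        )),
        _ => Err("unknown command".to_string()),
    }
}

fn main() {
    assert_eq!(
        dispatch(&["run", "agent", "my-agent"]).unwrap(),
        "running agent my-agent"
    );
    assert!(dispatch(&["agent", "run", "my-agent"]).is_err());
}
```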