A plug-and-play multi-agent platform on Azure AI Foundry. Build, connect, and deploy AI agents that work together using open standards.
- A2A + MCP combined – connect agents across frameworks (A2A) and external tools (MCP) in one platform
- Drop-in agents – add a package to `agents/`, run `uv sync`, and the router auto-discovers it
- Declarative HITL – `@tool(approval_mode="always_require")`: one decorator, no custom approval code
- 3-layer middleware – InputGuard, logging, and sensitive data masking with early termination
- Checkpoint + resume – pause and resume conversations across process restarts
- Azure-native – Managed Identity, VNet isolation, Azure AI Foundry integration
Built on Microsoft Agent Framework with HandoffBuilder orchestration.
Note: Microsoft Agent Framework is currently in Release Candidate (RC5) and A2A support is in beta. This platform tracks the latest pre-release versions. Pin your dependencies in production.
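The declarative HITL pattern can be pictured with a simplified stand-in decorator. This is an illustration of the idea only; the real `@tool` decorator ships with the platform's shared library and behaves differently:

```python
# Simplified sketch of declarative tool approval. Illustrative only:
# the platform's actual @tool decorator is not implemented like this.
from functools import wraps


def tool(approval_mode: str = "never_require"):
    """Mark a function as an agent tool, optionally gated behind human approval."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, approved: bool = False, **kwargs):
            if approval_mode == "always_require" and not approved:
                # In a real runtime this would surface an approval request
                # to a human instead of returning a sentinel value.
                return {"status": "pending_approval", "tool": func.__name__}
            return func(*args, **kwargs)
        wrapper.approval_mode = approval_mode
        return wrapper
    return decorator


@tool(approval_mode="always_require")
def create_ticket(summary: str) -> dict:
    """A destructive operation: only runs once a human has approved it."""
    return {"status": "created", "summary": summary}
```

The point of the pattern is that the tool body contains no approval logic at all; the decorator (and in the real platform, the runtime behind it) intercepts the call.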
```mermaid
graph LR
    User([User]) --> Triage[Triage Agent]
    subgraph "Local Agents"
        Triage -->|handoff| Helpdesk[Helpdesk Agent]
        Triage -->|handoff| Knowledge[Knowledge Agent]
        Triage -->|handoff| DataAnalyst[Data Analyst]
        Triage -->|handoff| Incident[Incident Triage]
        Triage -->|handoff| CodeReview[Code Reviewer]
        Triage -->|handoff| InfraAnalyzer[Infra Analyzer]
    end
    subgraph "Remote Agents - A2A Protocol"
        Triage -.->|A2A| Legal[Legal Agent]
        Triage -.->|A2A| Finance[Finance Agent]
    end
    subgraph "External Tools - MCP"
        Helpdesk -->|MCP| Confluence[Confluence]
        Helpdesk -->|MCP| Jira[Jira]
    end
    Helpdesk & Knowledge & DataAnalyst & Incident & CodeReview & InfraAnalyzer & Legal --> Triage
```
- User sends a question – via CLI, DevUI, or API
- Triage agent analyzes and routes – reads each agent's description to pick the right specialist
- Specialist uses its tools – KB search, SQL queries, file search, MCP tools, or A2A calls
- Response flows back – through the triage agent to the user
New agents are auto-discovered. Drop a package in `agents/`, run `uv sync`, and the router picks it up.
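Auto-discovery can be sketched roughly like this. The directory layout and the `pyproject.toml` marker are assumptions for illustration, not the platform's actual registry logic:

```python
# Illustrative sketch of directory-based agent discovery. Assumption: a
# subdirectory of agents/ counts as an agent package if it contains a
# pyproject.toml (i.e. it is installable into the uv workspace).
from pathlib import Path


def discover_agents(agents_dir: str) -> list[str]:
    """Return the names of agent packages found under agents_dir."""
    root = Path(agents_dir)
    if not root.is_dir():
        return []
    return sorted(
        p.name
        for p in root.iterdir()
        if p.is_dir() and (p / "pyproject.toml").exists()
    )
```

Because discovery is convention-based, there is no manifest to edit: adding or deleting a directory is enough for the router to see the change on restart.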
- A2A protocol – connect agents across frameworks and services (demo)
- MCP integration – connect Confluence, Jira, SharePoint without code
- Plugin architecture – scaffold → configure → deploy
- Auto-discovery – no manifest or registry needed
- Human-in-the-loop – one decorator: `@tool(approval_mode="always_require")`
- RAG built-in – drop documents in `knowledge/`, upload to vector store
- Checkpointing – resume conversations after interruptions
- Middleware stack – InputGuard, logging, sensitive data masking
- Terraform IaC – POC (~$10/mo) and enterprise environments
- Observability – OpenTelemetry + Aspire Dashboard
Prerequisites
- Azure subscription – free trial
- Azure AI Foundry resource with a model deployment (e.g., `gpt-4.1`)
- Python 3.13+ – download
- uv package manager – `pip install uv`
- Azure CLI – install, then `az login`
Click Open in GitHub Codespaces above. Everything is pre-configured.
```shell
pip install uv
git clone https://github.com/b-franken/agent-platform.git
cd agent-platform
uv sync
cp .env.example .env
# Edit .env – set AZURE_AI_PROJECT_ENDPOINT and AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME
az login
uv run python -m agent_core.validate
uv run --package router-agent python -m router.main
```

You should see:
```
Agent Platform – Interactive Mode
Discovered agents: helpdesk, knowledge-agent, data-analyst
Type your question (or 'quit' to exit):
You >
```
```shell
azd up  # Deploys infrastructure + application in one command
```

```shell
uv run python -m agent_core.scaffold my-agent --description "What it does"
# Edit agents/my-agent/src/my_agent/tools.py and config.py
uv sync
uv run python -m agent_core.validate
# Restart – the router auto-discovers your new agent
```

See docs/adding-agents.md for the full guide with examples.
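A scaffolded `tools.py` might end up containing something like the following hypothetical tool. The function name and demo data are invented for illustration; see docs/adding-agents.md for the real plugin contract (in the real file, tools are decorated with the platform's `@tool` decorator):

```python
# Hypothetical agent tool for a scaffolded package. The function name,
# signature, and demo data are invented; only the shape is illustrative:
# plain functions with typed parameters, docstrings the router can read,
# and structured dict results.

def check_order_status(order_id: str) -> dict:
    """Look up an order's shipping status.

    Uses a local demo table as a stand-in for a real backend, matching the
    platform's out-of-the-box convention of local demo data.
    """
    demo_orders = {"A-100": "shipped", "A-101": "processing"}
    status = demo_orders.get(order_id)
    if status is None:
        return {"found": False, "order_id": order_id}
    return {"found": True, "order_id": order_id, "status": status}
```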
| Agent | What it demonstrates | Data source | Framework Features |
|---|---|---|---|
| helpdesk | IT troubleshooting + ticket management | YAML KB, SQLite | @tool, approval_mode HITL |
| knowledge-agent | Company docs search with citations | Markdown files | RAG, file_search, vector store |
| data-analyst | Natural language SQL queries | SQLite sample DB | Schema discovery, input validation |
| expense-approver | Budget checks + expense submission | SQLite | approval_mode, HITL |
| incident-triage | Severity classification + runbook lookup | Keyword matching | Structured Output (response_format) |
| code-reviewer | Code quality + security scanning | Regex patterns | Deterministic analysis, streaming |
| infra-analyzer | Terraform scanning + remediation | Regex HCL matching | Tool Approval HITL |
| router | Triage routing to specialists | β | HandoffBuilder, auto-discovery |
Note: All agents use local demo data (YAML, SQLite, regex) – no external API keys needed. This is intentional: the platform works out of the box. See docs/production.md for connecting real data sources.
- A2A demo – cross-framework agents communicating via the A2A protocol
- MCP server – expose tools via Model Context Protocol
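In A2A, a remote agent advertises itself through an "agent card": a JSON document the server publishes (under a well-known path) describing its identity and skills, which the caller reads to decide what to route there. A small sketch of consuming such a card; the field names follow the card's general shape but treat this as illustrative, not a client implementation:

```python
# Illustrative consumer of an A2A agent card (already fetched and parsed
# into a dict). Field names ("name", "skills") follow the card's general
# shape; this is not a spec-complete client.

def summarize_agent_card(card: dict) -> str:
    """One-line summary of what a remote agent offers."""
    name = card.get("name", "unknown")
    skills = [s.get("name", "?") for s in card.get("skills", [])]
    return f"{name}: " + (", ".join(skills) if skills else "no skills listed")
```

A triage agent can use exactly this kind of summary, alongside local agents' descriptions, when deciding where to hand off.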
- Start here – study `helpdesk` (YAML KB, SQLite tickets, HITL)
- Build your first agent – Tutorial (~15 min)
- Structured output – study `incident-triage` (Pydantic models as `response_format`)
- Security scanning – study `code-reviewer` and `infra-analyzer`
- Multi-agent orchestration – study `router` (HandoffBuilder, auto-discovery)
- Cross-service agents – run the A2A demo and MCP server
- Deploy – Deployment guide
```
agents-platform/
├── packages/agent-core/       # Shared library (config, factory, middleware, registry)
├── agents/
│   ├── router/                # Triage + HandoffBuilder orchestration
│   ├── helpdesk/              # IT troubleshooting, KB search, ticket creation
│   ├── knowledge-agent/       # Company docs search with RAG citations
│   ├── data-analyst/          # Natural language SQL queries
│   ├── expense-approver/      # Human-in-the-loop demo
│   ├── incident-triage/       # Structured output + context providers
│   ├── code-reviewer/         # Code quality + security analysis
│   └── infra-analyzer/        # Terraform scanning + tool approval HITL
├── examples/
│   ├── a2a-demo/              # A2A cross-framework demo
│   └── mcp-server/            # MCP server example
├── infra/                     # Terraform with Azure Verified Modules
├── deployment/azurefunctions/ # Azure Functions deployment
├── scripts/                   # Setup and pre-flight checks
├── tests/                     # Unit tests
├── evals/                     # Agent quality evaluations
└── docs/                      # Documentation
```
| Guide | Description |
|---|---|
| Getting Started | Setup and first run |
| Tutorial: Build Your First Agent | Step-by-step from zero |
| Key Concepts | Multi-agent systems, A2A, MCP, RAG explained |
| Adding Agents | The plugin contract and patterns |
| Architecture | How the platform works under the hood |
| Deployment | Local, Docker, Azure deployment options |
| From Demo to Production | Connect real data sources |
| Cost Optimization | Model selection and cost tips |
| Evaluations | Agent quality testing framework |
| Troubleshooting | Common issues and fixes |
| FAQ | Frequently asked questions |
| Method | Use case | Command |
|---|---|---|
| CLI | Local development | `uv run --package router-agent python -m router.main` |
| DevUI | Browser-based testing | `uv run devui --port 8080` |
| Docker | Containerized dev | `docker compose up` |
| azd | One-command Azure deploy | `azd up` |
| Terraform | Infrastructure only | `terraform apply` |
| Azure Functions | Serverless production | `func start` |
The default deployment uses Azure AI Foundry Agent Service (generally available). Foundry manages the agent runtime – scaling, observability, enterprise security – so you pay only for model tokens.
```shell
# Default: Foundry hosted (recommended)
azd up

# Alternative: self-hosted Container Apps
azd up -- -var="deployment_mode=container_apps"
```

See docs/deployment.md for details.
Cost estimate
- Development: ~$5-10/month (Azure AI Foundry + GPT-4.1-mini for routing)
- Foundry hosted: token cost only – no hosting fees, Foundry manages the runtime
- Container Apps (alternative): ~$10-92/month depending on networking
- Clean up: `azd down` or `terraform destroy`
See docs/cost-optimization.md for model selection tips.
The platform includes an eval framework for testing agent quality against a real Azure endpoint:
```shell
uv run pytest evals/ -m eval -v
```

Three suites: routing accuracy, tool selection, and response quality. See docs/evals.md.
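An eval in this style is just a marked, parametrized pytest case. The sketch below shows the routing-accuracy shape with an invented case list and a keyword-routing stub standing in for the live triage agent (real suites call the Azure endpoint):

```python
# Sketch of a routing-accuracy eval in the `pytest -m eval` style.
# The cases and the routing function are placeholders, not the real suite.
import pytest

ROUTING_CASES = [
    ("my laptop won't boot", "helpdesk"),
    ("what is our travel policy?", "knowledge-agent"),
    ("total sales by region last quarter", "data-analyst"),
]


def fake_route(question: str) -> str:
    """Stand-in for the triage agent: trivial keyword routing."""
    q = question.lower()
    if "policy" in q:
        return "knowledge-agent"
    if "sales" in q or "region" in q:
        return "data-analyst"
    return "helpdesk"


@pytest.mark.eval
@pytest.mark.parametrize("question,expected", ROUTING_CASES)
def test_routing_accuracy(question, expected):
    assert fake_route(question) == expected
```

The `-m eval` marker keeps these endpoint-hitting tests out of the ordinary unit-test run.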
This platform includes built-in guardrails:
- InputGuard middleware – enforces input length and conversation turn limits
- Sensitive data masking – tool arguments and results are masked in logs by default
- Human-in-the-loop – `approval_mode="always_require"` on destructive operations (e.g., ticket creation, infrastructure fixes)
- Grounded responses – RAG with citations, SQL with actual queries shown
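Sensitive-data masking amounts to running tool arguments and results through a set of redaction patterns before they reach the logs. A minimal sketch; the patterns below are examples and the platform's middleware has its own rules:

```python
# Illustrative log-masking helper. The regexes are example patterns only,
# not the platform's actual rule set.
import re

PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),       # email addresses
    re.compile(r"(?i)\b(sk|key|token)-[\w-]{8,}\b"),  # API-key-like tokens
]


def mask(text: str, placeholder: str = "***") -> str:
    """Replace anything matching a sensitive pattern before logging."""
    for pattern in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Applying this in middleware, rather than inside each tool, is what makes masking the default instead of something every agent author must remember.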
See TRANSPARENCY_FAQ.md for detailed transparency information and Microsoft Responsible AI principles.
- Microsoft Agent Framework – the framework this platform is built on
- Azure AI Foundry – the Azure service powering the agents
- A2A Protocol – the open standard for agent-to-agent communication
- Model Context Protocol (MCP) – the standard for agent-to-tool integration
- Microsoft Solution Accelerators – similar projects from Microsoft
We welcome contributions, whether it's a new agent, a bug fix, or a documentation improvement.
See CONTRIBUTING.md for guidelines. Good first issues are labeled good first issue.