Build AI agents that actually do things.
Combine local tools and MCP servers in a single, elegant runtime.
Write agents in 5 lines of code. Run them anywhere.
Instead of spending days wiring together LLMs, tools, and execution environments, Agentic Framework gives you a production-ready setup instantly.
- Write Less, Do More: Create a fully functional agent with just 5 lines of Python using the zero-config `@AgentRegistry.register` decorator.
- Context is King (MCP): Native integration with Model Context Protocol (MCP) servers to give your agents live data (web search, APIs, internal databases).
- Hardcore Local Tools: Built-in blazing-fast tools (`ripgrep`, `fd`, AST parsing) so your agents can explore and understand local codebases out of the box.
- Stateful & Resilient: Powered by LangGraph to support memory, cyclic reasoning, and human-in-the-loop workflows.
- Docker-First Isolation: Every agent runs in an isolated container, so there's no more "it works on my machine" when sharing with your team.
With a single command, the framework orchestrates three distinct AI sub-agents working together to plan a trip, built entirely in just 126 lines of Python.
- Available Out of the Box
- Quick Start (Zero to Agent in 60s)
- Build Your Own Agent
- Architecture
- CLI Reference
- Local Development
- See it in Action
- Contributing
The framework includes several pre-built agents for common use cases:
| Agent | Purpose |
|---|---|
| `developer` | Code Master: read, search & edit code |
| `travel-coordinator` | Trip Planner: orchestrates agents |
| `chef` | Chef: recipes from your fridge |
| `news` | News Anchor: aggregates top stories |
| `travel` | Flight Booker: finds the best routes |
| `simple` | Chat Buddy: vanilla conversational agent |
| `github-pr-reviewer` | PR Reviewer: reviews diffs, posts inline comments & summaries |
| `whatsapp` | WhatsApp Agent: bidirectional WhatsApp communication |
See docs/agents.md for detailed information about each agent, including configuration options and usage examples.
Fast, zero-dependency tools for working with local codebases:
| Tool | Capability |
|---|---|
| `find_files` | Fast file search via `fd` |
| `discover_structure` | Directory tree mapping |
| `get_file_outline` | AST signature parsing |
| `read_file_fragment` | Precise file reading |
| `code_search` | Fast content search via `ripgrep` |
| `edit_file` | Safe file editing |
See docs/tools.md for detailed documentation of each tool, including parameters and examples.
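As a rough illustration of what AST signature parsing involves (this is a sketch, not the framework's actual `get_file_outline` implementation), Python's standard-library `ast` module can extract top-level signatures from a source file:

```python
import ast


def outline(source: str) -> list[str]:
    """List top-level function and class signatures in Python source."""
    tree = ast.parse(source)
    entries = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            # Join positional argument names into a readable signature
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            entries.append(f"class {node.name}")
    return entries


print(outline("def f(x):\n    pass\n\nclass C:\n    pass\n"))
```

The advantage over regex-based scanning is that the parser understands nesting, so methods inside classes aren't mistaken for top-level functions.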
Model Context Protocol servers for extending agent capabilities:
| Server | Purpose |
|---|---|
| `kiwi-com-flight-search` | Search real-time flights |
| `webfetch` | Extract clean text from URLs & web search |
| `duckduckgo-search` | Web search via DuckDuckGo |
See docs/mcp-servers.md for details on each server and how to add custom MCP servers.
The framework supports 11 LLM providers out of the box, covering the most widely used cloud and local options:
| Provider | Type | Use Case |
|---|---|---|
| Anthropic | Cloud | State-of-the-art reasoning (Claude) |
| OpenAI | Cloud | GPT-4, GPT-4.1, o1 series |
| Azure OpenAI | Cloud | Enterprise OpenAI deployments |
| Google GenAI | Cloud | Gemini models via API |
| Google Vertex AI | Cloud | Gemini models via GCP |
| Groq | Cloud | Ultra-fast inference |
| Mistral AI | Cloud | European privacy-focused models |
| Cohere | Cloud | Enterprise RAG and Command models |
| AWS Bedrock | Cloud | Anthropic, Titan, Meta via AWS |
| Ollama | Local | Run LLMs locally (zero API cost) |
| Hugging Face | Cloud | Open models from Hugging Face Hub |
See docs/llm-providers.md for detailed setup instructions, environment variables, and provider comparison.
You need an LLM API key to breathe life into your agents. The framework supports 11 LLM providers via LangChain!
```bash
# Copy the template
cp .env.example .env

# Edit .env and paste your API key
# Choose one of the following providers:
# OPENAI_API_KEY=sk-your-key-here
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# GOOGLE_API_KEY=your-google-key
# GROQ_API_KEY=gsk-your-key-here
# MISTRAL_API_KEY=your-mistral-key-here
# COHERE_API_KEY=your-cohere-key-here

# For Ollama (local), no API key needed:
# OLLAMA_BASE_URL=http://localhost:11434

# For Azure OpenAI:
# AZURE_OPENAI_API_KEY=your-azure-key
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

# For Google Vertex AI:
# GOOGLE_VERTEX_PROJECT_ID=your-project-id

# For AWS Bedrock:
# AWS_PROFILE=your-profile

# For Hugging Face:
# HUGGINGFACEHUB_API_TOKEN=your-hf-token
```
Note: Set your preferred provider's API key. Priority: Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > Hugging Face > Ollama > OpenAI (default fallback).
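The priority chain above amounts to a first-match scan over environment variables. The following is an illustrative sketch of that selection logic, not the framework's actual implementation; the variable names follow the `.env` template above:

```python
# Illustrative sketch of provider auto-detection: walk a priority list and
# pick the first provider whose credential is set. Provider names here are
# labels for the example, not framework identifiers.
PRIORITY = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("google-vertex", "GOOGLE_VERTEX_PROJECT_ID"),
    ("google-genai", "GOOGLE_API_KEY"),
    ("azure", "AZURE_OPENAI_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("mistral", "MISTRAL_API_KEY"),
    ("cohere", "COHERE_API_KEY"),
    ("bedrock", "AWS_PROFILE"),
    ("huggingface", "HUGGINGFACEHUB_API_TOKEN"),
    ("ollama", "OLLAMA_BASE_URL"),
]


def detect_provider(env: dict) -> str:
    """Return the first configured provider, falling back to OpenAI."""
    for name, var in PRIORITY:
        if env.get(var):
            return name
    return "openai"  # default fallback
```

Because the scan stops at the first hit, setting both `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` selects Anthropic.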
No pip, no virtualenv, no "it works on my machine" excuses.
```bash
# Clone the repository
git clone https://github.com/jeancsil/agentic-framework.git
cd agentic-framework

# Build the Docker image
make docker-build

# Unleash your first agent!
bin/agent.sh developer -i "Explain this codebase"

# Or try the chef agent
bin/agent.sh chef -i "I have chicken, rice, and soy sauce. What can I make?"
```

Required Environment Variables
Only one provider's API key is required. The framework auto-detects which provider to use based on available credentials.
```bash
# Anthropic (Recommended)
ANTHROPIC_API_KEY=sk-ant-your-key-here

# OpenAI
OPENAI_API_KEY=sk-your-key-here

# Google GenAI / Vertex
GOOGLE_API_KEY=your-google-key
GOOGLE_VERTEX_PROJECT_ID=your-project-id

# Groq
GROQ_API_KEY=gsk-your-key-here

# Mistral AI
MISTRAL_API_KEY=your-mistral-key-here

# Cohere
COHERE_API_KEY=your-cohere-key-here

# Azure OpenAI
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

# AWS Bedrock
AWS_PROFILE=your-profile

# Ollama (Local, no API key needed)
OLLAMA_BASE_URL=http://localhost:11434

# Hugging Face
HUGGINGFACEHUB_API_TOKEN=your-hf-token
```

See docs/llm-providers.md for detailed environment variable configurations, model overrides, and provider comparison.
```python
from agentic_framework.core.langgraph_agent import LangGraphMCPAgent
from agentic_framework.registry import AgentRegistry


@AgentRegistry.register("my-agent", mcp_servers=["webfetch"])
class MyAgent(LangGraphMCPAgent):
    @property
    def system_prompt(self) -> str:
        return "You are my custom agent with the power to fetch websites."
```

Boom. Run it instantly:

```bash
bin/agent.sh my-agent -i "Summarize https://example.com"
```

Want to add your own Python logic? Easy.
```python
from langchain_core.tools import StructuredTool

from agentic_framework.core.langgraph_agent import LangGraphMCPAgent
from agentic_framework.registry import AgentRegistry


@AgentRegistry.register("data-processor")
class DataProcessorAgent(LangGraphMCPAgent):
    @property
    def system_prompt(self) -> str:
        return "You process data files like a boss."

    def local_tools(self) -> list:
        return [
            StructuredTool.from_function(
                func=self.process_csv,
                name="process_csv",
                description="Process a CSV file path",
            )
        ]

    def process_csv(self, filepath: str) -> str:
        # Magic happens here
        return f"Successfully processed {filepath}!"
```

Under the hood, we seamlessly bridge the gap between user intent and execution:
```mermaid
flowchart TB
    subgraph User [User Space]
        Input[User Input]
    end
    subgraph CLI [CLI - agentic-run]
        Typer[Typer Interface]
    end
    subgraph Registry [Registry]
        AR[AgentRegistry]
        AD[Auto-discovery]
    end
    subgraph Agents [Agents]
        Chef[chef agent]
        Dev[developer agent]
        Travel[travel agent]
    end
    subgraph Core [Core Engine]
        LGA[LangGraphMCPAgent]
        LG[LangGraph Runtime]
        CP[(Checkpointing)]
    end
    subgraph Tools [Tools & Skills]
        LT[Local Tools]
        MCP[MCP Tools]
    end
    subgraph External [External World]
        LLM[LLM API]
        MCPS[MCP Servers]
    end

    Input --> Typer
    Typer --> AR
    AR --> AD
    AR -->|Routes to| Chef & Dev & Travel
    Chef & Dev & Travel -->|Inherits from| LGA
    LGA --> LG
    LG <--> CP
    LGA -->|Uses| LT
    LGA -->|Uses| MCP
    LT -->|Reasoning| LLM
    MCP -->|Queries| MCPS
    MCPS -->|Provides Data| LLM
    LLM --> Output[Final Response]
```
Command your agents directly from the terminal.
```bash
# List all registered agents
bin/agent.sh list

# Get detailed info about what an agent can do
bin/agent.sh info developer

# Run an agent with input
bin/agent.sh developer -i "Analyze the architecture of this project"

# Run with an execution timeout (seconds)
bin/agent.sh developer -i "Refactor this module" -t 120

# Run with debug-level verbosity
bin/agent.sh developer -i "Hello" -v

# Access logs (same location as local)
tail -f agentic-framework/logs/agent.log

# Run the WhatsApp agent (requires config - see docs/agents.md)
agentic-run whatsapp --config config/whatsapp.yaml

# Run WhatsApp with custom settings
agentic-run whatsapp --allowed-contact "+1234567890" --storage ~/custom/path
```

Prefer running without Docker? We got you.
System Requirements & Setup
Requirements:
- Python 3.13+
- `ripgrep`, `fd`, `fzf`
```bash
# Install dependencies (blazingly fast with uv)
make install

# Run the test suite
make test

# Run agents directly in your environment
uv --directory agentic-framework run agentic-run developer -i "Hello"
```

Useful `make` Commands
```bash
make install  # Install dependencies with uv
make test     # Run pytest with coverage
make format   # Auto-format codebase with ruff
make check    # Strict linting (mypy + ruff)
```

We love contributions! Check out our AGENTS.md for development guidelines.
The Golden Rules:
- `make check` should pass without complaints.
- `make test` should stay green.
- Don't drop test coverage (we like our 80% mark!).
This project is licensed under the MIT License. See LICENSE for details.
Stand on the shoulders of giants:
If you find this useful, please consider giving it a β or buying me a coffee!