A simple LangChain agent where:
- Each bot has its own MCP server tools
- The LLM automatically decides which tools to use
- Tools are executed automatically based on the query
A simple agent that follows your pattern exactly:

```python
llm = OpenAI(temperature=0)
tools = [calculator_tool, *mcp_tools]  # the bot's MCP tools
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
response = agent.run(query)  # the LLM decides which tools to use and executes them
```

Example request:

```json
{
  "bot_id": "7e57e5b1-994a-44e4-973e-ed83ffa0ea51",
  "query": "What is 2 + 4?",
  "temperature": 0
}
```

What happens:

```
Query: What is 2 + 4?
Agent automatically chose: calculator tool
Action: calculator("2 + 4")
Result: 6
```
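The `calculator` tool's function isn't shown above. Here is a minimal sketch of what it might look like, assuming it evaluates simple arithmetic with Python's `ast` module rather than raw `eval` (the `safe_calculate` name and the set of supported operators are illustrative, not the actual implementation):

```python
import ast
import operator

# Map AST operator nodes to the arithmetic they perform.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_calculate(expression: str) -> str:
    """Safely evaluate an arithmetic expression like "2 + 4"."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError(f"Unsupported expression: {expression!r}")

    return str(_eval(ast.parse(expression, mode="eval")))
```

Never pass model-generated text to `eval`; walking the AST keeps the tool restricted to plain arithmetic.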
- Agent gets bot's MCP tools: Loads all registered MCP servers for the bot
- LLM decides automatically: Agent analyzes query and picks the right tool
- Tool executes: MCP server tool runs automatically
- Returns result: Agent provides the final answer
Direct Service:

```python
agent_service = MCPAgentService()
response = await agent_service.query_with_mcp_agent(
    bot_id="bot123",
    query="add 2 and 4",
    temperature=0,
)
```

API Call:

```shell
curl -X POST "http://localhost:8000/query-with-agent" \
  -H "Content-Type: application/json" \
  -d '{
    "bot_id": "7e57e5b1-994a-44e4-973e-ed83ffa0ea51",
    "query": "What is 15 * 3?",
    "temperature": 0
  }'
```

Your example:
```python
stock_tool = Tool(
    name="GetStockPrice",
    description="Look up the current stock price for a ticker symbol",
    func=lambda symbol: mcp_client.get_stock_price(symbol),
)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
response = agent.run("What is the current stock price of AAPL?")
```

My implementation:

```python
calculator_tool = Tool(
    name="calculator",
    description="Evaluate arithmetic expressions",
    func=self._calculate,
)
mcp_tools = [
    Tool(name=tool["name"], description=tool["description"],
         func=lambda params, t=tool: execute_mcp_tool(...))  # t=tool binds each lambda to its own tool
    for tool in registered_mcp_tools
]
agent = initialize_agent([calculator_tool, *mcp_tools], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
response = agent.run(query)
```

The agent automatically decides when to use MCP tools vs. built-in tools vs. just answering directly! 🎯
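That decision works because `ZERO_SHOT_REACT_DESCRIPTION` prompts the LLM to emit ReAct-formatted text: either an `Action:` / `Action Input:` pair naming a tool, or a `Final Answer:` when no tool is needed. A minimal, illustrative sketch of that parsing step (not LangChain's actual parser, which handles many more edge cases):

```python
import re

def parse_react_step(llm_output: str):
    """Return ("final", answer) or ("action", tool_name, tool_input)."""
    if "Final Answer:" in llm_output:
        return ("final", llm_output.split("Final Answer:")[-1].strip())
    m = re.search(r"Action:\s*(.*?)\nAction Input:\s*(.*)", llm_output, re.DOTALL)
    if not m:
        raise ValueError(f"Could not parse LLM output: {llm_output!r}")
    return ("action", m.group(1).strip(), m.group(2).strip())
```

This is why tool `description`s matter: they are the only thing the LLM sees when choosing which `Action:` to emit.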