A lightweight wrapper around Temporal Workflows
It lets you build AI agents that can:
- Execute long-running conversations with memory
- Coordinate multiple specialized sub-agents
- Handle function calls and tool integrations
- Recover from failures and resume execution
- Scale across distributed systems
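Function calling, for example, builds on plain typed Python functions: anything you can express as a function with type hints can be registered as a tool. A minimal illustrative tool (the name and behavior here are hypothetical, not from the bundled examples):

```python
# Illustrative tool: a typed, docstring-annotated function of the kind
# that can be registered on an agent via Agent(functions=[...]).
def convert_temperature(fahrenheit: float) -> str:
    """Convert a Fahrenheit reading to Celsius for the user."""
    celsius = (fahrenheit - 32) * 5 / 9
    return f"{fahrenheit}°F is {celsius:.1f}°C"
```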
## Architecture

```mermaid
graph TB
    subgraph "Application Layer"
        Console[UI/Console]
        Session[Session]
    end

    subgraph "Temporal Wrapper"
        Agent[Agent]
        Runner[Runner]
    end

    subgraph "Temporal Execution"
        Workflow[AgentWorkflow]
        LLMManager[LLMManager]
        Activities[Tool Activities]
    end

    subgraph "External Services"
        LLM[Vertex AI]
        Tools[External APIs<br/>Slack, GitHub, etc.]
    end

    Console --> Session
    Session --> Runner
    Runner --> Agent
    Agent --> Workflow
    Workflow --> LLMManager
    Workflow --> Activities
    LLMManager --> LLM
    Activities --> Tools

    Agent -.-> Agent2[Sub-Agents]
    Agent2 -.-> Agent3[Sub-Sub-Agents]
```
- **Agent**: Defines the AI model, instructions, functions, and sub-agents
- **Runner**: Manages the Temporal worker and workflow execution
- **Session**: Handles the workflow lifecycle and provides interaction methods
- **LLMManager**: Handles AI model calls as Temporal activities
## Quick Start

```python
import asyncio

from temporal.agent import Agent, Runner, Session, AgentConsole

# Define your tools
def get_weather(location: str) -> str:
    return f"Weather in {location}: Sunny, 72°F"

# Create an agent
agent = Agent(
    name="Weather Assistant",
    model_name="gemini-2.0-flash",
    instruction="You help users get weather information.",
    functions=[get_weather],
)

# Run with console interface
async def main():
    async with Runner(app_name="weather-app", agent=agent) as runner:
        async with Session(client=runner.client, agent=agent) as session:
            await AgentConsole(session=session).run()

asyncio.run(main())
```
## Multi-Agent Coordination

```python
# Create specialized sub-agents
# (search_function and SearchSchema are your own tool and input schema)
search_agent = Agent(
    name="Search Specialist",
    instruction="You search for information",
    functions=[search_function],
    input_schema=SearchSchema,
)

# Root agent coordinates sub-agents
root_agent = Agent(
    name="Research Assistant",
    instruction="You coordinate research tasks",
    sub_agents=[search_agent],
)

# Run with console interface
async def main():
    async with Runner(app_name="research-app", agent=root_agent) as runner:
        async with Session(client=runner.client, agent=root_agent) as session:
            await AgentConsole(session=session).run()
```

## Examples

- Customer Service - Simple agent with function calling
- GitHub Research - Technical code analysis across GitHub
- Single Agent Slack - Slack integration with one agent
- Multi-Agent Slack - Specialized agents for Slack research
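The multi-agent example above references `search_function` and `SearchSchema` without defining them; they stand for your own tool and input schema. A minimal hypothetical stand-in might look like the sketch below (a plain dataclass is used here for illustration — check the library for the exact schema contract it expects):

```python
from dataclasses import dataclass

# Hypothetical input schema for the Search Specialist agent.
@dataclass
class SearchSchema:
    query: str
    max_results: int = 5

# Toy search tool: echoes the query instead of calling a real API.
def search_function(query: str, max_results: int = 5) -> str:
    return f"Top {max_results} results for '{query}'"
```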
## Prerequisites

- Python 3.12+
- Temporal server
- Google Cloud Vertex AI access
## Getting Started

- Install uv if you haven't already:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Start the Temporal server:

  ```bash
  temporal server start-dev
  ```

- Set environment variables:

  ```bash
  export GCP_PROJECT_ID=your-project
  export GOOGLE_APPLICATION_CREDENTIALS=path/to/key.json
  ```

- Run an example:

  ```bash
  uv run python -m examples.customer_service.console
  ```