A Python library for building AI agents with progressive tool disclosure. Instead of loading all tools upfront, agents discover tools on demand, reducing token consumption by ~85% and scaling to 100+ tools.
- 🎯 Progressive Disclosure: Only load tool definitions when needed
- 🔍 Smart Discovery: Agents search for tools with keyword/regex matching
- ⚡ Type-Safe: Automatic JSON schema generation from Python type hints
- 🛠️ Simple API: Just add the `@tool` decorator to your functions
- 📝 Clear Errors: User-friendly validation and error messages
- 🤖 Mistral Integration: Built-in support for Mistral API
Note: This library is not yet published to PyPI. To use it, clone the repository and install locally.
```bash
# Clone the repository
git clone https://github.com/yourusername/agent-tools.git
cd agent-tools

# Install dependencies
uv sync
```

Quick start:

```python
from agent_tools import Agent, tool

# Define your tools with @tool decorator
@tool
def get_weather(city: str, units: str = "celsius") -> dict:
"""Get current weather for a city.
Args:
city: The name of the city.
units: Temperature units (celsius or fahrenheit).
Returns:
Dictionary with temperature, conditions, and humidity.
"""
return {"temp": 20, "conditions": "sunny", "humidity": 65}
@tool
def get_forecast(city: str, days: int = 3) -> list:
"""Get weather forecast for a city.
Args:
city: The name of the city.
days: Number of days to forecast (1-7).
Returns:
List of daily forecasts.
"""
return [{"day": i + 1, "temp": 20 + i, "conditions": "sunny"} for i in range(days)]
@tool
def convert_temperature(temp: float, from_unit: str, to_unit: str) -> float:
"""Convert temperature between units.
Args:
temp: Temperature value.
from_unit: Source unit (celsius or fahrenheit).
to_unit: Target unit (celsius or fahrenheit).
Returns:
Converted temperature.
"""
if from_unit == "celsius" and to_unit == "fahrenheit":
return (temp * 9/5) + 32
elif from_unit == "fahrenheit" and to_unit == "celsius":
return (temp - 32) * 5/9
return temp
# Create agent with your tools
# API key is read from MISTRAL_API_KEY environment variable
agent = Agent(tools=[get_weather, get_forecast, convert_temperature])
# Agent discovers and uses tools automatically
response = agent.run("What's the weather in Paris?")
print(response)
```

How it works:

- Tool Definition: Define tools with the `@tool` decorator (type hints and docstrings are required)
- Schema Generation: The `@tool` decorator automatically creates `ToolMetadata` with JSON schemas (see the sketch after this list)
- Progressive Disclosure: Agent starts with only `search_tools` in context
- Discovery: Agent searches for relevant tools when needed
- Execution: Agent calls tools with validated parameters
- Results: Tool results fed back to agent for final response
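For illustration, the schema generated for `get_weather` above might look roughly like the following. This is an assumed shape, inferred from the `DetailLevel.FULL` search results shown further down, not necessarily the library's exact internal representation:

```python
# Hypothetical illustration of an auto-generated schema (assumed shape,
# mirroring the DetailLevel.FULL search results shown later in this README).
{
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "The name of the city."},
            "units": {"type": "string", "description": "Temperature units (celsius or fahrenheit)."},
        },
        "required": ["city"],  # units has a default value, so it stays optional
    },
}
```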
Want to see it in action? Run simple_demo.py to see the exact messages sent to the LLM at each turn!
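As a rough sketch of the kind of exchange that demo prints (hypothetical messages only, not the library's actual wire format):

```python
# Hypothetical sketch of the discovery loop; the real messages printed by
# examples/simple_demo.py differ in wording and format.
conversation = [
    {"role": "user", "content": "What's the weather in Paris?"},
    # Only search_tools is in context, so the agent searches first.
    {"role": "assistant", "tool_call": {"name": "search_tools", "args": {"query": "weather"}}},
    # Matching tool names and descriptions enter the context.
    {"role": "tool", "content": '[{"name": "get_weather", "description": "Get current weather for a city."}]'},
    # With get_weather now disclosed, the agent calls it.
    {"role": "assistant", "tool_call": {"name": "get_weather", "args": {"city": "Paris"}}},
    {"role": "tool", "content": '{"temp": 20, "conditions": "sunny", "humidity": 65}'},
    # Final answer is produced from the tool result.
    {"role": "assistant", "content": "It's currently 20°C and sunny in Paris."},
]
```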
The token math:

- Traditional approach: load all N tools in the system prompt = N × 500 tokens
- Progressive disclosure: load only `search_tools` initially = ~200 tokens
- Token savings: ~85% for 10+ tools
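A back-of-envelope check of those numbers, assuming ~500 tokens per tool definition and that the agent loads one full schema after discovery:

```python
# Rough arithmetic behind the ~85% figure above (assumptions: ~500 tokens
# per tool definition, one full schema loaded after discovery).
def prompt_tokens(n_tools: int, tokens_per_tool: int = 500) -> tuple[int, int]:
    traditional = n_tools * tokens_per_tool      # every tool described upfront
    progressive = 200 + tokens_per_tool          # search_tools plus one discovered tool
    return traditional, progressive

traditional, progressive = prompt_tokens(10)
print(traditional, progressive)                          # 5000 700
print(f"savings: {1 - progressive / traditional:.0%}")   # savings: 86%
```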
This enables:
- Scaling to 100+ tools without context explosion
- Faster response times (fewer tokens to process)
- More efficient token usage (only load what's needed)
If you prefer not to use the decorator:
```python
from agent_tools import Agent, generate_schema

def get_weather(city: str) -> dict:
    """Get current weather."""
    return {"temp": 20, "conditions": "sunny"}

# Explicitly generate schema
agent = Agent(tools=[generate_schema(get_weather)])
```

You can also plug in a custom search function:

```python
from agent_tools import Agent, DetailLevel

def custom_search(query: str, detail_level: str) -> list[dict]:
    # Your custom search logic
    # Should return list of tool metadata dicts
    pass

agent = Agent(tools=[...], search_fn=custom_search)
```

You can also set a custom system prompt:

```python
agent = Agent(
    tools=[get_weather, get_forecast],
    system_prompt="You are a specialized agent for weather queries."
)
```

You can also call `search_tools` directly with different detail levels:

```python
from agent_tools import search_tools, DetailLevel, tool

@tool
def get_weather(city: str) -> dict:
"""Get current weather."""
return {}
@tool
def get_time(timezone: str) -> str:
"""Get current time."""
return ""
# Create tool list
my_tools = [get_weather, get_time]
# Search with different detail levels
# Just names
results = search_tools(my_tools, "weather", DetailLevel.NAMES)
# [{"name": "get_weather"}, ...]
# Names + descriptions
results = search_tools(my_tools, "weather", DetailLevel.DESCRIPTIONS)
# [{"name": "get_weather", "description": "..."}, ...]
# Full schemas
results = search_tools(my_tools, "weather", DetailLevel.FULL)
# [{"name": "get_weather", "description": "...", "parameters": {...}}, ...]See the examples/ directory:
- `simple_demo.py`: See progressive disclosure in action! Prints the message history showing how the agent discovers and uses tools
Run the example:

```bash
export MISTRAL_API_KEY=your_key_here
uv run python examples/simple_demo.py
# Or view example output without API key
cat examples/simple_demo_output.md
```

For development:

```bash
# Install dependencies
uv sync
# Run tests
uv run pytest
# Run tests with coverage
uv run pytest --cov=src --cov-report=term-missing
# Type checking
uv run mypy src/
# Linting
uv run ruff check src/
# Formatting
uv run ruff format src/
```

Requirements:

- Python 3.12+
- Mistral API key
License: MIT
Built with Claude - conversational AI-assisted development.
Based on research from Anthropic on progressive tool disclosure for AI agents.