An AI agent responsible for installing all the necessary tools and dependencies and running the app, built with Python and LangGraph.
The easiest way to get started with `setup-agent` is to use pipx to install the agent as a CLI tool:

```shell
brew install pipx
pipx install setup-agent
setup-agent setup run
```

Install `uv` according to the Astral docs. Then, in your terminal, run the following commands:
```shell
uv sync                    # install all dependencies
uv run pre-commit install  # install pre-commit hooks
```

For testing purposes, run:
```shell
source .venv/bin/activate
python src/cli/app.py setup run
```

To install the CLI globally, use the provided installation scripts:
```shell
uv build  # or ./scripts/build.sh
./scripts/install.sh
```

Then you can use the tool globally by running:
```shell
setup-agent setup run
```

To uninstall the tool globally:
```shell
./scripts/uninstall.sh
```

To make the tool work, you will need two API keys:
- `TAVILY_API_KEY` - an API key for Tavily (the web search layer for agents)
- an API key for your LLM provider, e.g. `OPENAI_API_KEY` for OpenAI models (e.g. `gpt-4o`) or `ANTHROPIC_API_KEY` for Anthropic models (e.g. `claude-sonnet-4-5`). For the full list of possible integrations, please check out the LangChain docs (and make sure to install the LangChain package for the given provider using `uv` if you're using a provider other than OpenAI or Anthropic).
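As a quick sanity check before running the agent, you can verify from Python that the required keys are visible in the environment. This snippet is illustrative only: the variable names match the ones above, but the `check_keys` helper is hypothetical and not part of setup-agent.

```python
import os

def check_keys() -> list[str]:
    """Hypothetical helper: return a list of missing API keys.

    TAVILY_API_KEY is always required; at least one LLM provider key
    (OpenAI or Anthropic in this sketch) must also be present.
    """
    missing = []
    if not os.environ.get("TAVILY_API_KEY"):
        missing.append("TAVILY_API_KEY")
    if not any(os.environ.get(k) for k in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY")):
        missing.append("OPENAI_API_KEY or ANTHROPIC_API_KEY")
    return missing
```

If the returned list is non-empty, the agent would not be able to search the web or call an LLM, so it is worth checking before a long run.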
The easiest way to use the tool is to add these two API keys to a `.env` file located in the directory where the CLI is used:
```shell
# .env
TAVILY_API_KEY=...
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
```

For extended debugging and tool usage metrics, please consider adding these LangSmith-related variables:
```shell
# .env
LANGSMITH_TRACING=...
LANGSMITH_ENDPOINT=...
LANGSMITH_API_KEY=...
LANGSMITH_PROJECT=...
```

The CLI runs an interactive script that lets the user configure everything and define the problem:
| Option | Type | Description | Default | Required |
|---|---|---|---|---|
| Project root | `str` | Path to the root of the project (must be a valid directory) | Current directory | Yes |
| Guideline files | `List[str]` | Optional list of guideline files. If no guideline files are specified at this point, the agent will suggest all relevant files to the user. | - | No |
| Task | `str` | Predefined goal for the agent to achieve. If no task is defined, the agent will suggest some tasks to the user. | - | No |
| Model | `str` | LLM model used as the core reasoning model. For a full list of available models, check out the LangChain docs. | `anthropic:claude-sonnet-4-5` | No |
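Conceptually, the answers collected by the interactive script form a small configuration object. The sketch below is a hypothetical illustration using the names and defaults from the table above, including the "valid directory" check on the project root; it is not setup-agent's actual internal structure.

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class SetupConfig:
    """Hypothetical mirror of the interactive prompts (names/defaults from the table)."""
    project_root: Path = Path(".")                            # must be a valid directory
    guideline_files: list = field(default_factory=list)       # empty -> agent suggests files
    task: str = ""                                            # empty -> agent suggests tasks
    model: str = "anthropic:claude-sonnet-4-5"                # default reasoning model

    def __post_init__(self):
        # Reject project roots that are not existing directories.
        self.project_root = Path(self.project_root)
        if not self.project_root.is_dir():
            raise ValueError(f"Project root is not a valid directory: {self.project_root}")
```

Only the project root is required; everything else falls back to a default or to a suggestion from the agent.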
The script allows for optional, extended model configuration as well:
| Option | Type | Description | Default | Required |
|---|---|---|---|---|
| Temperature | `float` | Controls the randomness of the model. Must be a number between 0.0 and 1.0. | Model's default value | No |
| Max output tokens | `int` | Maximum number of output tokens for the model. | Model's default value | No |
| Timeout | `int` | Timeout for LLM calls, in seconds. | Model's default value | No |
| Max retries | `int` | Maximum number of retries for LLM calls. | Model's default value | No |
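These optional settings are ultimately passed through to the underlying LLM client, with unset values falling back to the model's own defaults. The helper below is a hypothetical sketch of how such options could be collected into keyword arguments, including the 0.0-1.0 temperature check from the table; it is not taken from setup-agent's source.

```python
def build_model_kwargs(temperature=None, max_tokens=None, timeout=None, max_retries=None) -> dict:
    """Illustrative helper: gather the optional model settings into a kwargs dict.

    Unset (None) values are dropped so the model's defaults apply, matching
    the "Model's default value" column in the table above.
    """
    if temperature is not None and not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    raw = {
        "temperature": temperature,
        "max_tokens": max_tokens,
        "timeout": timeout,
        "max_retries": max_retries,
    }
    return {k: v for k, v in raw.items() if v is not None}
```

Because unset options are simply omitted, a user who skips every prompt gets the provider's stock behavior rather than hard-coded overrides.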