This project reproduces the agent code from the Fly.io blog article: You Should Write An Agent.
> [!CAUTION]
> The code in this repository is purely vibe-coded and was created just for fun during a late-evening session. Do not use it for anything serious or for real products!
> [!IMPORTANT]
> Instead of using OpenAI, this project uses the SAIA API endpoint, which is fully OpenAI-compatible. SAIA is the Scalable Artificial Intelligence Accelerator developed by the GWDG (where I also work). You can request access via our service catalog if you meet the user requirements for the AI service center KISSKI.
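Because SAIA is OpenAI-compatible, a Chat Completions request is just standard HTTP against its base URL. Here is a minimal standard-library sketch; the payload shape follows the OpenAI Chat Completions API, and `build_request` is a hypothetical helper, not code from this repo:

```python
import json
import urllib.request

SAIA_URL = "https://chat-ai.academiccloud.de/v1/chat/completions"

def build_request(api_key, model="qwen3-32b"):
    """Build an OpenAI-style chat request against the SAIA endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "What is 2+2?"}],
    }
    return urllib.request.Request(
        SAIA_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# With a valid key: urllib.request.urlopen(build_request(api_key))
```

In practice the agents use the `openai` Python client pointed at the same base URL, but the wire format is identical.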
This repository contains three implementations:
- `agent.py` - Basic chat agent (ChatGPT-like)
- `agent_multi.py` - Multi-personality agent (Alph tells the truth, Ralph tells lies)
- `agent_tools.py` - Tool-enabled agent with a ping capability
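All three agents share the same core shape: keep a message history and loop over model calls. A minimal sketch of that loop, where `call_model` stands in for the actual Chat Completions request and the function names are assumptions, not the repo's code:

```python
def new_history(system_prompt="You are a helpful assistant."):
    """Start a conversation with a system prompt (context persists per session)."""
    return [{"role": "system", "content": system_prompt}]

def chat_turn(call_model, history, user_input):
    """Append the user turn, ask the model, record and return its reply."""
    history.append({"role": "user", "content": user_input})
    reply = call_model(history)  # e.g. client.chat.completions.create(...)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because every turn is appended to `history`, the model sees the full conversation on each call.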
Requirements:

- Docker and Docker Compose
- API key for the academiccloud endpoint
The agents are configured to always use the SAIA API endpoint: `https://chat-ai.academiccloud.de/v1`
- Set your API key as an environment variable:

  ```bash
  export OPENAI_API_KEY=your-api-key-here
  ```

- Or create a `.env` file:

  ```
  OPENAI_API_KEY=your-api-key-here
  ```
Environment Variables:

- `OPENAI_API_KEY` or `API_KEY` - Your API key (required)
- `MODEL` - Model name to use (defaults to `qwen3-32b`)
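The variable precedence above can be sketched as a small config helper; `resolve_config` is a hypothetical name for illustration, the defaults match this README:

```python
import os

def resolve_config(env=None):
    """Resolve API key and model: OPENAI_API_KEY wins over API_KEY,
    and MODEL falls back to qwen3-32b (sketch, not the repo's code)."""
    env = os.environ if env is None else env
    api_key = env.get("OPENAI_API_KEY") or env.get("API_KEY")
    if not api_key:
        raise RuntimeError("Set OPENAI_API_KEY (or API_KEY) for the SAIA endpoint")
    return api_key, env.get("MODEL", "qwen3-32b")
```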
Troubleshooting 401 Unauthorized Error:

- Make sure your API key is set: `echo $OPENAI_API_KEY`
- Verify the API key is correct and valid for the academiccloud endpoint
- When using Docker, ensure the environment variable is passed:

  ```bash
  docker-compose run --rm -e OPENAI_API_KEY=$OPENAI_API_KEY agent
  ```
Run the basic chat agent (use `run` for interactive input):

```bash
docker-compose run --rm agent
```

Or build and run manually:

```bash
docker build -t aiagent .
docker run -it --rm \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e MODEL=$MODEL \
  aiagent python agent.py
```

Run the multi-personality agent:

```bash
docker-compose run --rm agent-multi
```

Or manually:

```bash
docker run -it --rm \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e MODEL=$MODEL \
  aiagent python agent_multi.py
```

Run the agent with the ping tool:

```bash
docker-compose run --rm agent-tools
```

Or manually:

```bash
docker run -it --rm \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e MODEL=$MODEL \
  aiagent python agent_tools.py
```

Note: Use `docker-compose run` instead of `docker-compose up` for interactive sessions that require input. The `run` command properly handles stdin/stdout for interactive terminals.
```
> What is 2+2?
>>> 4

> Who are you?
>>> I'm not Ralph. (or) Yes—I'm Alph. How can I help?

> describe our connectivity to google
>>> [Agent will ping google.com, www.google.com, and 8.8.8.8 automatically]
```
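One plausible way the Alph/Ralph behavior in the transcript could be wired is a random choice between two system prompts at session start. The prompt wording below is an assumption for illustration, not `agent_multi.py`'s actual text:

```python
import random

# Two personas from the transcript: Alph is truthful, Ralph lies.
PERSONALITIES = {
    "Alph": "You are Alph. You always tell the truth.",
    "Ralph": "You are Ralph. You always lie.",
}

def pick_personality(rng=random):
    """Pick one persona and return its name plus the seeded message history."""
    name = rng.choice(sorted(PERSONALITIES))
    return name, [{"role": "system", "content": PERSONALITIES[name]}]
```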
- The code uses the `qwen3-32b` model by default
- The OpenAI API structure matches the current Chat Completions API
- Tool calls are handled automatically in a loop until the agent is satisfied
- All agents maintain conversation context throughout the session
- The API endpoint is hardcoded to `https://chat-ai.academiccloud.de/v1`
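The tool-call loop mentioned above (run tools until the model stops asking for them) can be sketched generically. Here `call_model`, the message shapes, and the reply format are assumptions for illustration, not the repo's actual interfaces:

```python
import subprocess

def run_ping(host, count=1):
    """Run the ping tool once and return its textual output for the model."""
    proc = subprocess.run(["ping", "-c", str(count), host],
                          capture_output=True, text=True)
    return proc.stdout or proc.stderr

def tool_loop(call_model, tools, messages):
    """Call the model repeatedly, executing each requested tool and feeding
    the result back, until it returns plain text (i.e., it is satisfied)."""
    while True:
        reply = call_model(messages)  # dict: {"content": ...} or {"tool_call": ...}
        if "tool_call" not in reply:
            return reply["content"]
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})

TOOLS = {"ping": run_ping}
```

With the real API this maps onto the Chat Completions `tool_calls` mechanism; the loop shape is the same.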
To run locally without Docker:

```bash
pip install -r requirements.txt
export OPENAI_API_KEY=your-key
python agent.py  # or agent_multi.py or agent_tools.py
```