AI-powered bash command generator using ollama.
Describe what you need in natural language, get a bash command back.
Install uv (if not already installed):

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Install ollama (the model will be downloaded on first run):

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Install ai-cli:

```shell
uv tool install git+https://github.com/3amyatin/ai-cli
```

To upgrade to the latest version:

```shell
uv tool upgrade ai-cli
```

Usage examples:

```shell
ai list all jpg files larger than 10mb
ai compress directory into tar.gz
ai find all python files modified today
```

The tool displays the generated command and prompts: [E]xecute (run it), [C]opy (copy to clipboard), or [A]bort.
Options:

- `-v` — show explanation before the command
- `-m MODEL` — use a specific model for this run
- `-M MODEL` — use a specific model and save it as default
- `-i`/`--interactive` — interactively pick a model and save it as default
- `--` — separator: everything after it is task text, not parsed as options
Settings are stored in `~/.config/ai-cli/config.toml`:

```toml
model = "glm-5:cloud"
context = "Projects: ~/Documents/dev/. Server: d1.example.com (Docker, SSH). Python: use uv."
```

The `context` field adds custom environment info to the LLM prompt, so it can generate commands tailored to your setup (server names, project paths, tool preferences).
The system prompt also auto-detects your OS, architecture, shell, available tools (Homebrew, uv, Docker), working directory, and home path.
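The kind of environment detection described above can be sketched in plain shell (illustrative only; this is not the project's actual detection code):

```shell
# Collect the environment facts the system prompt describes:
# OS, architecture, shell, working directory, home path, available tools.
echo "OS:    $(uname -s)"
echo "Arch:  $(uname -m)"
echo "Shell: ${SHELL:-unknown}"
echo "CWD:   $PWD"
echo "Home:  $HOME"
# Probe for the optional tooling mentioned in the prompt
for tool in brew uv docker; do
  command -v "$tool" >/dev/null 2>&1 && echo "Tool:  $tool available"
done
```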
All interactions are logged to `~/.config/ai-cli/history.jsonl`:

```json
{"ts": "2026-03-28T12:00:00+00:00", "task": "find large files", "model": "glm-5:cloud", "command": "find . -size +100M", "action": "execute"}
```

Environment variables:

- `AI_MODEL` — ollama model name (overrides config file)
- `OLLAMA_HOST` — ollama server URL (default: `http://localhost:11434`)
Priority: `-m`/`-M` flag > `-i` > `AI_MODEL` env var > config file > `glm-5:cloud`
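The non-interactive part of this precedence can be illustrated with a small shell sketch (an illustration of the order, not ai-cli's actual code; the `-m`/`-M` and `-i` cases are omitted since they are handled by the CLI itself):

```shell
# Resolve the model the way the priority list describes:
# AI_MODEL env var, else the config file's "model" key, else glm-5:cloud.
CONFIG="${CONFIG:-$HOME/.config/ai-cli/config.toml}"
model_from_config=$(sed -n 's/^model *= *"\(.*\)"/\1/p' "$CONFIG" 2>/dev/null)
MODEL="${AI_MODEL:-${model_from_config:-glm-5:cloud}}"
echo "using model: $MODEL"
```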
The default model is glm-5:cloud (cloud-hosted, no local GPU required). You can use any model available in ollama — both local and cloud-hosted.
To switch the model for one run:
```shell
ai -m qwen2.5:7b find large files in home directory
```

To switch and save as default:

```shell
ai -M gemini-3-flash-preview find large files in home directory
```

Models tested with ai-cli (March 2026):

| Model | Type | Avg latency | Quality | Notes |
|---|---|---|---|---|
| `glm-5:cloud` (default) | cloud | ~2s | correct | fastest, consistent |
| `gemini-3-flash-preview` | cloud | ~3s | correct | occasional output artifacts |
| `minimax-m2.5:cloud` | cloud | ~4s | correct | reliable |
| `qwen2.5:7b` | local | ~4s | correct | best local, no network needed |
| `llama3` | local | ~5s | correct | solid local alternative |
| `deepseek-coder-v2:16b` | local | ~9s | correct | needs 9GB RAM |
| `qwen3.5:cloud` | cloud | ~18s | correct | slow |
To find the best local model for your hardware, try llm-checker:
```shell
uvx llm-checker
```

How it works:

- You describe a task in natural language: `ai find large files in home directory`
- The CLI sends your prompt to an ollama model (local or cloud)
- The model returns a bash command (and optionally an explanation with `-v`)
- The CLI shows the command and asks for confirmation before executing
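That round trip can be sketched with curl against ollama's `/api/generate` endpoint (an assumed request shape for illustration, not the project's actual implementation; `"stream": false` makes ollama return a single JSON object):

```shell
# Send the task to ollama and print the model's reply from the "response" field.
HOST="${OLLAMA_HOST:-http://localhost:11434}"
curl -fsS "$HOST/api/generate" \
  -d '{"model": "glm-5:cloud", "prompt": "Reply with one bash command only: find large files in home directory", "stream": false}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'
```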
Limitations:
- Requires a running ollama instance (local or remote via `OLLAMA_HOST`)
- Output quality depends on the chosen model and prompt clarity
- Generated commands target the detected shell — may need adaptation for other shells
- Cloud model latency depends on network and provider load
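Given the first limitation, a quick reachability check can save a confusing failure (assumes curl is installed; `/api/tags` is ollama's model-listing endpoint):

```shell
# Report whether an ollama server answers at the configured host.
HOST="${OLLAMA_HOST:-http://localhost:11434}"
if curl -fsS "$HOST/api/tags" > /dev/null 2>&1; then
  echo "ollama reachable at $HOST"
else
  echo "no ollama server at $HOST (try 'ollama serve' or set OLLAMA_HOST)" >&2
fi
```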
Clone and set up the dev environment:
```shell
git clone https://github.com/3amyatin/ai-cli
cd ai-cli
uv sync
```

Install as an editable uv tool — the `ai` command links directly to your source code, so any code changes take effect immediately without reinstalling:

```shell
uv tool install -e .
```

Run tests and lint:

```shell
just test    # or: uv run pytest
just lint    # or: uv run ruff check ai_cli/ tests/
just fmt     # or: uv run ruff format ai_cli/ tests/
just check   # lint + format
```

Other useful recipes:

```shell
just            # list all available recipes
just run <args> # run the CLI via uv (e.g., just run -v list files)
just update     # upgrade and sync dependencies
just fix        # auto-fix lint issues
```