---
layout: page
title: FAQ
permalink: /faq/
---
Frequently asked questions about mini-a. Can't find your answer? Open an issue.
{: .faq-section}
mini-a is a minimalist autonomous agent framework built on OpenAF. It connects to LLMs (like GPT-5, Gemini, Claude, or local models via Ollama), uses tools through the MCP protocol, and can execute shell commands to achieve goals you define — all from a single command.
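As a sketch of that single-command flow (the model string and key are placeholders, following the `OAF_MODEL` examples elsewhere on this page):

```shell
# Configure a model (placeholder key), then state a goal in one command
export OAF_MODEL="(type: openai, model: gpt-5.2, key: '...')"
mini-a goal='List the three largest files in this directory'
```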
| Feature | mini-a | LangChain | AutoGPT | CrewAI |
|---|---|---|---|---|
| Setup complexity | 1 command | pip + config | Docker + config | pip + config |
| Lines to first agent | 1 | 20-50 | 10-20 | 30-50 |
| LLM providers | 10+ | 10+ | 3-5 | 5-10 |
| Built-in cost optimization | Dual-model | Manual | No | No |
| MCP support | 20+ built-in | Via plugins | No | No |
| Runtime footprint | Lightweight | Heavy | Heavy | Medium |
| Language | JavaScript/OpenAF | Python | Python | Python |
| Interfaces | Console, Web, Library, Docker | Library | Web | Library |
| Shell integration | Native | Via tools | Via plugins | Via tools |
Yes. mini-a is open source. You only pay for the LLM API calls you make. Using local models (Ollama) is completely free.
OpenAF is an open-source automation framework for Java/JavaScript. mini-a is built as an OpenAF package (oPack). OpenAF provides the runtime, and mini-a provides the agent logic.
Visit [openaf.io](https://openaf.io) for installation instructions. It supports Linux, macOS, and Windows.
Anywhere OpenAF runs: Linux, macOS, Windows, and Docker. ARM and x86 architectures are supported.
OpenAF includes its own runtime, so you don't need to install Java separately.
It depends on your needs:
| Use Case | Recommended Model | Why |
|---|---|---|
| General purpose | openai:gpt-5.2 | Good balance of speed, cost, and quality |
| Budget-friendly | openai:gpt-5-mini | Low cost, good for simple tasks |
| Best quality | anthropic:claude-sonnet-4-20250514 | Strong reasoning and coding |
| Privacy/local | ollama:llama3 | Runs locally, no data leaves your machine |
| AWS environments | bedrock:anthropic.claude-sonnet-4-20250514-v1:0 | Uses existing AWS credentials |
Yes. Install Ollama, pull a model (`ollama pull llama3`), and set:

```shell
export OAF_MODEL="(type: ollama, model: 'llama3', url: 'http://localhost:11434')"
```

No API key needed. All processing stays on your machine.
Use the built-in model manager for encrypted storage:

```shell
mini-a modelman=true
```

This provides a TUI to store, encrypt, and manage credentials. Alternatively, use environment variables.
Yes. Use the `/model` command in the console to see the current model. To change models, restart with a different `OAF_MODEL` value, or use the model manager.
Yes. mini-a supports:
- Custom slash command templates in `~/.openaf-mini-a/commands/<name>.md`
- Skills in `~/.openaf-mini-a/skills/<name>/SKILL.md` or `~/.openaf-mini-a/skills/<name>.md`
- Console hooks in `~/.openaf-mini-a/hooks/*.yaml|*.yml|*.json`
- Extra loading paths via `extracommands=...`, `extraskills=...`, and `extrahooks=...`
You can run templates directly with `mini-a exec="/<name> arg1 arg2"` and list skills with `/skills`.
Supported placeholders inside command/skill templates are `{{args}}`, `{{argv}}`, `{{argc}}`, and positional `{{arg1}}`, `{{arg2}}`, ...
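For illustration, a hypothetical `~/.openaf-mini-a/commands/review.md` using those placeholders might look like:

```markdown
Review {{arg1}} and report the top {{arg2}} issues.
All arguments: {{args}} ({{argc}} total)
```

It could then be invoked as `mini-a exec="/review src/main.js 3"`.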
Reference: mini-a USAGE.md
MCP (Model Context Protocol) is a standard protocol for connecting LLMs to external tools and data sources. mini-a includes 20+ built-in MCP servers for common tasks. See the [MCP Catalog]({{ '/mcp-catalog' | relative_url }}).
```shell
mini-a mcp="(cmd: 'ojob mcps/mcp-time.yaml')" goal='What time is it?'
```

If the MCP server loads correctly, you'll see its tools listed in the startup output.
Yes. Any server implementing the MCP protocol (STDIO or HTTP) can be used with mini-a. Point to your server with a full path or URL.
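Following the `(cmd: ...)` form used on this page, a custom STDIO server could be attached with a full path (the path below is hypothetical):

```shell
# Hypothetical path to your own MCP server definition
mini-a mcp="(cmd: 'ojob /opt/my-mcp/server.yaml')" goal='Use my custom tools'
```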
There's no hard limit, but with three or more MCPs it's best to enable proxy mode to reduce overhead:

```shell
mini-a mcpproxy=true mcp="[(cmd: 'ojob mcps/mcp-time.yaml'), (cmd: 'ojob mcps/mcp-web.yaml'), (cmd: 'ojob mcps/mcp-db.yaml jdbc=jdbc:h2:./data user=sa pass=sa')]"
```

Shell access is off by default. When enabled (`useshell=true`), consider:
- Keep `readwrite=false` (the default) to prevent file modifications
- Use `shellallow` to whitelist specific commands
- Use `shellban` to block dangerous commands
- Use Docker for full isolation
```shell
# Safe shell access: read-only with allowed commands only
mini-a useshell=true readwrite=false shellallow='git,ls,cat,grep'
```

Run mini-a in a container so the agent can only affect the container environment:
```shell
docker run --rm -e OAF_MODEL="(type: openai, model: gpt-5.2, key: '...')" \
  -v $(pwd):/work:ro openaf/mini-a useshell=true goal='Analyze the project in /work'
```

The `:ro` mount flag ensures files are read-only inside the container.
When using the model manager (`modelman=true`), keys are stored encrypted on disk. Environment variables are not encrypted but are standard practice for API key management.
- Use dual-model: set `OAF_LC_MODEL` for simple tasks (50-70% savings)
- Compact conversations: use `/compact` or set `maxcontext`
- Be specific: precise goals = fewer tokens
- Limit steps: set `maxsteps` to prevent runaway agents
Set a powerful model for complex tasks and a cheaper model for simple ones:
```shell
export OAF_MODEL="(type: openai, model: gpt-5.2, key: '...')"       # Complex reasoning
export OAF_LC_MODEL="(type: openai, model: gpt-5-mini, key: '...')" # Routing, summarization
```

mini-a automatically routes tasks to the appropriate model. Simple operations (summarization, classification, routing) go to the cheaper model, saving 50-70% on typical workloads.
Use `/stats` in the console to see token counts, model usage, and estimated costs for the current session.
Check that `OAF_MODEL` is in the expected SLON/JSON-style configuration format and that credentials are present:

```shell
echo $OAF_MODEL
# Should show something like: (type: openai, model: gpt-5.2, key: '...')
```

Shell access is disabled by default. Enable it:
```shell
mini-a useshell=true
```

Your conversation exceeded the context window. Solutions:
```shell
/compact                 # Manual compaction
mini-a maxcontext=40000  # Set a limit
/reset                   # Start fresh
```

Set a step limit to prevent infinite loops:
```shell
mini-a maxsteps=20 goal='...'
```

- Check your internet connection
- Verify the API key is valid and has credits
- Check for rate limiting (`rpm`, `tpm` parameters)
- For Ollama, ensure the service is running (`ollama serve`)
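If Ollama is the suspect, a quick way to confirm the server is reachable (Ollama's standard local API on port 11434; not a mini-a command) is:

```shell
# Lists pulled models as JSON when the Ollama server is up
curl -s http://localhost:11434/api/tags
# If the request fails, start the server:
ollama serve
```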
Ensure you specified a port and check for conflicts:

```shell
mini-a onport=8080

# If port is busy, try another:
mini-a onport=3000
```

Still stuck? Open an issue with the error message and your configuration (redact API keys).