Thanks for your interest! mini-agent is a perception-driven AI agent framework. Contributions that align with its design philosophy are welcome.
mini-agent is:
- Perception-driven — sees the environment first, then decides what to do
- File-based — Markdown + JSON Lines, no database
- Composable — shell scripts as perception, Markdown as skills
```bash
git clone https://github.com/miles990/mini-agent.git
cd mini-agent
pnpm install
pnpm build
```
```bash
pnpm build        # Compile TypeScript
pnpm typecheck    # Type-check without emitting
pnpm test         # Run tests (vitest)
pnpm test:watch   # Run tests in watch mode
```

To run the agent locally, create a `.env` file with at minimum:

```bash
MINI_AGENT_INSTANCE=dev
PORT=3001
```

Then run `node dist/cli.js` to start, or `node dist/cli.js up` to run the daemon. See README.md for the full environment variable reference.
A perception plugin is any executable that writes to stdout. Its output is injected into the agent's context as an XML section.
Create the script:
```bash
#!/bin/bash
# plugins/my-sensor.sh
# Output becomes <my-sensor>...</my-sensor> in agent context
echo "Status: $(systemctl is-active myservice)"
echo "Queue: $(wc -l < /tmp/queue.txt) items"
echo "Last error: $(tail -1 /var/log/myservice.err)"
```

Register it in `agent-compose.yaml`:
```yaml
perception:
  custom:
    - name: my-sensor
      script: ./plugins/my-sensor.sh
      # Optional:
      # timeout: 15000     # ms, default 5000
      # output_cap: 2500   # chars, default 4000
      # enabled: false     # disable without removing
```

Tips:
- Keep output concise — it's injected into the LLM context every cycle
- Use `output_cap` to limit verbose outputs
- Exit 0 on success; non-zero exits are logged but don't crash the agent
- Test by running `bash plugins/my-sensor.sh` directly
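Putting the tips together, here is a minimal plugin sketch that degrades gracefully instead of failing when its data source is absent. The queue-file path and `sensor` function name are illustrative, not part of mini-agent:

```shell
#!/usr/bin/env bash
# Hypothetical sensor: reports queue depth for the agent's context.
# A missing queue file is treated as an empty queue, not an error,
# so the plugin always exits 0 with concise output.
sensor() {
  local queue_file="${1:-/tmp/queue.txt}"
  if [ -f "$queue_file" ]; then
    echo "Queue: $(wc -l < "$queue_file") items"
  else
    echo "Queue: 0 items"
  fi
}

sensor "$@"
```

Run it directly (`bash plugins/my-sensor.sh`) to check the output before registering it.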
See `plugins/` for 34 examples.
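As noted above, a plugin's stdout is injected into the context as an XML section named after the plugin. A sketch of what that wrapping might look like — the exact format mini-agent uses internally is an assumption here:

```shell
#!/usr/bin/env bash
# Illustrative only: wrap a plugin's stdout in a tag named after it.
name="my-sensor"
output="Status: active"
printf '<%s>\n%s\n</%s>\n' "$name" "$output" "$name"
```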
Skills are Markdown files injected into the system prompt. They teach the agent how to do things.
```markdown
# My Domain Skill

## When to use
When the agent encounters [situation].

## Steps
1. Check [prerequisite]
2. Run `command`
3. If [condition], do X; otherwise do Y

## Rules
- Never do [dangerous thing]
- Always verify [result] before reporting success
```

Register it in `agent-compose.yaml`:
```yaml
skills:
  - ./skills/my-domain.md
```

Skills can be loaded conditionally (JIT) based on conversation keywords — add a `keywords` frontmatter:
```markdown
---
keywords: [docker, container, compose]
---
# Docker Operations Skill
...
```

Open an issue with:
- What happened vs. what you expected
- Steps to reproduce
- Environment (OS, Node version, `mini-agent status` output)
- Fork and branch (`git checkout -b fix/description`)
- Make changes — TypeScript strict mode, keep it minimal
- Run `pnpm typecheck` and `pnpm test` — both must pass
- Open a PR with a clear description of what changed and why
Look for issues labeled `good first issue` for starter tasks.
```
src/      # TypeScript source (~29K lines)
plugins/  # Perception plugins (shell scripts)
skills/   # Markdown knowledge modules
scripts/  # Utility scripts
memory/   # Agent memory (Markdown + JSONL)
```
Key files: `src/agent.ts` (core), `src/loop.ts` (OODA cycle), `src/perception.ts` (plugin runner), `src/compose.ts` (config loader), `src/dispatcher.ts` (response parser).
- TypeScript strict mode
- Field names consistent across endpoints, plugins, and types
- HTML files making API calls must be served via HTTP (not `file://`)
- No unnecessary abstractions — three similar lines > premature helper
All changes should pass these checks:
| Constraint | Ask yourself |
|---|---|
| Quality-First | Does this make the agent think better, not just faster? |
| Token Economy | Does this make context more precise, not just smaller? |
| Transparency | Does any tracking add <5% cycle time? |
| Reversibility | Can this be reverted in <1 minute? |
| No Dead Code | Are there paths that never execute? |
- PRs are reviewed for correctness, alignment with design constraints, and code quality
- Plugin/skill PRs are typically reviewed faster — they're self-contained
- Code PRs (`src/`) require `pnpm typecheck` and `pnpm test` to pass
- Keep PRs focused — one change per PR is easier to review
- Issues — Bug reports, feature proposals, questions
- PRs — Code, plugins, skills, docs
By contributing, you agree that your contributions will be licensed under the MIT License.