OrnnSkills is a background meta-agent. It does not replace the main agent to execute tasks; instead, it continuously observes the main agent's real executions and maintains, for each project, a shadow copy of the skills in the global skill registry. Based on execution traces, it applies small, automatic, and reversible optimizations to that shadow copy.
- 🔍 Smart Observation: Collect execution traces from Agents like Codex/OpenCode/Claude
- 🎯 Precise Mapping: Intelligently map traces to corresponding skills with 6 mapping strategies
- 🔄 Automatic Optimization: Automatically optimize skills based on real execution data
- 📦 Shadow Copy: Maintain independent skill copies for each project without polluting the global registry
- 🔙 Rollback Support: All modifications have evolution logs and checkpoints, supporting one-click rollback
- 🚀 Seamless Operation: Runs automatically in the background without manual intervention
```bash
npm install -g ornn-skills
```

Before using OrnnSkills, make sure you have:
- Node.js 18+ installed
- An Agent (Codex/OpenCode/Claude) running in your project
```bash
cd /path/to/your/project
```

Run `ornn init` inside each project you want OrnnSkills to monitor. After registration, `ornn start`/`ornn restart` run as a single global daemon that monitors all initialized projects together, so you do not need to start one daemon per project.

```bash
ornn init
```

This will:

- Create a `.ornn/` directory in your project
- Register the project in OrnnSkills' global project registry
- Generate default configuration files
- Scan and register global skills
```bash
ornn start
```

This starts the background daemon, which will:

- Load all projects previously registered by `ornn init`
- Monitor your Agent's execution traces
- Automatically optimize skills based on real usage
- Run continuously in the background
```bash
ornn status
```

View the current status of the daemon and shadow skills.

```bash
ornn stop
```

Stop the background daemon when you're done.
```bash
ornn skills log <skill-id>
ornn skills rollback <skill-id> --to rev_8
ornn skills freeze <skill-id>
ornn skills unfreeze <skill-id>
```

```
┌─────────────────────────────────────────────────────┐
│                   Main Agent Host                   │
│               (Codex/OpenCode/Claude)               │
└─────────────────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────┐
│                 TraceSkillObserver                  │
│ - Listen to trace events                            │
│ - Real-time mapping of traces to skills             │
│ - Aggregate traces by skill                         │
│ - Trigger evaluation callbacks                      │
└─────────────────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────┐
│                  TraceSkillMapper                   │
│ - 6 mapping strategies                              │
│ - Path extraction                                   │
│ - Semantic inference                                │
│ - Confidence calculation                            │
└─────────────────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────┐
│                OptimizationPipeline                 │
│ - Get traces grouped by skill                       │
│ - Call Evaluator for assessment                     │
│ - Generate optimization tasks                       │
│ - Trigger Patch Generator                           │
└─────────────────────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────┐
│                Shadow Skill Manager                 │
│ ├─ Origin Registry (Global skill scanning)          │
│ ├─ Shadow Registry (Project skill management)       │
│ ├─ Evolution Evaluator (Optimization assessment)    │
│ ├─ Patch Generator (Patch generation)               │
│ └─ Journal Manager (Evolution logs)                 │
└─────────────────────────────────────────────────────┘
                          │
                          ▼
         Project Shadow Skills (.ornn/skills/*)
```
The system uses six strategies to map traces to their corresponding skills:

| Strategy | Trigger Condition | Confidence | Description |
|---|---|---|---|
| Strategy 1 | `tool_call` reads skill file | 0.95 | Most reliable mapping method |
| Strategy 2 | `tool_call` executes skill-related operations | 0.85 | Inferred from tool parameters |
| Strategy 3 | `file_change` modifies skill file | 0.90 | File changes clearly point to a skill |
| Strategy 4 | `metadata` contains `skill_id` | 0.98 | Explicit skill identifier |
| Strategy 5 | `assistant_output` references skill | 0.60 | Inferred from output content |
| Strategy 6 | `user_input` requests skill | 0.50 | Inferred from user input |
The system implements a complete automatic optimization loop:

1. Trace Collection: Collect execution traces from the Agent host
2. Trace-Skill Mapping: Intelligently map traces to corresponding skills
3. Evaluation: Analyze trace patterns and identify optimization opportunities
4. Task Generation: Create optimization tasks
5. Optimization Execution: Apply patches to shadow skills
6. Log Recording: Save evolution history and snapshots
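The evaluation and task-generation steps (3-4) can be sketched as a reduction over mapped traces. This is an illustrative sketch with hypothetical `Trace`/`OptimizationTask` shapes; the real pipeline types are internal to OrnnSkills:

```typescript
// Hypothetical shapes, not the real OrnnSkills API.
interface Trace {
  skillId: string; // assigned by trace-skill mapping (steps 1-2)
  ok: boolean;     // whether the traced step succeeded
}

interface OptimizationTask {
  skillId: string;
  patchType: "append_context" | "tighten_trigger" | "add_fallback" | "prune_noise";
}

// Group mapped traces by skill.
function groupBySkill(traces: Trace[]): Map<string, Trace[]> {
  const groups = new Map<string, Trace[]>();
  for (const t of traces) {
    let bucket = groups.get(t.skillId);
    if (!bucket) {
      bucket = [];
      groups.set(t.skillId, bucket);
    }
    bucket.push(t);
  }
  return groups;
}

// Emit a task when a skill shows repeated failures across its traces.
function generateTasks(groups: Map<string, Trace[]>): OptimizationTask[] {
  const tasks: OptimizationTask[] = [];
  for (const [skillId, skillTraces] of groups) {
    const failures = skillTraces.filter((t) => !t.ok).length;
    if (failures >= 2) {
      tasks.push({ skillId, patchType: "add_fallback" });
    }
  }
  return tasks;
}
```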
```toml
[mapper]
min_confidence = 0.5      # Minimum confidence threshold
persist_mappings = true   # Whether to save mapping relationships to the database

[observer]
buffer_size = 10          # Buffer size
flush_interval = 5000     # Periodic flush interval (milliseconds)

[pipeline]
auto_optimize = true      # Whether to enable automatic optimization
min_confidence = 0.7      # Minimum confidence for optimization tasks
```

```
your-project/
└── .ornn/
    ├── skills/
    │   └── <skill-id>/
    │       ├── current.md         # Current shadow skill content
    │       ├── meta.json          # Metadata
    │       ├── journal.ndjson     # Evolution logs
    │       └── snapshots/         # Snapshots
    │           ├── rev_0005.md
    │           └── rev_0010.md
    ├── state/
    │   ├── sessions.db            # SQLite database
    │   ├── traces.ndjson          # Raw traces
    │   └── runtime_state.json     # Host state
    └── config/
        └── settings.toml          # Project configuration
```
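Evolution logs are stored as NDJSON, i.e. one JSON object per line. A minimal reader sketch, assuming a hypothetical record shape with `rev`, `type`, and `ts` fields (the actual journal schema is not documented here):

```typescript
// Hypothetical record shape; OrnnSkills' real journal fields may differ.
interface JournalEntry {
  rev: number;  // revision the entry belongs to
  type: string; // e.g. "patch" or "snapshot"
  ts: string;   // timestamp
}

// Parse an NDJSON journal: one JSON object per non-empty line.
function parseJournal(ndjson: string): JournalEntry[] {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as JournalEntry);
}
```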
| Command | Description |
|---|---|
| `ornn init` | Initialize the current project and register it globally |
| `ornn start` | Start the global daemon in the background |
| `ornn stop` | Stop the daemon |
| `ornn daemon` | Manage the daemon (start, stop, status, restart) |
| `ornn logs` | View daemon logs |
| `ornn config` | Manage configuration |
| `ornn completion` | Generate a shell completion script |
| `ornn skills status` | View shadow skill status for the current project |
| `ornn skills log <skill>` | View the evolution log for a skill |
| `ornn skills diff <skill>` | View the diff between current content and origin |
| `ornn skills rollback <skill> --to <rev>` | Roll back to the specified revision |
| `ornn skills freeze <skill>` | Pause automatic optimization for a skill |
| `ornn skills unfreeze <skill>` | Resume automatic optimization |
| `ornn skills sync <skill>` | Resync with origin |
| `ornn skills preview <skill>` | Preview optimization suggestions |
The system automatically performs the following types of optimizations:
- ✅ append_context: Supplement project-specific context
- ✅ tighten_trigger: Tighten applicability conditions
- ✅ add_fallback: Add high-frequency fallback handling
- ✅ prune_noise: Remove low-value noise descriptions
The following operations are not automatically performed by default:
- ❌ Large-scale rewriting of entire skills
- ❌ Deleting large amounts of core steps
- ❌ Changing the overall goal of a skill
- ❌ Writing back to global origin
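The policy above amounts to an allowlist check before any patch is applied automatically. A minimal sketch; `isAutoApplicable` and `skillFrozen` are illustrative names, not OrnnSkills' real API:

```typescript
// The four conservative patch types the document says are auto-applied.
const ALLOWED_PATCH_TYPES = new Set([
  "append_context",
  "tighten_trigger",
  "add_fallback",
  "prune_noise",
]);

// A patch is auto-applicable only if the skill is not frozen and the
// patch type is on the allowlist; rewrites, deletions, goal changes,
// and origin write-backs all fall through to false.
function isAutoApplicable(patchType: string, skillFrozen: boolean): boolean {
  if (skillFrozen) return false;
  return ALLOWED_PATCH_TYPES.has(patchType);
}
```

This mirrors the `allowed_types` list in the `[patch]` configuration below and the effect of `ornn skills freeze`.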
```toml
[origin_paths]
paths = ["~/.skills", "~/.claude/skills"]

[observer]
enabled_runtimes = ["codex", "opencode", "claude"]
trace_retention_days = 30

[evaluator]
min_signal_count = 3
min_source_sessions = 2
min_confidence = 0.7

[patch]
allowed_types = ["append_context", "tighten_trigger", "add_fallback", "prune_noise"]
cooldown_hours = 24
max_patches_per_day = 3

[journal]
snapshot_interval = 5
max_snapshots = 20

[daemon]
auto_start = true
log_level = "info"
```

To prevent runaway loops from spamming model providers and causing abnormal cost spikes, `LiteLLMClient` now enforces a local in-process safety guard before each provider call.
Default limits:

- 12 requests per 60 s rolling window
- 2 concurrent in-flight requests
- 48,000 estimated tokens per 60 s rolling window
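The request limit can be pictured as a rolling-window counter. A minimal sketch with the documented defaults (illustrative only; the actual guard in `LiteLLMClient` also tracks concurrency and estimated tokens):

```typescript
// Minimal rolling-window request guard; names are assumptions.
class RollingWindowGuard {
  private timestamps: number[] = [];

  constructor(
    private maxRequests = 12,     // requests allowed per window
    private windowMs = 60_000,    // rolling window length in ms
  ) {}

  // Returns true (and records the request) if the call may proceed.
  tryAcquire(now: number): boolean {
    // Drop entries that have fallen out of the rolling window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}
```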
You can override these defaults with environment variables:

```bash
ORNN_LLM_SAFETY_ENABLED=true
ORNN_LLM_SAFETY_WINDOW_MS=60000
ORNN_LLM_MAX_REQUESTS_PER_WINDOW=12
ORNN_LLM_MAX_CONCURRENT_REQUESTS=2
ORNN_LLM_MAX_ESTIMATED_TOKENS_PER_WINDOW=48000
```

```toml
[project]
name = "my-project"
auto_optimize = true

[skills]
# Specific skill configuration overrides
[skills.my-skill]
auto_optimize = false  # Freeze this skill
```

```bash
npm install
npm run dev
npm run build
npm test
npm run lint
npm run format
```

- TypeScript: Type-safe JavaScript
- Node.js: Runtime environment
- Commander.js: CLI framework
- SQLite: Local database
- Winston: Logging system
- Vitest: Testing framework
- PRD - Product Requirements Document
- Skill Domain Refactor Plan
- Trace-Skill Mapping Documentation
- User Guide
MIT License - See LICENSE file for details.
Contributions are welcome! Please read the Contributing Guide for details.