# NeuroLoop™ – EXG-aware AI agent (Python edition)

A Python port of neuroloop, using litellm for multi-provider LLM access and prompt_toolkit for the interactive terminal UI.
neuroloop-py is an EXG-aware conversational AI agent that:

- Reads your brainwaves before every turn via `neuroskill status`
- Injects your live mental state into the system prompt so the AI responds with full awareness of how you actually feel — cognitively, emotionally, somatically
- Auto-labels notable moments (awe, grief, deep focus, moral clarity, etc.) as permanent EXG annotations
- Runs guided protocols (breathing, meditation, grounding, somatic scans, etc.) step by step, with OS notifications and EXG timestamps
- Searches the web, reads URLs, and maintains persistent memory across sessions
- Loads skill reference docs on demand based on what you're asking about
- Pre-warms the compare cache so session comparisons are instant when you ask
## Project layout

```
neuroloop/
├── main.py            Entry point — model selection, CLI args, asyncio.run()
├── agent.py           NeuroloopAgent — main loop, before_agent_start hook, tool dispatch
├── memory.py          ~/.neuroskill/memory.md — read/write persistent memory
├── prompts.py         STATUS_PROMPT + build_system_prompt() + read_skill_index()
├── neuroskill/
│   ├── run.py         run_neuroskill() — subprocess executor (npx neuroskill ...)
│   ├── signals.py     detect_signals() — 40+ regex-based domain signal detectors
│   ├── context.py     select_contextual_data() — parallel queries + skill injection
│   └── client.py      SkillConnection — WebSocket live event listener
└── tools/
    ├── web_fetch.py   web_fetch tool — URL → plain text
    ├── web_search.py  web_search tool — DuckDuckGo Lite (no API key)
    └── protocol.py    run_protocol tool — timed step execution + EXG labelling

NEUROLOOP.md           Capability index — always injected into the system prompt
METRICS.md             Full EXG metrics reference — injected on metric questions
skills/                One SKILL.md per neuroskill domain — injected on-demand
├── neuroskill-data-reference/
├── neuroskill-labels/
├── neuroskill-protocols/
├── neuroskill-recipes/
├── neuroskill-search/
├── neuroskill-sessions/
├── neuroskill-sleep/
├── neuroskill-status/
├── neuroskill-streaming/
└── neuroskill-transport/
```
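The `run.py` subprocess executor can be sketched roughly like this. This is a minimal illustration, not the real implementation: the `build_argv` helper, timeout value, and error handling are assumptions.

```python
import asyncio
import shlex


def build_argv(subcommand: str) -> list[str]:
    # Hypothetical helper: split "status --json" into the npx argv
    return ["npx", "neuroskill", *shlex.split(subcommand)]


async def run_neuroskill(subcommand: str, timeout: float = 30.0) -> str:
    """Run `npx neuroskill <subcommand>` and return its stdout as text."""
    proc = await asyncio.create_subprocess_exec(
        *build_argv(subcommand),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        out, err = await asyncio.wait_for(proc.communicate(), timeout)
    except asyncio.TimeoutError:
        proc.kill()           # don't leave the npx process orphaned
        await proc.wait()
        raise
    if proc.returncode != 0:
        raise RuntimeError(err.decode(errors="replace").strip())
    return out.decode(errors="replace")
```

Running the CLI through `asyncio.create_subprocess_exec` keeps the agent loop responsive while the EXG query is in flight.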
| TypeScript (neuroloop) | Python (neuroloop-py) |
|---|---|
| pi coding agent framework | litellm + prompt_toolkit |
| pi `ExtensionAPI` | `NeuroloopAgent` class |
| `before_agent_start` hook | `agent.before_agent_start()` async method |
| pi `registerTool` | `ALL_TOOLS` list (OpenAI function schema) |
| pi `InteractiveMode` TUI | prompt_toolkit `PromptSession` + rich |
| pi skill loader (`skillsOverride`) | `_load_skill()` / `_load_metrics_md()` |
| pi model-based skill invocation | Signal-driven injection in `context.py` |
| WebSocket EXG live panel | Per-turn `neuroskill status` + WS events |
| `~/.neuroloop/` agent dir | `~/.neuroskill/` (shared with TypeScript) |
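The per-turn hook on the Python side can be sketched as follows. This is a minimal illustration: the constructor signature and the stand-in helpers are assumptions, not the real `agent.py` API.

```python
import asyncio


class NeuroloopAgent:
    """Sketch of the per-turn hook. The injected callables stand in for
    the real run_neuroskill() and build_system_prompt() helpers."""

    def __init__(self, run_neuroskill, build_system_prompt):
        self._run = run_neuroskill
        self._build = build_system_prompt
        self.system_prompt = ""

    async def before_agent_start(self) -> None:
        # Read live EXG state, then rebuild the system prompt around it
        status = await self._run("status")
        self.system_prompt = self._build(exg_status=status)


# Demo with stand-in helpers (the real ones shell out to npx neuroskill)
async def _fake_run(subcommand: str) -> str:
    return "focus: high, stress: low"


def _fake_build(exg_status: str) -> str:
    return f"Live EXG status:\n{exg_status}"


agent = NeuroloopAgent(_fake_run, _fake_build)
asyncio.run(agent.before_agent_start())
```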
## Installation

```bash
cd /agent/ns/neuroloop-py
pip install -e .
# or: uv sync
```

Requires Python ≥ 3.12.
## Usage

```bash
# Interactive mode (auto-detects model)
neuroloop-py

# With a specific model
neuroloop-py --model claude-3-5-sonnet-20241022
neuroloop-py --model ollama/qwen3:5b

# With an initial message
neuroloop-py "How is my focus today?"

# Via python -m
python -m neuroloop --model gpt-4o "Summarise my last session"
```

Model selection order:

1. `--model MODEL` CLI flag
2. `NEUROLOOP_MODEL` environment variable
3. Auto-detect running Ollama → prefers `qwen3:5b`, falls back to first available model
4. Cloud API keys: `ANTHROPIC_API_KEY` → claude, `OPENAI_API_KEY` → gpt-4o, `GEMINI_API_KEY` → gemini
5. Hard default: `ollama/qwen3:5b`
Note: Ollama models don't support function calling — tools are disabled automatically. Cloud models (Anthropic, OpenAI, Gemini) get full tool access.
## Key bindings

| Key | Action |
|---|---|
| `ctrl-d` | Quit |
| `ctrl-c` | Cancel current LLM response (at prompt: press twice to quit) |
| `tab` | Autocomplete commands and `/neuro` subcommands |
## Commands

| Command | Description |
|---|---|
| `/exg` | Show live EXG snapshot |
| `/exg on` / `/exg off` | Toggle EXG display |
| `/neuro <cmd> [args]` | Run any neuroskill subcommand |
| `/memory` | Show persistent memory |
| `/model [name]` | Show or switch model |
| `/help` | List all commands |
| `/quit` | Exit |
| `!cmd` | Run a shell command |
## Tools

Tools are only available to cloud models (Anthropic, OpenAI, Gemini). Ollama models receive the same context but respond in plain text without tool calls.

| Tool | Description |
|---|---|
| `web_fetch` | Fetch any URL → plain text |
| `web_search` | DuckDuckGo Lite search (no API key needed) |
| `memory_read` | Read `~/.neuroskill/memory.md` |
| `memory_write` | Write / append to memory |
| `neuroskill_label` | Create a timestamped EXG annotation |
| `neuroskill_run` | Run any neuroskill subcommand |
| `prewarm` | Start background neuroskill compare-cache build |
| `run_protocol` | Execute a timed multi-step guided protocol with EXG labels |
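Since `ALL_TOOLS` uses the OpenAI function-calling schema (the format litellm accepts via its `tools=` parameter), one entry plausibly looks like this. Field values here are illustrative, not copied from `tools/web_search.py`:

```python
# Hypothetical shape of one ALL_TOOLS entry, for the web_search tool
WEB_SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web via DuckDuckGo Lite (no API key).",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
                "max_results": {"type": "integer", "default": 5},
            },
            "required": ["query"],
        },
    },
}

ALL_TOOLS = [WEB_SEARCH_TOOL]  # the real list holds all eight tools above
```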
## Signal-driven context injection

Every user prompt is scanned by `detect_signals()` (40+ regex patterns across 40+ domains). Matching signals trigger two things in parallel:

- **Neuroskill queries** — `neuroskill session`, `neuroskill sleep`, `search-labels …`, etc. run concurrently and are appended to the system context
- **Skill file injection** — the relevant `skills/*/SKILL.md` or `METRICS.md` is read and prepended to the context block
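A minimal sketch of the regex-driven detection (the patterns below are illustrative examples, not copied from `signals.py`):

```python
import re

# Illustrative subset of the 40+ domain detectors
SIGNAL_PATTERNS = {
    "sleep": re.compile(r"\b(sleep|slept|insomnia|nap)\b", re.I),
    "protocols": re.compile(r"\b(protocol|breathing|meditat\w*|grounding)\b", re.I),
    "compare": re.compile(r"\b(compare|versus|vs\.?)\b", re.I),
    "focus": re.compile(r"\b(focus|concentrat\w*|attention)\b", re.I),
}


def detect_signals(prompt: str) -> list[str]:
    """Return every signal whose pattern matches the user prompt."""
    return [name for name, pat in SIGNAL_PATTERNS.items() if pat.search(prompt)]
```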
| Signal | Skill injected | Queries fired |
|---|---|---|
| `protocols` | neuroskill-protocols | — |
| `sleep` | neuroskill-sleep | `sleep`, `search-labels sleep …` |
| `sessions` / `compare` | neuroskill-sessions | `sessions` |
| `compare` | neuroskill-search | `compare` (cached) |
| `labels_api` | neuroskill-labels | — |
| `metrics_ref` | neuroskill-data-reference + METRICS.md | — |
| `transport` | neuroskill-transport | — |
| `streaming` | neuroskill-streaming | — |
| `scripting` | neuroskill-recipes | — |
| `session` | neuroskill-status | `session 0` |
| `focus` / `stress` / `hrv` / … | — | `session 0`, `search-labels …` |
NEUROLOOP.md (capability overview) is always injected between the static guidance
and the live EXG context.
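That ordering suggests the assembly is essentially a join; a sketch, with illustrative parameter names rather than the real `prompts.py` signature:

```python
def build_system_prompt(static_guidance: str, capability_index: str,
                        exg_context: str) -> str:
    # NEUROLOOP.md (capability_index) sits between the static guidance
    # and the live EXG context, as described above
    return "\n\n".join([static_guidance, capability_index, exg_context])
```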
## Requirements

- Python ≥ 3.12
- `neuroskill` npm package reachable via `npx neuroskill`
- At least one of:
  - A running Ollama instance (no API key needed, no tools)
  - `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GEMINI_API_KEY` (full tool support)