This document provides a high-level overview of the NeuroCore system architecture, its core layers, and the execution engine.
NeuroCore follows a layered architecture, separating the presentation, core logic, extensible modules, and data persistence.
┌─────────────────────────────────────────────────────────────────────────────┐
│ PRESENTATION LAYER │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Chat UI │ │ Flow Editor │ │ Memory │ │ Module Dashboard │ │
│ │ (HTMX) │ │ (Canvas) │ │ Browser │ │ (Settings) │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ └──────────┬──────────┘ │
└─────────┼────────────────┼────────────────┼────────────────────┼────────────┘
│ │ │ │
└────────────────┴────────────────┴────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ CORE LAYER │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │
│ │ FlowRunner │ │FlowManager │ │ModuleManager│ │ SettingsManager│ │
│ │ (Execution) │ │ (Storage) │ │ (Lifecycle) │ │ (Config) │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ └────────┬────────┘ │
│ │ │ │ │ │
│ └────────────────┴────────────────┴──────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ LLMBridge │ │Observability│ │ Routers │ │ Dependencies│ │
│ │ (API Client)│ │(Trace/Metr.)│ │ (HTTP API) │ │ (DI) │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
└────────────────────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ MODULE LAYER (18 Modules) │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │llm_module│ │ system │ │ memory │ │ tools │ │ chat │ │
│ │ (Core) │ │ _prompt │ │ (Vector) │ │(Sandbox) │ │ (I/O) │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ logic │ │ planner │ │reflection│ │knowledge │ │ agent_ │ │
│ │(Control) │ │(Planning)│ │(Quality) │ │ _base │ │ loop │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │messaging │ │ calendar │ │ skills │ │reasoning │ │ browser │ │
│ │ _bridge │ │ (Events) │ │(Instruct)│ │ _book │ │ _auto │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ memory_ │ │annotatio │ │ email_ │ │
│ │ browser │ │ ns │ │ bridge │ │
│ └──────────┘ └──────────┘ └──────────┘ │
└────────────────────────────────────┬──────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ DATA LAYER │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ SQLite │ │ FAISS │ │ JSON │ │ File │ │
│ │ (Metadata) │ │ (Vectors) │ │ (Config) │ │ Storage │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
- UI: Built with FastAPI, HTMX, and Tailwind CSS to provide a responsive, real-time interface (39 Jinja2 templates).
- Flow Editor: A visual canvas for designing DAG-based AI workflows with version history and rollback.
- Chat Interface: Web UI with real-time streaming and thinking trace display.
The "brain" of the framework, handling orchestration, configuration, and foundational services.
- FlowRunner (`core/flow_runner.py`): Executes flows using Kahn's topological sort and bridge group logic. Supports `timeout`, `raise_errors`, and `episode_id` parameters for episode persistence. A per-event-loop executor cache (max 100 entries) avoids re-instantiation.
- ModuleManager (`core/module_manager.py`): Handles hot-loading and unloading of extension modules. Uses a `_loaded_once` set to distinguish initial loads from hot-reloads (prevents a premature `sys.modules` flush). Enforces the `module_allowlist` from settings.
- FlowManager (`core/flow_manager.py`): Manages persistence and CRUD for AI flows, including version history (up to 20 versions per flow, stored in `ai_flows_versions.json`).
- SettingsManager (`core/settings.py`): Centralized, thread-safe configuration. Writes via atomic tempfile+rename. Validates all settings on save.
- SessionManager / EpisodeState (`core/session_manager.py`): Persists chat sessions and long-running episode state (plan, current step, completed steps, phase) to `data/episodes/`. Phases: `PHASE_PLANNING`, `PHASE_EXECUTING`, `PHASE_REPLANNING`, `PHASE_COMPLETED`, `PHASE_FAILED`, `PHASE_PAUSED`.
- PlanHelper (`core/planner_helpers.py`): Shared utility consolidating plan dependency-graph logic. Provides `build_dependency_graph`, `detect_circular_dependencies`, `get_executable_steps`, `generate_plan_context`, and `validate_dependencies`.
- FlowContext (`core/flow_context.py`): Pydantic-based, type-safe flow payload model. Provides runtime validation, `messaging_platform`/`messaging_reply_to` fields, and IDE-friendly type hints.
- FlowData (`core/flow_data.py`): `TypedDict`-based schema for flow payloads. Includes helper functions (`get_messages`, `set_plan`, etc.) and backward-compatible migration utilities. Declares all reserved keys, including `_messaging_platform` and `_messaging_reply_to`.
- Error Hierarchy (`core/errors.py`): 18 typed exception classes: `NeuroCoreError`, `LLMError`, `LLMTimeoutError`, `LLMHTTPError`, `LLMResponseError`, `ToolError`, `ToolExecutionError`, `ToolTimeoutError`, `SandboxSecurityError`, `FlowError`, `FlowNotFoundError`, `FlowValidationError`, `NodeExecutionError`, `ModuleError`, `ModuleNotFoundError`, `ModuleLoadError`, `MemoryError`, `MemoryConsolidationError`.
- Observability (`core/observability.py`): Distributed tracing (span-based, with parent-child relationships), metrics collection (counters, gauges, histograms with p50/p95/p99), and structured JSON logging. Metric counters are persisted across restarts.
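A minimal sketch of how a layered error hierarchy like the one in `core/errors.py` lets callers catch at the right granularity. The class names come from the list above; the class bodies and the `classify` helper are illustrative assumptions, not the framework's actual code:

```python
class NeuroCoreError(Exception):
    """Base class for all framework errors (name from core/errors.py)."""

class LLMError(NeuroCoreError):
    """Base class for LLM bridge failures."""

class LLMTimeoutError(LLMError):
    """The LLM API call exceeded its deadline."""

def classify(exc: BaseException) -> str:
    # Hypothetical policy: catch the narrow class first, then the family,
    # then the framework base, so handlers stay precise.
    if isinstance(exc, LLMTimeoutError):
        return "retry"          # transient: worth retrying the call
    if isinstance(exc, LLMError):
        return "fail-node"      # LLM family: fail only this node
    if isinstance(exc, NeuroCoreError):
        return "fail-flow"      # any framework error: abort the flow
    return "crash"              # unknown exception: let it propagate
```

Because every class derives from `NeuroCoreError`, a single `except NeuroCoreError` at the flow boundary distinguishes framework failures from genuine bugs.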
18 independent, self-contained directories under `modules/`, each extending system functionality.
- Each module implements the `NodeExecutor` interface (`receive` and `send`).
- Modules are hot-loadable: enable/disable without restarting the server.
- See MODULE_GUIDE.md for the full development guide.
- SQLite: Stores relational metadata, structured memory, and conversation history (WAL mode, FTS5 full-text search).
- FAISS: `IndexFlatIP` with L2 normalization for high-performance similarity search (memory + knowledge base).
- JSON: Local configuration and flow definitions (`ai_flows.json`, `ai_flows_versions.json`, `chat_sessions.json`, `data/reasoning_book.json`).
- JSONL: `data/execution_trace.jsonl` stores per-node execution traces for debugging (append-only, written only when `debug_mode=true`).
- Episodes: `data/episodes/` stores serialized `EpisodeState` objects for long-running agent tasks.
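The FAISS setup above relies on a standard identity: after L2 normalization, the plain inner product that `IndexFlatIP` computes equals cosine similarity. A stdlib-only sketch of that idea (the helper names and sample vectors are illustrative, not NeuroCore code):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length so inner product equals cosine similarity."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

# With unit-length vectors, ranking by inner product (what IndexFlatIP does)
# is ranking by cosine similarity.
corpus = [l2_normalize(v) for v in ([1.0, 0.0], [0.6, 0.8], [0.0, 1.0])]
query = l2_normalize([1.0, 0.1])
scores = [inner_product(doc, query) for doc in corpus]
best = scores.index(max(scores))  # the corpus vector most aligned with the query
```

This is why the document pairs `IndexFlatIP` with L2 normalization: the index itself only does dot products, and normalization turns those into cosine scores.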
Domain models for academic research management, usable as structured output targets:
| Schema | File | Purpose |
|---|---|---|
| Hypothesis | `hypothesis.py` | Scientific hypotheses with variables and confidence levels |
| Article | `article.py` | Academic articles with bibliographic info and citations |
| Finding | `finding.py` | Research findings with evidence linking |
| StudyDesign | `study_design.py` | Scientific study designs with methodology |
NeuroCore executes workflows as Directed Acyclic Graphs (DAGs).
- [Optional] Episode Restore: If `episode_id` is provided, `EpisodeState` is loaded from `data/episodes/` and injected into `initial_input`.
- Topological Sort: Kahn's algorithm determines the execution sequence.
- Bridge Groups: Parallel components are grouped using BFS to enable implicit data sharing.
- Node Execution: Each node processes input via its `receive` method and produces output via `send`. The messages list is deep-copied before each node to prevent cross-node mutation.
- Conditional Routing: Dynamic branching is driven by `_route_targets`.
- Loop Guard: A safety counter (`max_node_loops`, default 100, max 1000) prevents infinite loops.
- Timeout: Optional per-flow `timeout` parameter wraps execution in `asyncio.wait_for`.
- Error Mode: `raise_errors=True` propagates node exceptions instead of returning error dicts.
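The topological-sort step above is standard Kahn's algorithm. A self-contained sketch (function name and data shapes are illustrative; FlowRunner's actual signature may differ):

```python
from collections import deque

def kahn_topological_sort(nodes, edges):
    """Return an execution order for a DAG, raising if a cycle is found.

    nodes: iterable of node ids; edges: list of (src, dst) pairs.
    """
    indegree = {n: 0 for n in nodes}
    adjacency = {n: [] for n in nodes}
    for src, dst in edges:
        adjacency[src].append(dst)
        indegree[dst] += 1

    # Start from nodes with no unmet dependencies.
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in adjacency[node]:
            indegree[succ] -= 1
            if indegree[succ] == 0:   # all dependencies satisfied
                queue.append(succ)

    if len(order) != len(indegree):
        raise ValueError("cycle detected: flow graph is not a DAG")
    return order
```

If any node never reaches indegree zero, the graph has a cycle, which is how an invalid (non-DAG) flow is rejected before execution.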
Bridges create implicit bidirectional connections between nodes in the same "bridge group," enabling Memory Recall, System Prompt, and LLM Core to share a unified execution context without explicit wires.
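Grouping bridged nodes with BFS, as described above, amounts to finding connected components over the undirected bridge edges. A sketch under that assumption (names and example node ids are illustrative, not the runner's real API):

```python
from collections import deque

def bridge_groups(nodes, bridges):
    """Group nodes into connected components over undirected bridge edges."""
    neighbors = {n: set() for n in nodes}
    for a, b in bridges:
        neighbors[a].add(b)
        neighbors[b].add(a)

    seen, groups = set(), []
    for start in nodes:
        if start in seen:
            continue
        # BFS flood-fill from each unvisited node collects one bridge group.
        queue, group = deque([start]), []
        seen.add(start)
        while queue:
            node = queue.popleft()
            group.append(node)
            for nxt in neighbors[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        groups.append(group)
    return groups
```

Every node in one group can then share a single execution context, which is what lets Memory Recall, System Prompt, and LLM Core cooperate without explicit wires.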
The flow runner uses an `input_node_map` to route incoming data to the correct source node:

| Source | Target Node |
|---|---|
| `chat` | `chat_input` |
| `discord` | `discord_input` (legacy) |
| `messaging` | `messaging_input` |
The `messaging_bridge` module provides a unified interface for all messaging platforms through a single pair of nodes:

- `messaging_input`: Receives messages from any platform; filters by the platform list in config.
- `messaging_output`: Routes replies back to the originating platform or to configured proactive recipients.
| Platform | Mechanism | Notes |
|---|---|---|
| Telegram | HTTP long-polling | 3072-char chunking |
| Discord | WebSocket Gateway v10 | 1900-char chunking, heartbeat loop |
| Signal | HTTP polling (signal-cli REST) | 1800-char chunking |
| WhatsApp | Webhook (Evolution API) | 4000-char chunking, no polling thread |
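The per-platform character limits above imply a chunking step before sending. A sketch of one plausible approach, breaking at the last newline or space inside the window (the function name and exact break policy are assumptions, not the bridges' actual code):

```python
def chunk_message(text: str, limit: int) -> list[str]:
    """Split a reply into pieces no longer than the platform's limit,
    preferring to break at the last newline or space inside the window."""
    chunks = []
    while len(text) > limit:
        window = text[:limit]
        cut = max(window.rfind("\n"), window.rfind(" "))
        if cut <= 0:
            cut = limit  # no soft break point: hard split
        chunks.append(text[:cut].rstrip())
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks
```

Usage would pass the limit from the table, e.g. `chunk_message(reply, 3072)` for Telegram or `chunk_message(reply, 1900)` for Discord.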
| Key | Source | Purpose |
|---|---|---|
| `_messaging_platform` | `MessagingInputExecutor` | Platform that originated the message |
| `_messaging_reply_to` | `MessagingInputExecutor` | Sender address (chat_id/channel/phone/JID) |
`modules/messaging_bridge/node.py` defines `MESSAGING_PLATFORMS` as the single source of truth for all supported platforms. Adding a new platform requires only appending one entry plus implementing a bridge class.
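A hypothetical sketch of what such a registry might look like; the field names, chunk limits (taken from the table above), and `register_platform` helper are assumptions, not the real contents of `node.py`:

```python
# Illustrative shape of a MESSAGING_PLATFORMS registry: one entry per
# platform, pointing at the bridge class that implements its transport.
MESSAGING_PLATFORMS = {
    "telegram": {"chunk_limit": 3072, "bridge": "TelegramBridge"},
    "discord":  {"chunk_limit": 1900, "bridge": "DiscordBridge"},
    "signal":   {"chunk_limit": 1800, "bridge": "SignalBridge"},
}

def register_platform(name: str, chunk_limit: int, bridge: str) -> None:
    """Adding a platform is one registry entry plus a bridge class."""
    MESSAGING_PLATFORMS[name] = {"chunk_limit": chunk_limit, "bridge": bridge}
```

The single-source-of-truth design means routing, chunking, and the settings UI can all iterate the same dict instead of hard-coding platform lists in several places.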
`core/routers.py` implements a full community marketplace for sharing AI flows, skills, tools, and prompts between NeuroCore instances.
| File | Purpose |
|---|---|
| `data/marketplace/catalog.json` | Central catalog of all uploaded items |
| `data/marketplace/uploads/` | Uploaded item files |
| `data/marketplace_profile.json` | Local uploader profile (handle, username, description) |
| `data/marketplace_notifications.json` | In-app notification queue (capped at 200) |
| `data/download_history.json` | Per-item import history for dedup tracking |
Each NeuroCore instance has an immutable uploader handle (a 12-char HMAC-SHA256 hex string, derived from a local secret). The `uploader_username` and `uploader_description` fields are editable separately in `marketplace_profile.json`. The handle acts as a tamper-proof author identity.
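A sketch of how such a handle could be derived with the stdlib. The document only specifies "12-char HMAC-SHA256 hex from a local secret"; the key/message layout and function name here are assumptions:

```python
import hashlib
import hmac

def derive_handle(local_secret: bytes, context: bytes = b"uploader-handle") -> str:
    """Derive a stable 12-char hex handle from a local secret.

    The context label is a hypothetical domain separator; the real
    implementation may key and label the HMAC differently.
    """
    digest = hmac.new(local_secret, context, hashlib.sha256).hexdigest()
    return digest[:12]  # truncate the 64-char hex digest to 12 chars
```

Because HMAC output is deterministic for a given key, the handle is stable across restarts yet infeasible to forge without the local secret.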
| Type | Import Mechanism |
|---|---|
| `skill` | Copies the `.md` file to `modules/skills/data/` |
| `flow` | Imports into `ai_flows.json` via FlowManager |
| `tool` | Registers the JSON definition in `modules/tools/tools.json`, writes code to `modules/tools/library/{name}.py` using filelock |
| `prompt` | Copies the `.md` file to `modules/skills/data/prompts/` |
Notifications are generated server-side when:
- Someone comments on your uploaded item → a `"comment"` notification
- A comment contains `@{your_handle}` → a `"mention"` notification
Notifications are stored in `data/marketplace_notifications.json` and served via `GET /marketplace/notifications`. The badge count is fetched on marketplace page load.
Each item carries a changelog list: `[{version, notes, timestamp}]`. When a publisher updates an item, the new version is prepended. Visitors see the full changelog on the item detail page.
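A minimal sketch of the prepend behavior described above, using the `{version, notes, timestamp}` shape from the document (the helper name and timestamp format are assumptions):

```python
import time

def add_changelog_entry(item: dict, version: str, notes: str) -> dict:
    """Prepend a new version entry so the newest release is listed first."""
    entry = {"version": version, "notes": notes, "timestamp": time.time()}
    item.setdefault("changelog", []).insert(0, entry)
    return item
```

Prepending keeps the detail page's changelog in reverse-chronological order without any sorting at read time.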
The upload form requires an originality checkbox. On the backend, items flagged as `marketplace_import` in their metadata are rejected with HTTP 400, preventing re-upload of unmodified imported content.
The `email_bridge` module provides IMAP receive and SMTP send capabilities, bridging email into NeuroCore flows.
| Property | Value |
|---|---|
| Bridges | `ImapBridge` (receive), `SmtpBridge` (send) |
| Files | `imap_bridge.py`, `smtp_bridge.py`, `node.py`, `router.py`, `service.py` |
| Config | IMAP/SMTP server, port, credentials, polling interval |
The module follows the same pattern as `messaging_bridge`: an input node polls the IMAP inbox and an output node sends via SMTP.
NeuroCore uses a hybrid concurrency model:
- `threading.RLock`: Used for synchronous shared state (FlowManager, ModuleManager, SettingsManager, SessionPersistenceManager).
- `threading.Lock`: Used for single-level synchronous guards (Metrics, SessionManager instance, `_init_lock` singleton guard).
- `asyncio.Lock`: Used for asynchronous resources (LLM clients, FlowRunner cache per event loop, ChatSessions via `asyncio.to_thread`, ReasoningBook).
For detailed locking rules, see ./CONCURRENCY.md.
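One subtlety of the hybrid model is that an `asyncio.Lock` is bound to the event loop it was created on, which is why the document mentions a "FlowRunner cache per event loop". A sketch of that pattern, with a synchronous `threading.Lock` guarding the per-loop cache (names are illustrative, not NeuroCore's actual API):

```python
import asyncio
import threading

_cache_guard = threading.Lock()                 # sync guard for the dict itself
_locks_by_loop: dict[int, asyncio.Lock] = {}    # one async lock per event loop

def lock_for_current_loop() -> asyncio.Lock:
    """Return an asyncio.Lock dedicated to the currently running event loop."""
    loop = asyncio.get_running_loop()
    with _cache_guard:
        key = id(loop)
        if key not in _locks_by_loop:
            _locks_by_loop[key] = asyncio.Lock()  # created on this loop
        return _locks_by_loop[key]
```

Handing each loop its own lock avoids the "attached to a different loop" failures that occur when one `asyncio.Lock` is shared across loops; the `threading.Lock` only protects the dict lookup, never spanning an `await`.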
| Component | Technology |
|---|---|
| Backend | Python 3.12+, FastAPI 0.115+, Uvicorn 0.32+ |
| Frontend | HTMX, Vanilla JS, TailwindCSS (CDN), Jinja2 3.1+ |
| Vector DB | FAISS IndexFlatIP + L2 normalization |
| Relational DB | SQLite (WAL mode, FTS5 full-text search) |
| HTTP Client | HTTPX 0.28+ (async, connection pooling) |
| WebSocket | websockets 12.0+ (Discord Gateway, custom protocols) |
| LLM Integration | OpenAI-compatible API |
| Data Validation | Pydantic 2.10+ |
| Testing | pytest, pytest-asyncio (asyncio_mode = "auto"), pytest-httpx, pytest-cov |
| Deployment | Docker + docker-compose |
| Linting | Ruff (configured in pyproject.toml) |
All runtime configuration lives in `settings.json`. The SettingsManager provides atomic reads and writes protected by `threading.RLock`.
```python
DEFAULT_SETTINGS = {
    "llm_api_url": "http://localhost:1234/v1",
    "llm_api_key": "",
    "embedding_api_url": "",
    "default_model": "local-model",
    "embedding_model": "",
    "active_ai_flows": [],
    "temperature": 0.7,
    "max_tokens": 2048,
    "debug_mode": False,
    "ui_wide_mode": False,
    "ui_show_footer": True,
    "request_timeout": 60.0,
    "max_node_loops": 100,
    "module_allowlist": [],  # empty = allow all modules
}
```

`module_allowlist` is a security control: when non-empty, only listed module IDs can be hot-loaded by ModuleManager. `debug_mode` enables per-node execution tracing via observability and triggers `importlib.reload()` on every executor load.
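The "atomic tempfile+rename" write strategy mentioned for SettingsManager can be sketched with the stdlib. The function name is an assumption; the technique (write to a temp file in the same directory, then `os.replace`) is the standard way to guarantee readers never see a half-written `settings.json`:

```python
import json
import os
import tempfile

def atomic_write_json(path: str, data: dict) -> None:
    """Write JSON via tempfile + rename so readers never see a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    # Temp file must live on the same filesystem for the rename to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as fh:
            json.dump(data, fh, indent=2)
        os.replace(tmp_path, path)  # atomic swap on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)  # clean up the temp file on failure
        raise
```

Because `os.replace` swaps the whole file in one step, a crash mid-write leaves either the old settings or the new ones on disk, never a truncated mix.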
| Metric | Value |
|---|---|
| Active modules | 18 |
| Node executors | 28 |
| HTTP routes (core) | 74 |
| HTTP routes (total) | 167 |
| Test files | 72 |
| Test cases | 1,141+ |
| Web templates | 39 |
| Built-in tools | 23 (16 standard + 7 RLM) |
| Python files | 170+ |