Created by Louie Nemesh
Resonant IDE is a full-featured AI-native code editor built on the VS Code Open Source foundation, deeply integrated with the Resonant Genesis AI governance platform. Unlike traditional editors that bolt on AI as an afterthought, Resonant IDE was designed from the ground up with AI at its core — every feature, every tool, every workflow is AI-first.
This is not a wrapper. This is not a plugin. This is a complete development environment where the AI assistant has the same capabilities as you: it reads your files, runs your commands, searches your codebase, manages your git, edits your notebooks, browses the web, and deploys your code — all through a governed, auditable, identity-bound execution pipeline.
AI Chat + Editor + Terminal — Unified Workspace
11 AI Providers with BYOK (Bring Your Own Key)
Configurable Max Tool Loops — Up to Unlimited
| Feature | Resonant IDE | Traditional Editors | AI Wrappers |
|---|---|---|---|
| Native AI Agent | Built-in agentic loop with 59+ tools | Separate extension/plugin | Chat-only, no tools |
| Local + Cloud AI | Ollama, LM Studio, OpenAI, Anthropic, Groq | Cloud-only or local-only | Single provider |
| SAST & Architecture | AST analysis, dependency graphs, SAST, full-stack mapping | Basic search | No analysis |
| Platform Identity | DSID (Decentralized Semantic ID) per user | Username/password | API key |
| Memory System | Hash Sphere persistent memory across sessions | No memory | Chat history only |
| Tool Execution | 59 local tools + 433 platform API endpoints | Limited extensions | Sandboxed/limited |
| Governed Execution | Pre-execution policies, trust tiers, audit trails | No governance | No governance |
Resonant IDE uses a thin client + server orchestration model. The desktop app handles UI rendering, authentication, local LLM discovery, and tool execution. All AI orchestration intelligence (system prompts, tool selection, agentic loop, LLM provider routing) runs server-side in RG_Axtention_IDE.
┌──────────────────────────────────────┐ ┌──────────────────────────────────┐
│ Resonant IDE (Electron Client) │ │ RG_Axtention_IDE (Server) │
│ │ │ │
│ ┌────────────┐ ┌────────────────┐ │ │ ┌──────────────────────────────┐│
│ │ VS Code │ │ Resonant AI │ │ SSE │ │ Agentic Loop Engine ││
│ │ Core │ │ Extension │◄─┼──────┼──│ - System prompt (protected) ││
│ │ (Editor, │ │ │ │ │ │ - Smart tool selection ││
│ │ Terminal, │ │ Responsibilities│ │ │ │ - LLM calls (multi-prov) ││
│ │ Debug) │ │ ───────────── │──┼──────┼─►│ - Message history mgmt ││
│ │ │ │ • Auth (JWT) │ │ POST │ │ - Retry + rate limiting ││
│ │ │ │ • UI rendering │ │ │ │ - BYOK key resolution ││
│ │ │ │ • Tool executor│ │ │ └──────────────────────────────┘│
│ │ │ │ • LLM discovery│ │ │ │
│ │ │ │ (Ollama) │ │ │ ┌──────────────────────────────┐│
│ └────────────┘ └────────────────┘ │ │ │ 59 Tool Definitions ││
│ │ │ │ (never leave the server) ││
│ NO orchestration intelligence │ │ └──────────────────────────────┘│
│ NO system prompts │ │ │
│ NO tool definitions │ │ Providers: Groq, OpenAI, │
│ (this repo is public) │ │ Anthropic, Google, DeepSeek, │
│ │ │ Mistral + user BYOK keys │
└──────────────────────────────────────┘ └─────────────┬────────────────────┘
│
▼
┌──────────────────────────┐
│ Resonant Genesis Cloud │
│ (30+ microservices) │
│ │
│ Gateway → Auth → Chat │
│ Agents → Memory → Billing│
│ Blockchain → Marketplace │
└──────────────────────────┘
- User sends a prompt in the IDE chat
- Client POSTs to the server via `/api/v1/ide/agent-stream`
- Server selects tools, builds the system prompt, calls the LLM
- Server streams SSE events: `thinking`, `text`, `execute_tool`, `stats`, `done`
- On `execute_tool` → client runs the tool locally → POSTs the result back
- Server resumes the agentic loop → calls the LLM again → repeats until done
- For local Ollama: client sends a `local_llm` config, and the server proxies the call
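The client's half of this loop can be sketched in Python (illustrative only: the real client is the TypeScript extension; the event names match the SSE stream above, while the transport and the tool registry here are stand-ins):

```python
import json

# Hypothetical local tool registry; the real extension dispatches to
# toolExecutor.ts. The lambda stands in for an actual file read.
LOCAL_TOOLS = {
    "file_read": lambda args: f"<contents of {args['path']}>",
}

def run_agent_loop(events):
    """Consume SSE-style events; execute tools locally, collect the reply.

    In the real system each tool result is POSTed back to the server so
    the agentic loop can resume; here we just collect it.
    """
    transcript, tool_results = [], []
    for raw in events:
        event = json.loads(raw)
        kind = event["type"]
        if kind == "text":
            transcript.append(event["content"])
        elif kind == "execute_tool":
            # The tool runs on the local machine; only its result leaves it.
            result = LOCAL_TOOLS[event["tool"]](event["args"])
            tool_results.append({"tool": event["tool"], "result": result})
        elif kind == "done":
            break
    return transcript, tool_results

# Simulated SSE stream for one loop iteration.
stream = [
    json.dumps({"type": "execute_tool", "tool": "file_read",
                "args": {"path": "main.py"}}),
    json.dumps({"type": "text", "content": "The file defines main()."}),
    json.dumps({"type": "done"}),
]
text, results = run_agent_loop(stream)
```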
| File | Purpose | Lines |
|---|---|---|
| `extension.ts` | Main entry point — SSE client, tool dispatch, auth wiring | ~800 |
| `toolExecutor.ts` | All 59 tool implementations — file I/O, git, web, deploy, etc. | ~2,300 |
| `toolDefinitions.ts` | Local tool schemas (for Ollama fallback) | ~200 |
| `languageModelProvider.ts` | Multi-provider LLM discovery (cloud + local) | ~600 |
| `localLLMProvider.ts` | Ollama/LM Studio/llama.cpp local model discovery | ~300 |
| `chatViewProvider.ts` | Sidebar webview chat UI with streaming | ~900 |
| `authProvider.ts` | VS Code AuthenticationProvider for Resonant Genesis | ~180 |
| `authService.ts` | Token management, refresh, DSID binding | ~280 |
| `interactiveTerminal.ts` | Persistent terminal sessions with I/O capture | ~300 |
| `inlineCompletionProvider.ts` | Ghost text code completions (FIM) | ~190 |
| `locTracker.ts` | Lines-of-code tracking per session | ~160 |
| `updateChecker.ts` | Auto-update system with release notes | ~160 |
| `settingsPanel.ts` | Full settings webview panel | ~700 |
| `profileWebview.ts` | User profile and account management | ~250 |
| `agentProvider.ts` | VS Code Chat Participant integration | ~190 |
Note: Tool definitions and orchestration intelligence (system prompts, tool selection algorithm) live server-side in `RG_Axtention_IDE` (private repo). This client repo is public and contains no proprietary AI logic.
Tool definitions and selection are managed server-side. Tool execution happens locally on your machine — the AI can read your files, run your commands, and manage your git without any code leaving your machine unless you explicitly share it.
The IDE AI has access to both local tools (executed via Electron IPC on your machine) and cloud/platform tools (executed on the server). Total: 137+ tools across 17 categories.
| Tool | Description |
|---|---|
| `file_read` | Read a file, with optional offset/limit for large files |
| `file_write` | Create or overwrite a file with new content |
| `file_edit` | Replace an exact unique string in a file (surgical edits) |
| `multi_edit` | Atomic batch edits on one file — multiple find/replace operations in sequence |
| `file_list` | List directory contents with sizes and types |
| `file_delete` | Delete a file or directory |
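The "exact unique string" contract of `file_edit` can be illustrated with a small sketch (assumed semantics: the edit is rejected unless the search string occurs exactly once, which is what keeps the edit surgical):

```python
def file_edit(text: str, old: str, new: str) -> str:
    """Replace `old` with `new` only if `old` occurs exactly once."""
    count = text.count(old)
    if count == 0:
        raise ValueError("search string not found")
    if count > 1:
        # Ambiguous edits are refused rather than guessed at.
        raise ValueError(f"search string is ambiguous ({count} matches)")
    return text.replace(old, new)

source = "def add(a, b):\n    return a + b\n"
patched = file_edit(source, "return a + b", "return a + b  # checked")
```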
| Tool | Description |
|---|---|
| `grep_search` | Search text patterns across files via ripgrep — regex, case-insensitive, glob filters |
| `find_by_name` | Find files by name glob pattern, with depth limits and type filters |
| Tool | Description |
|---|---|
| `run_command` | Run any shell command (blocking or async) with working-directory control |
| `command_status` | Check background command status and read its output |
| Tool | Description |
|---|---|
| `git_clone` | Clone a Git repository to a local path |
| `git_branch` | Create, list, or switch Git branches |
| `git_merge` | Merge a branch into the current branch |
| `git_push` | Push commits to a remote |
| `git_pull` | Pull changes from a remote |
`terminal_create` · `terminal_send` · `terminal_send_raw` · `terminal_read` · `terminal_wait` · `terminal_list` · `terminal_close` · `terminal_clear`

`notebook_read` · `notebook_edit`

`ssh_run` · `deploy_web_app`

`trajectory_search` — semantic search over conversation history
Real-time ghost text code suggestions via FIM (Fill-in-the-Middle) across 30+ languages.
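A FIM request is assembled from the text before and after the cursor. A sketch using the CodeLlama-style infill template (the sentinel tokens differ per model, so treat this exact template as an assumption, not the extension's actual prompt):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Build a fill-in-the-middle prompt: the model generates the code
    that belongs between `prefix` and `suffix` (the "middle")."""
    # CodeLlama-style sentinels; other FIM models use different tokens.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Cursor sits after "return " — the model is asked to fill in the body.
prompt = build_fim_prompt(
    "def square(x):\n    return ",
    "\n\nprint(square(4))",
)
```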
| Tool | Description |
|---|---|
| `web_search` | Search the web for current information, news, articles, documentation |
| `fetch_url` | Fetch and read content from any URL |
| `read_webpage` | Read a webpage and extract clean structured content |
| `read_many_pages` | Read multiple web pages in parallel (max 5) |
| `reddit_search` | Search Reddit for discussions and recommendations |
| `image_search` | Search for images on the web |
| `news_search` | Search latest news articles |
| `places_search` | Search for businesses on Google Maps |
| `youtube_search` | Search YouTube for videos |
| `deep_research` | Deep multi-source research via Perplexity AI |
| `wikipedia` | Search and read Wikipedia articles |
| Tool | Description |
|---|---|
| `code_visualizer_scan` | Full AST scan — functions, classes, endpoints, imports, pipelines, dead code detection |
| `code_visualizer_functions` | List all functions and API endpoints in a project |
| `code_visualizer_trace` | Trace dependency flow from any node through the codebase |
| `code_visualizer_governance` | Architecture governance audit — reachability, drift detection, health score (0-100) |
| `code_visualizer_graph` | Get the full dependency graph as structured data |
| `code_visualizer_pipeline` | Auto-detect and visualize pipeline flows |
| `code_visualizer_filter` | Filter the graph by file path, node type, or keyword |
| `code_visualizer_by_type` | Get all nodes of a type — function, class, api_endpoint, service, file, import |
| Tool | Description |
|---|---|
| `memory_read` | Search the user's long-term memory (cross-session, cross-machine) |
| `memory_write` | Save information to long-term memory |
| `memory_search` | Deep keyword + semantic search through memories |
| `memory_stats` | Get memory usage stats |
| `hash_sphere_search` | Search Hash Sphere anchors (blockchain-verified memories) |
| `hash_sphere_anchor` | Create a new blockchain-verified memory point |
| `hash_sphere_list_anchors` | List all of the user's Hash Sphere anchors |
| `hash_sphere_hash` | Generate a Hash Sphere hash for content |
| `hash_sphere_resonance` | Check resonance between two pieces of content |
| Tool | Description |
|---|---|
| `agents_list` | List the user's AI agents |
| `agents_create` | Create a new AI agent |
| `agents_start` | Start/run an agent |
| `agents_stop` | Stop a running agent |
| `agents_status` | Get agent config and status |
| `agents_delete` | Delete an agent |
| `agents_update` | Update agent config — name, goal, model, tools, etc. |
| `agents_sessions` | List sessions/runs for an agent |
| `agents_session_steps` | Get execution steps for a session |
| `agents_session_trace` | Full execution trace — steps, waterfall, cost, safety flags |
| `agents_metrics` | Get agent run metrics |
| `agents_session_cancel` | Cancel a running session |
| `workspace_snapshot` | Full overview of the workspace |
| `run_agent` | Directly run an agent with a goal |
| `schedule_agent` | Set a recurring schedule for an agent |
| `present_options` | Present interactive options to the user |
| `architect_plan` | Analyze a request and produce a JSON blueprint for agents |
| `architect_create_agent` | Create a fully-configured agent from a blueprint |
| `architect_assign_goal` | Assign a goal to an agent |
| `architect_create_schedule` | Create a recurring schedule — cron or interval |
| `architect_create_webhook` | Create a webhook trigger for an agent |
| `architect_set_autonomy` | Set autonomy mode (governed, supervised, unbounded) |
| `architect_list_available_tools` | List all tools available to assign to agents |
| `architect_list_providers` | List available LLM providers and models |
| Tool | Description |
|---|---|
| `generate_image` | Generate an AI image from text (DALL-E) |
| `generate_audio` | Generate speech from text (TTS) |
| `generate_music` | Generate music from a text description |
| Tool | Description |
|---|---|
| `gmail_send` | Send email via Gmail |
| `gmail_read` | Read the recent Gmail inbox |
| `slack_send` | Send a Slack message |
| `slack_read` | Read Slack channel messages |
| `google_calendar` | Google Calendar: list/create events, check availability |
| `google_drive` | Google Drive: list/search/read/create files |
| `figma` | Figma: list projects, get a file, inspect components |
| `sigma` | Sigma Computing dashboards and analytics |
| `send_email` | Send email via SendGrid with HTML support |
| Tool | Description |
|---|---|
| `github_create_repo` | Create a GitHub repository |
| `github_list_repos` | List GitHub repositories |
| `github_list_files` | List files in a GitHub repo |
| `github_download_file` | Download a file from a GitHub repo |
| `github_upload_file` | Upload a file to a GitHub repo |
| `github_pull_request` | Create or list pull requests |
| `github_issue` | Create or list issues |
| `github_commit` | Get commits in a repository |
| `github_comment` | Comment on a GitHub issue or PR |
| Tool | Description |
|---|---|
| `sp_state` | Get the full State Physics universe — nodes, edges, metrics, invariants |
| `sp_reset` | Reset the universe to its initial state |
| `sp_nodes` | List all nodes in the Hash Sphere universe |
| `sp_metrics` | Get universe metrics — node count, edge count, entropy |
| `sp_identity` | Create an identity node in the Hash Sphere universe |
| `sp_simulate` | Run N physics simulation steps |
| `sp_galaxy` | Create a galaxy-scale simulation |
| `sp_demo` | Seed the universe with demo data |
| `sp_asymmetry` | Get the asymmetry score — trust variance and Gini |
| `sp_physics_config` | Update physics engine parameters |
| `sp_entropy_config` | Update entropy engine parameters |
| `sp_entropy_toggle` | Enable or disable entropy injection |
| `sp_entropy_perturbation` | Inject a perturbation event |
| `sp_agent_spawn` | Spawn an autonomous agent in the universe |
| `sp_agent_step` | Step the active agent once |
| `sp_agent_kill` | Kill the active agent |
| `sp_agents_spawn` | Spawn multiple agents |
| `sp_agents_kill_all` | Kill all autonomous agents |
| `sp_experiment` | Set up a named experiment — zero_agent, stress_test, long_run |
| `sp_memory_cost` | Set the memory cost multiplier |
| `sp_metrics_record` | Record a metrics snapshot to history |
| Tool | Description |
|---|---|
| `create_rabbit_post` | Create a post in a Rabbit community |
| `list_rabbit_communities` | List all Rabbit communities |
| `list_rabbit_posts` | List Rabbit posts |
| `rabbit_vote` | Vote on a Rabbit post/comment |
| `create_rabbit_community` | Create a new Rabbit community |
| `get_rabbit_community` | Get a Rabbit community by slug |
| `search_rabbit_posts` | Search Rabbit posts by keyword |
| `get_rabbit_post` | Get a specific Rabbit post by ID |
| `delete_rabbit_post` | Delete a Rabbit post (owner only) |
| `create_rabbit_comment` | Comment on a Rabbit post |
| `list_rabbit_comments` | List comments on a Rabbit post |
| `delete_rabbit_comment` | Delete a Rabbit comment (owner only) |
| Tool | Description |
|---|---|
| `execute_code` | Run code in a Docker sandbox (Python, JavaScript, Bash) |
| `http_request` | HTTP request to internal platform APIs |
| `external_http_request` | HTTP request to any external URL |
| `dev_tool` | Bridge to the ED service for file ops, git, docker, testing |
| Tool | Description |
|---|---|
| `weather` | Get current weather and a 3-day forecast |
| `stock_crypto` | Get real-time stock or crypto prices |
| `generate_chart` | Generate a chart image from data (bar, line, pie, radar, scatter) |
| `visualize` | Generate an SVG diagram inline in chat |
| `get_current_time` | Get the current date, time, and timezone |
| `get_system_info` | Get platform system info |
| Tool | Description |
|---|---|
| `platform_api_search` | Search ~383 platform API endpoints by keyword or category |
| `platform_api_call` | Call any authenticated platform API endpoint directly |
| Tool | Description |
|---|---|
| `create_tool` | Create a custom HTTP tool stored in the DB. Set `is_shared=true` to make it platform-wide |
| `list_tools` | List the user's custom tools plus all shared platform tools |
| `delete_tool` | Delete a custom tool |
| `update_tool` | Update an existing custom tool |
| `auto_build_tool` | An LLM designs, validates (AST safety scan), and registers a new tool at runtime. Describe what the tool should do and it will be auto-created |
| `check_tool_exists` | Check whether a capability exists as a tool. If not found, suggests using `auto_build_tool` |
The AI assistant can create its own tools at runtime. If it needs a capability that doesn't exist among the 137+ built-in tools, it can design, validate, and register a new tool — and that tool immediately becomes available to the entire platform.
- Check: The AI calls `check_tool_exists` to search the 137+ built-in tools plus all custom/shared tools
- Design: If not found, the AI calls `auto_build_tool` with a natural-language description
- LLM Design: An LLM generates the full tool spec (name, endpoint, method, params, category)
- Safety Scan: AST validation blocks SSRF, localhost, metadata endpoints, and invalid schemas
- Register: The tool is stored in PostgreSQL, cached, and immediately available
- Platform-Wide: With `is_shared=true` (the default), ALL users and agents across the platform can use it

- No restart needed — the tool is usable in the same conversation
- DB-persisted — survives container restarts
- Category auto-assigned — or use a custom category string
- Shared or private — `is_shared=true` for platform-wide, `false` for user-only
- Full audit trail — who created it, when, what category, active status
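The endpoint checks inside the safety scan (step 4 above) can be sketched like this (the blocked-host list is illustrative; the real validator also AST-scans the generated tool spec):

```python
from urllib.parse import urlparse

# Illustrative SSRF deny-list: loopback addresses plus the standard
# cloud metadata endpoint. A real validator would cover more cases.
BLOCKED_HOSTS = {"localhost", "127.0.0.1", "0.0.0.0", "169.254.169.254"}

def endpoint_is_safe(url: str) -> bool:
    """Reject SSRF-prone endpoints before registering a custom tool."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = (parsed.hostname or "").lower()
    if host in BLOCKED_HOSTS or host.endswith(".internal"):
        return False
    return True
```

A tool spec whose endpoint fails this check is never registered, so neither the requesting user nor any other platform user can invoke it.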
- OpenAI — GPT-4o, GPT-4o-mini
- Anthropic — Claude 3.5 Sonnet, Claude 3 Opus
- Groq — Llama 3.3 70B (ultra-fast inference)
- Google — Gemini Pro, Gemini Flash
- BYOK — Bring Your Own Key for any provider
- Ollama — Any model (llama3.1, codellama, deepseek-coder, qwen2.5-coder, etc.)
- LM Studio — OpenAI-compatible API
- llama.cpp — Direct server connection
- LocalAI — Multi-model local server
- vLLM — High-performance local inference
The AI automatically selects the best available provider, or you can manually choose via the model picker. BYOK users get priority routing to their preferred provider.
One of the most important features under the hood is the intelligent LLM fallback chain. When you send a prompt, the system doesn't just try one provider and give up — it executes a multi-step resilience pipeline that ensures your request always gets answered:
User sends prompt
│
▼
┌─────────────────────────────────┐
│ 1. Try user's preferred BYOK │ ← Your own API key (e.g. Claude Sonnet)
│ provider + model │
└──────────────┬──────────────────┘
│ If 401/429/500/timeout...
▼
┌─────────────────────────────────┐
│ 2. Try user's other BYOK keys │ ← e.g. OpenAI, Groq, Google keys
│ (round-robin available keys)│
└──────────────┬──────────────────┘
│ If all BYOK keys fail...
▼
┌─────────────────────────────────┐
│ 3. Fall back to platform pool │ ← Resonant Genesis shared API keys
│ (Groq → OpenAI → Anthropic) │
└──────────────┬──────────────────┘
│ Always succeeds (unless all providers are down)
▼
Response streamed back
Why this matters:
- API keys expire or hit rate limits — instead of showing an error, the system automatically tries the next available provider
- You stay in flow — no need to manually switch models when one provider has an outage
- BYOK keys are always tried first — your preferred provider gets priority; the platform pool is only a safety net
- Full transparency — the response includes `fallback` SSE events showing exactly which providers were tried and which one succeeded
BYOK Direct — User's own key succeeds on first try
Fallback Chain — BYOK key fails, system tries next available BYOK key
Every fallback attempt is logged with provider name, model, and HTTP status. You can see the full chain in the response stats panel.
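The three-stage chain above can be sketched as follows (provider names and the `call` function are stand-ins; real failures would be HTTP 401/429/500 responses or timeouts):

```python
def call_with_fallback(prompt, preferred, byok_keys, platform_pool, call):
    """Try the preferred BYOK provider, then remaining BYOK keys,
    then the platform pool. Every attempt is logged."""
    attempts = []
    chain = [preferred] + [k for k in byok_keys if k != preferred] + platform_pool
    for provider in chain:
        try:
            result = call(provider, prompt)
            attempts.append((provider, "ok"))
            return result, attempts
        except Exception as exc:  # 401/429/500/timeout in the real system
            attempts.append((provider, str(exc)))
    raise RuntimeError("all providers failed")

def flaky_call(provider, prompt):
    """Simulated LLM call: both BYOK keys are rate-limited today."""
    if provider in ("anthropic-byok", "openai-byok"):
        raise RuntimeError("429")
    return f"{provider}: answer"

result, log = call_with_fallback(
    "hi", "anthropic-byok", ["anthropic-byok", "openai-byok"],
    ["groq-pool", "openai-pool"], flaky_call)
```

Here both BYOK keys fail with 429, so the platform pool answers; `log` records the full chain, mirroring the `fallback` SSE events.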
The built-in Code Visualizer is a full AST-based static analysis engine that runs entirely on your machine using Python. When the AI needs to understand your codebase architecture, it calls the Code Visualizer tools automatically — no cloud services, no code uploads, everything stays local.
- AST Parsing — Full abstract syntax tree analysis for Python (using the `ast` module), JavaScript, and TypeScript (regex-based)
- Node Discovery — Services, files, functions, classes, API endpoints, database connections, external service calls
- Connection Mapping — Imports, function calls, API calls, database queries, HTTP requests, class inheritance
- Pipeline Detection — Automatically discovers user_registration, login, chat_flow, billing, agent_execution pipelines across the full stack
- Dead Code Detection — Unreachable functions, unused imports, orphaned files classified as LIVE, Dormant, Experimental, Deprecated, or Invalid
- Governance Engine — Reachability contracts, forbidden dependency rules, architecture drift scoring (0-100), CI-ready enforcement output
- SAST — Security vulnerability patterns, forbidden dependency checks, trust-tier compliance
The AI has access to 8 specialized analysis tools:
| Tool | What It Does |
|---|---|
| `code_visualizer_scan` | Full AST scan — discovers all services, functions, classes, endpoints, imports, pipelines |
| `code_visualizer_functions` | List all functions and API endpoints with file paths, line numbers, decorators, routes |
| `code_visualizer_trace` | Trace a specific function — incoming callers, outgoing calls, full dependency chain |
| `code_visualizer_governance` | Run governance analysis — reachability, forbidden deps, drift score, CI pass/fail |
| `code_visualizer_graph` | Get the full node + connection graph for visualization |
| `code_visualizer_pipeline` | Discover and map multi-service pipelines (e.g. the login flow across auth → gateway → chat) |
| `code_visualizer_filter` | Filter nodes by file pattern, service, or custom criteria |
| `code_visualizer_by_type` | Get all nodes of a specific type (functions, classes, endpoints, imports, etc.) |
User asks: "Analyze this project's architecture"
│
▼
┌─────────────────────────────────────┐
│ AI selects code_visualizer_scan │
│ tool from 59 available tools │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Client runs: python3 cv_cli.py │ ← Runs locally on YOUR machine
│ scan /path/to/project │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ analyzer.py parses every .py/.js/ │
│ .ts file using AST parsing │
│ │
│ Extracts: nodes, connections, │
│ services, pipelines, dead code │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ JSON report returned to AI │
│ (up to 12K chars, smart-summarized)│
│ │
│ AI presents findings in natural │
│ language with actionable insights │
└─────────────────────────────────────┘
The Code Visualizer source lives in `extensions/resonant-ai/code_visualizer/` — `analyzer.py` (1,034 lines), `governance.py` (384 lines), and `cv_cli.py` (155 lines). Licensed under the Resonant Genesis Source Available License.
Other AI IDEs make the LLM read files one at a time, burning thousands of tokens and still missing the big picture. Resonant IDE scans the entire codebase structurally and delivers a compressed architectural map:
Traditional AI IDE: Resonant IDE:
───────────────── ─────────────
User: "Explain this project" User: "Explain this project"
→ AI reads file1.py (500 tokens) → AI runs code_visualizer_scan
→ AI reads file2.py (800 tokens) → Gets: 15 services, 342 functions,
→ AI reads file3.py (600 tokens) 47 endpoints, 6 pipelines, 12
→ AI reads file4.py (400 tokens) broken connections (200 tokens)
→ ... (20 more files) → AI already understands architecture
→ Total: 15,000+ tokens → Total: 200 tokens
→ Still doesn't see connections → Sees full dependency graph
Beyond the core analysis listed above, the Code Visualizer also provides:
| Capability | Description | Example Prompt |
|---|---|---|
| Execution Tracing | Trace a specific function's full dependency chain — who calls it (incoming) and what it calls (outgoing), up to configurable depth | "Trace the authentication flow" |
| File Comparison | Compare node graphs between different analysis runs to detect structural changes over time | "What changed architecturally since last scan?" |
| Code Migration Heatmap | Track how files and connections evolve across multiple scans — see the timeline of modifications and identify migration hotspots | "Show me the hot map of recent changes" |
| Broken Connection Detection | Identify imports that don't resolve, API calls to missing endpoints, database queries to non-existent tables | "Find broken imports in the codebase" |
| Service Boundary Analysis | Map which files belong to which service, detect cross-service dependencies, identify coupling hotspots | "Are these services properly isolated?" |
| Graph Filtering | Filter by file path, node type, keyword, or service to focus on specific subsystems | "Show only the auth service functions" |
| GitHub Annotations | Export violations as GitHub-compatible annotation format for CI integration | CI/CD pipeline enforcement |
| Language | Parser | What It Extracts |
|---|---|---|
| Python | `ast` module (full AST) | Functions, classes, decorators, imports, HTTP calls, DB queries, async/await, inheritance |
| JavaScript | Regex-based | Functions, arrow functions, classes, imports (ES6 + CommonJS), fetch/axios/HTTP calls |
| TypeScript | Regex-based | Same as JavaScript, plus type annotations preserved in metadata |
| JSX/TSX | Regex-based | React components detected as functions/classes |
The full analysis JSON can be 50,000+ characters for a large codebase. But the AI only receives a human-readable summary (200-500 tokens). The full detailed report stays in your IDE chat for you to explore:
```
Code Visualizer (scan) completed.
Services: 12
Files analyzed: 847
Functions: 2,341
Endpoints: 433
Connections: 5,672
Broken connections: 23
Service names: gateway, auth_service, chat_service, memory_service, ...
Top functions:
  - route_query (multi_ai_router.py:45)
  - authenticate (auth.py:12)
API endpoints:
  - POST /api/v1/chat/message (resonant_chat.py)
  - GET /api/v1/auth/me (user_routes.py)
Full detailed report is shown to the user in the IDE.
```
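A summary like the one above can be produced by a small compression step over the full report (the report schema here is assumed for illustration; the real summarizer lives server-side):

```python
def summarize_scan(report: dict, max_items: int = 2) -> str:
    """Compress a full Code Visualizer report into a short,
    AI-facing summary so only hundreds of tokens reach the LLM."""
    lines = [
        "Code Visualizer (scan) completed.",
        f"Services: {len(report['services'])}",
        f"Functions: {len(report['functions'])}",
        f"Broken connections: {report['broken_connections']}",
        "Top functions:",
    ]
    for fn in report["functions"][:max_items]:
        lines.append(f"  - {fn['name']} ({fn['file']}:{fn['line']})")
    return "\n".join(lines)

# Hypothetical (tiny) report; a real one can exceed 50,000 characters.
report = {
    "services": ["gateway", "auth_service"],
    "functions": [
        {"name": "authenticate", "file": "auth.py", "line": 12},
        {"name": "route_query", "file": "router.py", "line": 45},
    ],
    "broken_connections": 1,
}
summary = summarize_scan(report)
```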
Every conversation, every code change, every decision is optionally stored in the Hash Sphere — a deterministic hashing system that maps content to 3D coordinates for semantic retrieval. Memories persist across sessions and sync to the cloud when authenticated.
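The deterministic content-to-coordinate idea can be illustrated like this (the actual Hash Sphere scheme is not published here and presumably encodes semantics; this sketch only shows how hashing yields stable 3D coordinates):

```python
import hashlib

def content_to_coords(content: str) -> tuple:
    """Map content to a repeatable point in [0, 1)^3 via SHA-256.

    Same content always maps to the same point, which is the property
    that makes cross-session retrieval possible.
    """
    digest = hashlib.sha256(content.encode("utf-8")).digest()
    # Three 8-byte slices of the digest -> three floats in [0, 1).
    return tuple(
        int.from_bytes(digest[i:i + 8], "big") / 2**64
        for i in range(0, 24, 8)
    )

a = content_to_coords("refactor auth flow")
b = content_to_coords("refactor auth flow")  # identical input, identical point
```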
Your identity is cryptographically bound to the Ethereum Base Sepolia L2 blockchain. Every action in the IDE is traceable to your verified identity, creating an immutable audit trail of your development activity.
Resonant IDE follows the same thin-client + server-orchestrated agentic loop architecture pioneered by tools like Windsurf Cascade, but with fundamental improvements across every layer. If you've used Cascade, Cursor, or Copilot Chat, Resonant IDE will feel familiar — and then show you what's been missing.
| Capability | Resonant IDE | Windsurf / Cursor |
|---|---|---|
| Loop depth control | Configurable max tool loops (1 → unlimited) per session | Fixed or limited loop depth |
| Smart context passing | Server compresses Code Visualizer results, passes summaries between loops — only actionable data reaches the LLM | Full tool output forwarded, burning tokens on noise |
| Token burn reduction | AST summaries replace raw file reads; CV results are 50-200 lines instead of 5,000+ | No built-in static analysis; LLM reads entire files |
| Local + server tool split | 59 tools execute locally (zero latency), server handles orchestration only | All tools run in cloud sandbox or limited local |
| Provider flexibility | 11 providers: 5 cloud (OpenAI, Anthropic, Groq, Google, DeepSeek) + 5 local (Ollama, LM Studio, llama.cpp, LocalAI, vLLM) + BYOK | Typically 1-3 providers, no local LLM support |
| Fallback chain | Automatic BYOK → other BYOK keys → platform pool, with full transparency | Single provider, manual switching on failure |
| Traceability | Every loop iteration logged with provider, model, token count, tool calls, duration | Minimal or no execution tracing |
| Transparency | Full SSE event stream: `thinking`, `text`, `execute_tool`, `fallback`, `stats`, `done` | Opaque response generation |
| Cost control | Platform credits, BYOK priority routing, per-session token tracking | Subscription-based, no per-session visibility |
| Customization | Connect any local LLM, set preferred provider, configure loop depth, choose models per task | Fixed model selection |
When the AI runs a Code Visualizer scan, the full report (thousands of nodes, connections, pipelines) stays local in the IDE chat for you to see. But the server only receives a compressed summary — service names, function counts, endpoint counts, violation highlights. This means:
- Loop 1: AI scans your codebase → gets architectural overview (200 tokens instead of 12,000)
- Loop 2: AI traces a specific function → gets caller/callee chain (focused, minimal)
- Loop 3: AI runs governance check → gets violations and drift score
- Result: 3 loops, full codebase understanding, ~500 tokens of context instead of ~30,000
This is why Resonant IDE can run deeper agentic loops without hitting context limits — the AI knows more while reading less.
Every tool runs on your machine. File reads, grep searches, git operations, terminal commands, Code Visualizer scans — all local. The server only sees:
- Your prompt
- Compressed tool results
- Your BYOK keys (encrypted, never stored)
No code leaves your machine unless you explicitly share it. No cloud sandbox. No file upload. Full privacy, full speed.
See more: The full tool list (59 tools across 11 categories) is documented above in 59 Tools (11 Categories).
Resonant IDE connects to the RARA (Resident Autonomous Runtime Agent) system — a physics-inspired governance engine that treats your running platform as a physical system with measurable properties: entropy, energy, mass, collapse risk. This isn't a metaphor — it's a deterministic simulation that predicts failures before they happen.
State Physics models your platform as a physical system where:
- Services are nodes with mass (code size), energy (request throughput), and trust scores
- Connections between services are edges with measured latency and failure rates
- Agents (AI or human) have value scores that decay on failure and grow on success
- Entropy measures system disorder — high entropy means things are drifting apart
- Collapse risk predicts cascading failures before they happen
The Invariant SIM enforces 17 invariants across three classes:
Graph-level constraints verified via the AST Code Visualizer:
| Invariant | What It Checks | Severity |
|---|---|---|
| Route Reachability | Every public route reaches a handler: ∀ route R → ∃ handler H : path(R → H) | CRITICAL |
| No Orphan Handlers | Every handler has at least one route pointing to it | HIGH |
| Auth Boundary | No unauthenticated path can reach privileged resources | CRITICAL |
| No Execution Cycles | No circular call chains without a circuit breaker | HIGH |
| Capability Isolation | Agent nodes cannot directly depend on core service internals | CRITICAL |
| File Integrity | Modified files must be syntactically valid (AST-parseable) | HIGH |
| Dependency Resolution | All imports must resolve to existing modules | MEDIUM |
Intent and confidence constraints:
| Invariant | What It Checks | Severity |
|---|---|---|
| Confidence Threshold | AI mutation confidence must exceed 0.6 before execution | HIGH |
| Rationale Present | Every code change must have a non-empty explanation | MEDIUM |
| Intent Alignment | The change must match the declared capability | HIGH |
| Scope Containment | Changes must not exceed declared file/service boundaries | CRITICAL |
| Reversibility | Every mutation must be rollback-capable | HIGH |
Rate limiting and blast radius constraints:
| Invariant | What It Checks | Severity |
|---|---|---|
| Rate Limit | Max mutations per hour/day not exceeded | HIGH |
| Blast Radius | Number of affected services per mutation stays within threshold | HIGH |
| Cooldown Period | Minimum time between mutations to the same file/service | MEDIUM |
| Rollback Frequency | If rollbacks are happening too often, something is wrong | HIGH |
| Failure Circuit Breaker | 3+ consecutive failures suspends the capability | CRITICAL |
When the AI makes code changes through the agentic loop, the Invariant SIM can:
- Pre-mutation check — Before writing a file, verify structural invariants (no broken imports, no auth boundary violations)
- Blast radius prediction — Estimate how many services will be affected by a change
- Confidence gating — If the AI's confidence is below threshold, require human approval
- Automatic rollback — If post-mutation checks fail, instantly restore the previous state
- Circuit breaking — If an agent keeps failing, automatically suspend its capabilities
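A minimal sketch of this gating logic, combining the confidence threshold (0.6), the rate limit, and the failure circuit breaker (3+ consecutive failures) from the invariant tables above (return values and the per-hour limit are illustrative):

```python
def gate_mutation(confidence: float, mutations_this_hour: int,
                  consecutive_failures: int,
                  max_per_hour: int = 10) -> str:
    """Decide what happens to a proposed AI code change.

    Returns 'execute', 'needs_approval', or 'blocked'. Checks are
    ordered from hardest stop to softest gate.
    """
    if consecutive_failures >= 3:
        return "blocked"            # failure circuit breaker (CRITICAL)
    if mutations_this_hour >= max_per_hour:
        return "blocked"            # rate limit (HIGH)
    if confidence < 0.6:
        return "needs_approval"     # confidence threshold (HIGH)
    return "execute"
```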
The Physics Bridge translates measured system state into governance actions:
Physics State (measured) Governance Action (automatic)
──────────────────────── ────────────────────────────
Collapse risk > 0.8 → EMERGENCY STOP (kill switch)
Invariant violations > 0 → Block further mutations
Entropy > threshold → Warn + require human approval
Agent trust < 0.3 → Revoke agent capabilities
Energy spike detected → Rate limit mutations
Mass imbalance → Flag architectural drift
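The mapping above, written as a first-match rule function (the shape of the state dictionary is assumed for illustration):

```python
def governance_action(state: dict) -> str:
    """Map measured physics state to the first matching governance action,
    ordered from most to least severe."""
    if state["collapse_risk"] > 0.8:
        return "EMERGENCY_STOP"        # kill switch
    if state["invariant_violations"] > 0:
        return "BLOCK_MUTATIONS"
    if state["entropy"] > state["entropy_threshold"]:
        return "REQUIRE_APPROVAL"      # warn + human in the loop
    if state["agent_trust"] < 0.3:
        return "REVOKE_CAPABILITIES"
    return "OK"

healthy = {"collapse_risk": 0.1, "invariant_violations": 0,
           "entropy": 0.5, "entropy_threshold": 1.0, "agent_trust": 0.9}
```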
- Simulate user flows through connected APIs before deploying — trace the path from login → chat → memory → billing and verify every connection
- Predict overload points — find services with high connection density that will fail under load
- Detect migration risks — before refactoring, see which invariants will break and what the blast radius will be
- Enforce architecture rules — forbidden dependencies (e.g., frontend → backend internals) are caught at the graph level, not in code review
- CI-ready governance — export invariant results as GitHub annotations for automated architecture enforcement
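Graph-level enforcement of forbidden dependencies, as described in the list above, can be sketched as a scan over the dependency graph's edges. The layer assignments and service names below are invented for illustration:

```python
# Sketch: forbidden layer dependencies (e.g. frontend -> backend
# internals) caught by scanning the dependency graph, not code review.
LAYER = {
    "web-ui": "frontend",
    "gateway": "api",
    "auth-core": "backend-internals",
}
FORBIDDEN = {("frontend", "backend-internals")}

def forbidden_edges(edges):
    """Return every dependency edge whose layer pair is forbidden."""
    return [(a, b) for a, b in edges
            if (LAYER.get(a), LAYER.get(b)) in FORBIDDEN]

deps = [("web-ui", "gateway"), ("web-ui", "auth-core")]
print(forbidden_edges(deps))  # → [('web-ui', 'auth-core')]
```

Because the check runs on the graph rather than on source text, it catches violations regardless of how the dependency is expressed in code.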
Source: The Invariant SIM runs as a standalone service, `RG_Internal_Invarients_SIM`: 29 Python modules including `invariant_engine.py`, `invariant_classes.py`, `physics_bridge.py`, `kill_switch.py`, `mutation_executor.py`, `governance_engine.py`, `capability_engine.py`, `compliance.py` (EU AI Act + SOC2), and `cryptographic_receipt.py` (tamper-proof mutation receipts).
- Node.js 22.22.0 (exact version required; see `.nvmrc`). Node 23 or later WILL NOT WORK: native modules (tree-sitter) fail to compile against newer V8 headers.
- npm 10.x or later (comes with Node 22)
- Python 3.10+ (for native module compilation and SAST analysis)
- Xcode Command Line Tools (macOS) or build-essential (Linux) — required for native modules
- A free account at dev-swat.com (required for AI features)
⚠️ Node version matters. If `node -v` shows anything other than v22.x, the build will fail. Use nvm (below) to install the correct version.
```shell
bash -lc 'set -e; export NVM_DIR="$HOME/.nvm"; [ -s "$NVM_DIR/nvm.sh" ] || curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash; . "$NVM_DIR/nvm.sh"; [ -d RG_IDE ] || git clone https://github.com/DevSwat-ResonantGenesis/RG_IDE.git; cd RG_IDE; nvm install; nvm use; npm install; cd extensions/resonant-ai && npm install && npx tsc -p tsconfig.json && cd ../..; npm run compile; ./scripts/code.sh'
```

Important: always launch with `./scripts/code.sh`. Do not open the raw Electron bundle directly (`.build/electron/*.app`), or you'll get an empty Electron shell.
```shell
# 1. Install nvm (Node Version Manager) — skip if you already have it
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.nvm/nvm.sh

# 2. Clone the repo
git clone https://github.com/DevSwat-ResonantGenesis/RG_IDE.git
cd RG_IDE

# 3. Install and use the EXACT Node version required (reads .nvmrc)
nvm install
nvm use

# Verify — MUST show v22.22.0
node -v

# 4. Install dependencies (takes 2-5 minutes)
npm install

# 5. Build the Resonant AI extension
cd extensions/resonant-ai && npm install && npx tsc -p tsconfig.json && cd ../..

# 6. Compile the full IDE (takes ~2 minutes)
npm run compile

# 7. Launch Resonant IDE
./scripts/code.sh
```

Every time you open a new terminal to work on RG_IDE, run `nvm use` inside the project directory first. It reads `.nvmrc` and switches to the correct Node version automatically.
The `scripts/code.sh` launcher will:
- Download the correct Electron binary (first run only)
- Verify compilation output exists
- Sync built-in extensions
- Launch the IDE
"Cannot find module out/main.js" The TypeScript compilation didn't run or failed. Fix:
rm -rf out
npm run compile
./scripts/code.shCompilation fails / tree-sitter build errors / ternary-stream not found
You're using the wrong Node.js version. This is the #1 cause of build failures:
node -v # MUST be v22.22.0 — NOT v23, v24, or v25
nvm install # installs the version from .nvmrc
nvm use # switches to it
rm -rf node_modules
npm install
npm run compilenpm warnings about "Unknown project config"
These are cosmetic warnings from npm 10+/11+ about .npmrc keys (disturl, target, runtime, etc.). These keys are required by the build system — do not remove them. The warnings are harmless and do not affect the build.
Native module build failures
Ensure you have C++ build tools installed:

```shell
# macOS
xcode-select --install

# Ubuntu/Debian
sudo apt install build-essential python3
```

Note: Resonant IDE is currently available as a manual build from source. Pre-built binaries (.dmg, .AppImage, .exe) are coming soon. A registered account at dev-swat.com is required to use the AI assistant and platform features.
Resonant IDE connects to the Resonant Genesis platform — a governed execution system for AI agents with 30+ microservices:
| Service | What It Does |
|---|---|
| Gateway | API routing, auth verification, rate limiting |
| Auth Service | JWT tokens, OAuth2, 2FA, DSID binding |
| Chat Service | Multi-provider AI routing, skills, streaming |
| Agent Engine | Autonomous agent execution, planning, tools |
| Memory Service | Hash Sphere storage, semantic retrieval |
| Blockchain Node | Base Sepolia identity registry, memory anchors |
| SAST & Architecture Engine | AST analysis, SAST, dependency mapping, pipeline detection |
| Billing Service | Credits, Stripe, usage tracking |
| Marketplace | Agent templates, extensions, publishing |
| IDE Service | LOC tracking, updates, analytics |
The `platform_api_search` and `platform_api_call` tools give the AI direct access to the entire platform API — create agents, manage teams, query memories, interact with blockchain, publish to marketplace, and more.
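From the agent's perspective, these two tools form a search-then-call pattern: discover a matching endpoint, then invoke it. The sketch below is purely illustrative; the endpoint catalog, descriptions, and function signatures are invented, not the platform's actual tool schemas:

```python
# Hypothetical sketch of the search-then-call pattern. Endpoint paths
# and descriptions here are invented for illustration only.
def platform_api_search(query, catalog):
    """Return catalog entries whose description mentions the query."""
    return [ep for ep in catalog
            if query.lower() in ep["description"].lower()]

catalog = [
    {"path": "/agents",   "description": "Create and manage agents"},
    {"path": "/memories", "description": "Query Hash Sphere memories"},
]

hits = platform_api_search("agents", catalog)
print(hits[0]["path"])  # → /agents
# A platform_api_call step would then POST to the discovered path
# through the governed execution pipeline.
```

Searching first keeps the AI from guessing endpoint paths: it only calls routes the catalog actually advertises.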
We welcome contributions! Please read our Contributing Guide before submitting pull requests.
- Fork this repository
- Create a feature branch: `git checkout -b feature/my-feature`
- Make your changes
- Test locally: build and run the IDE
- Submit a pull request
- AI Tools — Add new tools in `extensions/resonant-ai/src/toolDefinitions.ts` and `toolExecutor.ts`
- Language Support — Improve inline completions for specific languages
- Local LLM — Add support for new local inference servers
- SAST & Architecture — Extend analysis to new languages, add security rules
- UI/UX — Improve the chat interface, settings panel, profile page
- Documentation — Improve docs, add tutorials, fix typos
Resonant IDE and the entire Resonant Genesis platform were built by Louie Nemesh — an AI architect who started this project on November 11, 2025 with a singular vision: to build the world's most comprehensive AI governance platform.
"I didn't write a single line of code myself. I architected, I directed, I made every decision — but the code was written by AI. This is what the future looks like." — Louie Nemesh, Founder & AI Architect
- 30+ production microservices
- 433 API endpoints
- 137 AI tools across the platform
- 59 local IDE tools
- 53 React UI components
- 4 AI providers (OpenAI, Anthropic, Groq, Google)
- 3 smart contracts on Base Sepolia (Ethereum L2)
- 0 lines of human-written code
- Platform: dev-swat.com
- AI Portal: resonantgenesis.ai
- GitHub: github.com/DevSwat-ResonantGenesis
- Feedback: dev-swat.com/feedback
- Documentation: dev-swat.com/docs
Copyright (c) 2025-2026 Resonant Genesis / DevSwat. Founded and built by Louie Nemesh.
Licensed under the Resonant Genesis Source Available License.
- View & study: Free for everyone
- Download & use: Free with platform registration
- Contribute: Pull requests welcome
- Commercial use: Contact us
This project is built on the VS Code Open Source foundation (MIT licensed). The Resonant AI extension and all Resonant Genesis-specific modifications are covered by the Resonant Genesis Source Available License.
Built on Resonant Genesis technology by Louie Nemesh
The future of development is AI-native.