
Commit 6d87c15

Roo Code and ruvnet committed

docs: Audit, validate links, remove legacy, finalize

- Update CUDA kernel count from 110 to 92 (actual __global__ count across 11 .cu files, 6585 LOC)
- Update agent skill count from 71 to 83 (86 directories minus 3 deprecated)
- Update markdown file count from 285/214 to 267
- Rename VisionFlow to VisionClaw in docs/README.md and use-cases
- Convert ASCII art diagram (Insight Ingestion Loop) to Mermaid in README.md
- Convert ASCII art diagram (filter data flow) to Mermaid in filtering-nodes.md
- Fix broken link: Contributing Guide -> docs/CONTRIBUTING.md
- Fix broken links: rest-api.md -> rest-api-reference.md (CONTRIBUTING, reference/README, navigation-guide)
- Fix broken links: binary-websocket.md -> websocket-binary-v2.md (glossary, performance-benchmarks, navigation-guide)
- Fix broken link: vircadia-xr-complete-guide -> vr-development.md
- Fix broken link: stress-majorization.md -> stress-majorization-guide.md
- Fix broken link: docker-environment path in docs/README.md
- Remove broken links to non-existent files (Neo4j ADR, implementation-status, code-quality-status, physics-implementation)
- Remove broken links to archived/deleted files in ANTIGRAVITY, comfyui-sam3d-setup, SKILLS, complete-data-flows
- Remove completed TODO section (Neo4j filter persistence) from filtering-nodes.md
- Remove dead case study links from use-cases/quick-reference.md
- Fix XR/VR TODO label in navigation-guide.md
- Update dates to 2026-04-03
- Fix GitHub URLs from VisionFlow to VisionClaw

Co-Authored-By: claude-flow <ruv@ruv.net>
1 parent 303b091 commit 6d87c15

15 files changed: 78 additions & 154 deletions
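The refreshed counts in the commit message (92 `__global__` kernels, 267 markdown files) were re-derived from the working tree. A hedged sketch of that audit in Python; the helper name is illustrative, and it counts `__global__` occurrences in every `.cu` file and `.md` files anywhere under the root, which may differ from the commit's exact scoping:

```python
import pathlib
import re

def audit_counts(repo: str = ".") -> dict:
    """Recompute the figures cited in the commit message (illustrative helper)."""
    root = pathlib.Path(repo)
    cu_files = list(root.rglob("*.cu"))
    # Kernel count = occurrences of the __global__ qualifier, the method the
    # commit message says it used for the 110 -> 92 correction.
    kernels = sum(
        len(re.findall(r"\b__global__\b", f.read_text(errors="ignore")))
        for f in cu_files
    )
    cu_loc = sum(len(f.read_text(errors="ignore").splitlines()) for f in cu_files)
    md_files = sum(1 for _ in root.rglob("*.md"))
    return {
        "kernels": kernels,
        "cu_files": len(cu_files),
        "cu_loc": cu_loc,
        "markdown_files": md_files,
    }
```

Run from the repository root, this yields the kernel, file, LOC, and markdown tallies in one pass; a raw `grep -c __global__` would over-count if kernels were mentioned in comments, so any real audit should eyeball the matches.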

README.md

Lines changed: 29 additions & 24 deletions
@@ -10,7 +10,7 @@
 [![Rust](https://img.shields.io/badge/Rust-2021-orange?style=flat-square&logo=rust)](https://www.rust-lang.org/)
 [![CUDA](https://img.shields.io/badge/CUDA-13.1-76B900?style=flat-square&logo=nvidia)](https://developer.nvidia.com/cuda-toolkit)
 
-**110 CUDA kernels | GPU clustering, anomaly detection & PageRank | Multi-user immersive XR | 71 agent skills | OWL 2 ontology governance**
+**92 CUDA kernels | GPU clustering, anomaly detection & PageRank | Multi-user immersive XR | 83 agent skills | OWL 2 ontology governance**
 
 <br/>
 
@@ -40,7 +40,7 @@ VisionClaw takes the opposite approach. **Governance isn't the brake — it's wh
 
 VisionClaw is an open-source platform that transforms organisations into governed agentic meshes — where autonomous AI agents, human judgment, and institutional knowledge work together through a shared semantic substrate.
 
-VisionClaw gives organisations a governed intelligence layer where 71 specialist agent skills reason over a formal OWL 2 ontology before they act. Every agent decision is semantically grounded, every mutation passes consistency checking, and every reasoning chain is auditable from edge case back to first principles. The middle manager doesn't disappear — they evolve into the **Judgment Broker**, reviewing only the genuine edge cases and strategic decisions that exceed agent authority.
+VisionClaw gives organisations a governed intelligence layer where 83 specialist agent skills reason over a formal OWL 2 ontology before they act. Every agent decision is semantically grounded, every mutation passes consistency checking, and every reasoning chain is auditable from edge case back to first principles. The middle manager doesn't disappear — they evolve into the **Judgment Broker**, reviewing only the genuine edge cases and strategic decisions that exceed agent authority.
 
 The platform ingests knowledge from Logseq notebooks via GitHub, reasons over it with an OWL 2 EL inference engine (Whelk), renders the result as an interactive 3D graph where nodes attract or repel based on their semantic relationships, and exposes everything to AI agents through 7 Model Context Protocol tools. Users collaborate in the same space through multi-user XR presence, spatial voice, and immersive graph exploration.
 
@@ -126,7 +126,7 @@ flowchart TB
 end
 
 subgraph Layer2["LAYER 2 — ORCHESTRATION"]
-Skills["71 Agent Skills\nClaude-Flow DAG Pipelines"]
+Skills["83 Agent Skills\nClaude-Flow DAG Pipelines"]
 Ontology["OWL 2 EL Reasoning\nWhelk Inference Engine"]
 MCP["7 MCP Tools\nKnowledge Graph Read/Write"]
 GPU["GPU Compute\nCUDA 13.1 Kernels"]
@@ -221,7 +221,7 @@ Opus 48kHz mono end-to-end. HRTF spatial panning from Vircadia entity positions.
 
 The orchestration layer is where agents reason, coordinate, and act — always against the shared semantic substrate of the OWL 2 ontology.
 
-**71 Specialist Agent Skills** — The `multi-agent-docker/` container provides a complete AI orchestration environment with Claude-Flow coordination and 71 skill modules spanning creative production, research, knowledge codification, governance, workflow discovery, financial intelligence, spatial/immersive, and identity/trust domains.
+**83 Specialist Agent Skills** — The `multi-agent-docker/` container provides a complete AI orchestration environment with Claude-Flow coordination and 83 skill modules spanning creative production, research, knowledge codification, governance, workflow discovery, financial intelligence, spatial/immersive, and identity/trust domains.
 
 **Why OWL 2 Is the Secret Weapon** — Most agentic systems fail at scale because they lack a shared language. In VisionClaw, agents don't "guess" what a concept means — they reason against a common OWL 2 ontology. The same concept of "deliverable" means the same thing to a Creative Production agent and a Governance agent. Agent skill routing isn't keyword matching — it's ontological subsumption. The orchestration layer knows that a "risk assessment" is a sub-task of "governance review", and routes accordingly.
 
@@ -237,18 +237,18 @@ The orchestration layer is where agents reason, coordinate, and act — always a
 | `ontology_validate` | Axiom consistency check against Whelk reasoner |
 | `ontology_status` | Service health and statistics |
 
-**GPU-Accelerated Compute**110 CUDA kernel functions across 11 kernel files (6,411 lines) run server-authoritative graph layout and analytics. The physics pipeline (force-directed layout, semantic forces, ontology constraints, stress majorisation) runs at 60 Hz. The analytics pipeline (K-Means clustering, Louvain community detection, LOF anomaly detection, PageRank) runs on-demand via API and streams results to clients in the V3 binary protocol's analytics fields (cluster_id, anomaly_score, community_id at bytes 36-47).
+**GPU-Accelerated Compute**92 CUDA kernel functions across 11 kernel files (6,585 lines) run server-authoritative graph layout and analytics. The physics pipeline (force-directed layout, semantic forces, ontology constraints, stress majorisation) runs at 60 Hz. The analytics pipeline (K-Means clustering, Louvain community detection, LOF anomaly detection, PageRank) runs on-demand via API and streams results to clients in the V3 binary protocol's analytics fields (cluster_id, anomaly_score, community_id at bytes 36-47).
 
 | Metric | Result |
 |:-------|-------:|
-| CUDA kernel functions | 110 across 11 files |
+| CUDA kernel functions | 92 across 11 files |
 | GPU vs CPU speedup | 55x |
 | Position + analytics size | 48 bytes/node (V3) |
 | WebSocket latency | 10ms |
 | Binary vs JSON bandwidth | 80% reduction |
 
 <details>
-<summary><strong>Agent skill domains (71 skills)</strong></summary>
+<summary><strong>Agent skill domains (83 skills)</strong></summary>
 
 **Creative Production** — Script, storyboard, shot-list, grade & publish workflows. ComfyUI orchestration for image, video, and 3D asset generation via containerised API middleware.
 
@@ -369,16 +369,21 @@ The governance layer is what separates VisionClaw from every "move fast and brea
 
 How shadow workflows become sanctioned organisational intelligence:
 
-```
-┌─────────────┐ ┌─────────────────┐ ┌──────────────┐ ┌──────────────┐ ┌───────────────┐
-│ DISCOVERY │────▶│ CODIFICATION │────▶│ VALIDATION │────▶│ INTEGRATION │────▶│ AMPLIFICATION │
-│ │ │ │ │ │ │ │ │ │
-│ Passive agent│ │ IRIS maps the │ │ The Judgment │ │ Promoted to │ │ Mesh propaga- │
-│ monitoring │ │ new path as a │ │ Broker │ │ live mesh │ │ tes pattern │
-│ detects the │ │ proposed DAG — │ │ reviews for │ │ with SLAs, │ │ to other │
-│ pattern │ │ OWL 2 formalised│ │ strategic │ │ ownership, │ │ teams where │
-│ │ │ with provenance │ │ fit & bias │ │ quality │ │ it applies │
-└─────────────┘ └─────────────────┘ └──────────────┘ └──────────────┘ └───────────────┘
+```mermaid
+flowchart LR
+D["DISCOVERY\nPassive agent monitoring\ndetects the pattern"]
+C["CODIFICATION\nIRIS maps the new path\nas a proposed DAG —\nOWL 2 formalised\nwith provenance"]
+V["VALIDATION\nThe Judgment Broker\nreviews for strategic\nfit & bias"]
+I["INTEGRATION\nPromoted to live mesh\nwith SLAs, ownership,\nquality"]
+A["AMPLIFICATION\nMesh propagates\npattern to other\nteams where it applies"]
+
+D --> C --> V --> I --> A
+
+style D fill:#0A2A1A,stroke:#10B981
+style C fill:#0A1A2A,stroke:#00D4FF
+style V fill:#1A0A2A,stroke:#8B5CF6
+style I fill:#0A1A2A,stroke:#00D4FF
+style A fill:#0A2A1A,stroke:#10B981
 ```
 
 ---
@@ -411,13 +416,13 @@ flowchart TB
 end
 
 subgraph GPU["GPU Compute (CUDA 13.1)"]
-Kernels["110 CUDA Kernels"]
+Kernels["92 CUDA Kernels"]
 Physics["Force Simulation + Semantic Forces"]
 Analytics["Clustering · Anomaly · PageRank"]
 end
 
 subgraph Agents["Multi-Agent Stack"]
-Skills["71 Agent Skills"]
+Skills["83 Agent Skills"]
 ClaudeFlow["Claude-Flow Orchestrator"]
 AgenticQE["Agentic QE Fleet"]
 end
@@ -518,14 +523,14 @@ flowchart LR
 | **Graph DB** | Neo4j 5.13 | Primary store, Cypher queries, bolt protocol |
 | **Relational DB** | PostgreSQL 15 | Vircadia World Server entity storage |
 | **Vector DB** | Qdrant | Semantic similarity search |
-| **GPU** | CUDA 13.1 | 110 kernel functions, 6.4K LOC across 11 .cu files via cudarc |
+| **GPU** | CUDA 13.1 | 92 kernel functions, 6.4K LOC across 11 .cu files via cudarc |
 | **Ontology** | OWL 2 EL, Whelk-rs | EL++ subsumption, consistency checking (20 source files) |
 | **XR** | WebXR, @react-three/xr | Meta Quest 3, hand tracking, foveated rendering |
 | **Multi-User** | Vircadia World Server | Avatar sync, spatial audio, entity CRUD |
 | **Voice** | LiveKit SFU | turbo-whisper STT, Kokoro TTS, Opus codec |
 | **Protocol** | Binary V5 | 48-byte position updates, delta encoding, flag-bit node typing |
 | **Auth** | Nostr NIP-07/NIP-98 | Browser extension signing, relay integration |
-| **Agents** | MCP, Claude-Flow | 71 skills, 7 ontology tools |
+| **Agents** | MCP, Claude-Flow | 83 skills, 7 ontology tools |
 | **AI/ML** | GraphRAG, RAGFlow | Knowledge retrieval, inference |
 | **Build** | Vite 6, Vitest, Playwright | Frontend build, unit tests, E2E tests |
 | **Infra** | Docker Compose | 10 compose files, multi-profile deployment |
@@ -537,7 +542,7 @@ flowchart LR
 
 ## Documentation
 
-VisionClaw uses the [Diataxis](https://diataxis.fr/) documentation framework — 285 markdown files organised into four categories:
+VisionClaw uses the [Diataxis](https://diataxis.fr/) documentation framework — 267 markdown files organised into four categories:
 
 | Category | Path | Content |
 |:---------|:-----|:--------|
@@ -637,7 +642,7 @@ VisionClaw/
 │ ├── rendering/ # Custom TSL materials, post-processing
 │ └── immersive/ # XR/VR specific code
 ├── multi-agent-docker/ # AI agent orchestration container
-│ ├── skills/ # 71 agent skill modules
+│ ├── skills/ # 83 agent skill modules
 │ ├── mcp-infrastructure/ # MCP servers, config, tools
 │ └── management-api/ # Agent lifecycle management
 ├── docs/ # Diataxis documentation (285 files)
@@ -650,7 +655,7 @@ VisionClaw/
 
 ## Contributing
 
-See the [Contributing Guide](docs/how-to/development/contributing.md) for development workflow, branching conventions, and coding standards.
+See the [Contributing Guide](docs/CONTRIBUTING.md) for development workflow, branching conventions, and coding standards.
 
 ---
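The README hunk above pins the V3 analytics fields (cluster_id, anomaly_score, community_id) to bytes 36-47 of the 48-byte node record. A hedged decoding sketch: the diff fixes only the byte range, so the little-endian u32/f32/u32 field types and the layout of bytes 0-35 are assumptions for illustration.

```python
import struct

NODE_RECORD_SIZE = 48   # per the "48 bytes/node (V3)" row in the metrics table
ANALYTICS_OFFSET = 36   # cluster_id, anomaly_score, community_id at bytes 36-47

def decode_analytics(record: bytes):
    """Extract the V3 analytics fields from one node record (field types assumed)."""
    if len(record) != NODE_RECORD_SIZE:
        raise ValueError(f"expected {NODE_RECORD_SIZE}-byte record, got {len(record)}")
    # unpack_from reads at a fixed offset without copying the leading 36 bytes
    cluster_id, anomaly_score, community_id = struct.unpack_from(
        "<IfI", record, ANALYTICS_OFFSET
    )
    return cluster_id, anomaly_score, community_id
```

Fixed-offset unpacking like this is why the binary protocol beats JSON on bandwidth: clients index straight into the frame instead of parsing.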
docs/CONTRIBUTING.md

Lines changed: 4 additions & 4 deletions
@@ -344,18 +344,18 @@ stateDiagram-v2
 
 ✅ Good:
 ```markdown
-See [API Reference](./reference/api/rest-api.md)
+See [API Reference](./reference/api/rest-api-reference.md)
 ```
 
 ❌ Avoid:
 ```markdown
-See [API Reference](/docs/reference/api/rest-api.md)
+See [API Reference](/docs/reference/api/rest-api-reference.md)
 ```
 
 **Link to Specific Sections**
 
 ```markdown
-See [API Reference](./reference/api/rest-api.md#configuration)
+See [API Reference](./reference/api/rest-api-reference.md#configuration)
 ```
 
 **Verify Links Exist**
@@ -392,7 +392,7 @@ Link to related documents at the end of each section:
 ---
 
 **Related Documentation:**
-- [API Reference](./reference/api/rest-api.md)
+- [API Reference](./reference/api/rest-api-reference.md)
 - [Configuration Guide](./how-to/operations/configuration.md)
 - [Troubleshooting](./how-to/operations/troubleshooting.md)
 ```
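The rest-api.md to rest-api-reference.md renames above are exactly the kind of rot a small link audit catches before it lands in three files. A hedged sketch of such an audit, assuming relative `.md` links only; the regex is simplified and absolute links are deliberately skipped, since the style guide flags those separately:

```python
import pathlib
import re

# Matches the target of a markdown link ending in .md, stopping before any #anchor
MD_LINK = re.compile(r"\]\(([^)\s#]+\.md)")

def broken_relative_links(root: str = "docs"):
    """Report relative markdown links whose target file does not exist."""
    missing = []
    for page in pathlib.Path(root).rglob("*.md"):
        for target in MD_LINK.findall(page.read_text(errors="ignore")):
            if target.startswith(("http://", "https://", "/")):
                continue  # absolute links are a different rule, not checked here
            if not (page.parent / target).resolve().exists():
                missing.append((str(page), target))
    return missing
```

Wired into CI, an empty return list becomes the gate; anything else prints as `(page, dead_target)` pairs for the next audit commit.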

docs/README.md

Lines changed: 16 additions & 19 deletions
@@ -1,11 +1,11 @@
 ---
-title: VisionFlow Documentation
-description: Complete documentation for VisionFlow - enterprise-grade multi-agent knowledge graphing
+title: VisionClaw Documentation
+description: Complete documentation for VisionClaw - enterprise-grade multi-agent knowledge graphing
 category: reference
-updated-date: 2026-02-11
+updated-date: 2026-04-03
 ---
 
-# VisionFlow Documentation
+# VisionClaw Documentation
 
 Enterprise-grade multi-agent knowledge graphing with 3D visualization, semantic reasoning, and GPU-accelerated physics. This documentation follows the [Diataxis framework](https://diataxis.fr/) for maximum discoverability.
 
@@ -20,13 +20,13 @@ Get running in 5 minutes:
 ## Documentation by Role
 
 <details>
-<summary><strong>New Users</strong> - Getting started with VisionFlow</summary>
+<summary><strong>New Users</strong> - Getting started with VisionClaw</summary>
 
 ### Your Learning Path
 
 | Step | Document | Time |
 |------|----------|------|
-| 1 | [What is VisionFlow?](tutorials/overview.md) | 10 min |
+| 1 | [What is VisionClaw?](tutorials/overview.md) | 10 min |
 | 2 | [Installation](tutorials/installation.md) | 15 min |
 | 3 | [First Graph](tutorials/creating-first-graph.md) | 20 min |
 | 4 | [Navigation Guide](how-to/navigation-guide.md) | 15 min |
@@ -41,7 +41,7 @@ Get running in 5 minutes:
 </details>
 
 <details>
-<summary><strong>Developers</strong> - Building and extending VisionFlow</summary>
+<summary><strong>Developers</strong> - Building and extending VisionClaw</summary>
 
 ### Onboarding Path
 
@@ -87,7 +87,7 @@ Get running in 5 minutes:
 ### Deep Dives
 
 - **Actor System** - [Actor Guide](how-to/development/actor-system.md), [Server Architecture](explanation/architecture/server/overview.md)
-- **Database** - [Database Architecture](explanation/architecture/database.md), [Neo4j ADR](explanation/architecture/adr/ADR-0001-neo4j-persistent-with-filesystem-sync.md)
+- **Database** - [Database Architecture](explanation/architecture/database.md)
 - **Physics** - [Semantic Physics](explanation/architecture/physics/semantic-forces.md), [GPU Communication](explanation/architecture/gpu/communication-flow.md)
 - **Ontology** - [Ontology Storage](explanation/architecture/ontology-storage-architecture.md), [Reasoning Pipeline](explanation/architecture/ontology/reasoning-engine.md)
 - **Multi-Agent** - [Multi-Agent System](explanation/architecture/agents/multi-agent.md), [Agent Orchestration](how-to/agents/agent-orchestration.md)
@@ -119,7 +119,7 @@ Get running in 5 minutes:
 ### Infrastructure
 
 - [Infrastructure Architecture](how-to/infrastructure/architecture.md)
-- [Docker Environment](how-to/infrastructure/docker-environment.md)
+- [Docker Environment](how-to/deployment/docker-environment.md)
 - [Port Configuration](how-to/infrastructure/port-configuration.md)
 - [Infrastructure Troubleshooting](how-to/infrastructure/troubleshooting.md)
 
@@ -186,7 +186,7 @@ graph TB
 
 | Task | Document |
 |------|----------|
-| **Install VisionFlow** | [Installation](tutorials/installation.md) |
+| **Install VisionClaw** | [Installation](tutorials/installation.md) |
 | **Create first graph** | [First Graph](tutorials/creating-first-graph.md) |
 | **Deploy AI agents** | [Agent Orchestration](how-to/agents/agent-orchestration.md) |
 | **Query Neo4j** | [Neo4j Integration](how-to/integration/neo4j-integration.md) |
@@ -220,7 +220,7 @@ Core mental models and foundational knowledge.
 
 | Concept | Description |
 |---------|-------------|
-| [Core Concepts](explanation/concepts/README.md) | Overview of VisionFlow mental models |
+| [Core Concepts](explanation/concepts/README.md) | Overview of VisionClaw mental models |
 | [Physics Engine](explanation/concepts/physics-engine.md) | Force-directed graph simulation |
 | [Actor Model](explanation/concepts/actor-model.md) | Concurrent actor-based patterns |
 | [Hexagonal Architecture](explanation/concepts/hexagonal-architecture.md) | Ports and adapters design |
@@ -349,25 +349,22 @@ Technical specifications and APIs.
 </details>
 
 <details>
-<summary>System Status (5 references)</summary>
+<summary>System Status (2 references)</summary>
 
 - [Error Codes](reference/error-codes.md) - Error reference
-- [Implementation Status](reference/implementation-status.md) - Feature matrix
-- [Code Quality](reference/code-quality-status.md) - Build health
 - [Performance Benchmarks](reference/performance-benchmarks.md) - GPU metrics
-- [Physics Implementation](reference/physics-implementation.md) - Physics details
 
 </details>
 
 ## Getting Help
 
 | Issue Type | Resource |
 |------------|----------|
-| Documentation gaps | [GitHub Issues](https://github.com/DreamLab-AI/VisionFlow/issues) with `documentation` label |
+| Documentation gaps | [GitHub Issues](https://github.com/DreamLab-AI/VisionClaw/issues) with `documentation` label |
 | Technical problems | [Troubleshooting Guide](how-to/operations/troubleshooting.md) |
 | Infrastructure issues | [Infrastructure Troubleshooting](how-to/infrastructure/troubleshooting.md) |
 | Developer setup | [Development Setup](how-to/development/01-development-setup.md) |
-| Feature requests | [GitHub Discussions](https://github.com/DreamLab-AI/VisionFlow/discussions) |
+| Feature requests | [GitHub Discussions](https://github.com/DreamLab-AI/VisionClaw/discussions) |
 
 ## Documentation Stats
 
@@ -378,10 +375,10 @@ Technical specifications and APIs.
 | **Explanation** | 70 |
 | **Reference** | 39 |
 | **Other (diagrams, research)** | 35 |
-| **Total** | ~214 markdown files |
+| **Total** | ~267 markdown files |
 
 - **Framework**: Diataxis (Tutorials, How-To, Explanation, Reference)
-- **Last Updated**: 2026-03-24
+- **Last Updated**: 2026-04-03
 - **Audit**: [DOCS-AUDIT-2026-03-24.md](DOCS-AUDIT-2026-03-24.md)
 
 ---

docs/diagrams/data-flow/complete-data-flows.md

Lines changed: 0 additions & 3 deletions
@@ -1843,9 +1843,6 @@ Total time: 7000ms
 
 ## Related Documentation
 
-- [System Architecture Overview - Complete Mermaid Diagrams](../mermaid-library/01-system-architecture-overview.md)
-- [ASCII Diagram Deprecation - Complete Report](../../ASCII_DEPRECATION_COMPLETE.md)
-- [Deployment & Infrastructure Diagrams](../mermaid-library/03-deployment-infrastructure.md)
 - [Server-Side Actor System - Complete Architecture Documentation](../server/actors/actor-system-complete.md)
 - [Complete State Management Architecture](../client/state/state-management-complete.md)
 
docs/explanation/concepts/physics-engine.md

Lines changed: 1 addition & 1 deletion
@@ -103,7 +103,7 @@ When CUDA hardware is unavailable, VisionFlow falls back to CPU computation usin
 ## See Also
 
 - [GPU Acceleration](gpu-acceleration.md) -- detailed CUDA kernel inventory, memory hierarchy, and hardware requirements
-- [Stress Majorization](../architecture/stress-majorization.md) -- full algorithm reference with configuration parameters and benchmarks
+- [Stress Majorization Guide](../../how-to/features/stress-majorization-guide.md) -- algorithm reference with configuration parameters and benchmarks
 - [Actor Model](actor-model.md) -- how `PhysicsOrchestratorActor` coordinates GPU sub-actors
 - [Constraint System](constraint-system.md) -- LOD-aware constraint management for physics layout
 - [Semantic Forces](../architecture/physics/semantic-forces.md) -- force-based layout driven by ontology relationships
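The hunk context above notes that when CUDA hardware is unavailable the engine falls back to CPU computation. A toy sketch of what a single CPU layout iteration looks like; this is a naive O(n²) force-directed step with illustrative constants and names, not the project's actual fallback (which is Rust, server-side):

```python
def layout_step(positions, edges, repulsion=0.1, attraction=0.05):
    """One naive force-directed iteration (CPU fallback sketch, 2D)."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    # Pairwise repulsion between every node pair (inverse-square style)
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dist2 = dx * dx + dy * dy or 1e-9
            f = repulsion / dist2
            forces[i][0] += f * dx
            forces[i][1] += f * dy
            forces[j][0] -= f * dx
            forces[j][1] -= f * dy
    # Spring attraction along edges pulls connected nodes together
    for a, b in edges:
        dx = positions[b][0] - positions[a][0]
        dy = positions[b][1] - positions[a][1]
        forces[a][0] += attraction * dx
        forces[a][1] += attraction * dy
        forces[b][0] -= attraction * dx
        forces[b][1] -= attraction * dy
    return [[p[0] + f[0], p[1] + f[1]] for p, f in zip(positions, forces)]
```

The quadratic pair loop is exactly why the real pipeline moves this to CUDA kernels: semantic forces and ontology constraints layer extra per-pair work on top of this baseline.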
