@@ -40,7 +40,7 @@ VisionClaw takes the opposite approach. **Governance isn't the brake — it's wh
VisionClaw is an open-source platform that transforms organisations into governed agentic meshes — where autonomous AI agents, human judgment, and institutional knowledge work together through a shared semantic substrate.
-VisionClaw gives organisations a governed intelligence layer where 71 specialist agent skills reason over a formal OWL 2 ontology before they act. Every agent decision is semantically grounded, every mutation passes consistency checking, and every reasoning chain is auditable from edge case back to first principles. The middle manager doesn't disappear — they evolve into the **Judgment Broker**, reviewing only the genuine edge cases and strategic decisions that exceed agent authority.
+VisionClaw gives organisations a governed intelligence layer where 83 specialist agent skills reason over a formal OWL 2 ontology before they act. Every agent decision is semantically grounded, every mutation passes consistency checking, and every reasoning chain is auditable from edge case back to first principles. The middle manager doesn't disappear — they evolve into the **Judgment Broker**, reviewing only the genuine edge cases and strategic decisions that exceed agent authority.
The platform ingests knowledge from Logseq notebooks via GitHub, reasons over it with an OWL 2 EL inference engine (Whelk), renders the result as an interactive 3D graph where nodes attract or repel based on their semantic relationships, and exposes everything to AI agents through 7 Model Context Protocol tools. Users collaborate in the same space through multi-user XR presence, spatial voice, and immersive graph exploration.
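To make that agent-facing surface concrete, here is a minimal sketch of an MCP client invoking one of those tools with the TypeScript SDK. The server launch command, the argument payload, and the axiom string are illustrative assumptions; only the tool name `ontology_validate` comes from the tool table further down.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch command and entry point are hypothetical, not the
// documented VisionClaw server invocation.
const transport = new StdioClientTransport({
  command: "node",
  args: ["visionclaw-mcp-server.js"],
});

const client = new Client({ name: "example-agent", version: "0.1.0" });
await client.connect(transport);

// Enumerate the exposed tools (the README names 7 in total).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Ask the Whelk-backed reasoner to consistency-check a candidate
// axiom before committing a mutation; the payload key is an assumption.
const result = await client.callTool({
  name: "ontology_validate",
  arguments: { axiom: "SubClassOf(:RiskAssessment :GovernanceReview)" },
});
console.log(result.content);
```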
@@ -126,7 +126,7 @@ flowchart TB
end
subgraph Layer2["LAYER 2 — ORCHESTRATION"]
-Skills["71 Agent Skills\nClaude-Flow DAG Pipelines"]
+Skills["83 Agent Skills\nClaude-Flow DAG Pipelines"]
Ontology["OWL 2 EL Reasoning\nWhelk Inference Engine"]
MCP["7 MCP Tools\nKnowledge Graph Read/Write"]
GPU["GPU Compute\nCUDA 13.1 Kernels"]
@@ -221,7 +221,7 @@ Opus 48kHz mono end-to-end. HRTF spatial panning from Vircadia entity positions.
The orchestration layer is where agents reason, coordinate, and act — always against the shared semantic substrate of the OWL 2 ontology.
-**71 Specialist Agent Skills** — The `multi-agent-docker/` container provides a complete AI orchestration environment with Claude-Flow coordination and 71 skill modules spanning creative production, research, knowledge codification, governance, workflow discovery, financial intelligence, spatial/immersive, and identity/trust domains.
+**83 Specialist Agent Skills** — The `multi-agent-docker/` container provides a complete AI orchestration environment with Claude-Flow coordination and 83 skill modules spanning creative production, research, knowledge codification, governance, workflow discovery, financial intelligence, spatial/immersive, and identity/trust domains.
**Why OWL 2 Is the Secret Weapon** — Most agentic systems fail at scale because they lack a shared language. In VisionClaw, agents don't "guess" what a concept means — they reason against a common OWL 2 ontology. The same concept of "deliverable" means the same thing to a Creative Production agent and a Governance agent. Agent skill routing isn't keyword matching — it's ontological subsumption. The orchestration layer knows that a "risk assessment" is a sub-task of "governance review", and routes accordingly.
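As a minimal sketch of what subsumption-based routing amounts to, the snippet below walks a toy subclass hierarchy upward until it finds a registered skill. The class names, the in-memory subclass map, and the skill registry are all hypothetical; a real deployment would query the Whelk reasoner for the inferred hierarchy rather than hand-rolling it.

```typescript
// Hypothetical sketch of ontological subsumption routing. A tiny
// in-memory subclass map stands in for the reasoner's inferred hierarchy.
const subClassOf: Record<string, string[]> = {
  RiskAssessment: ["GovernanceReview"],
  GovernanceReview: ["Task"],
  StoryboardRevision: ["CreativeProduction"],
  CreativeProduction: ["Task"],
};

// Skills declare the most general concept they can handle.
const skillRegistry: Record<string, string> = {
  GovernanceReview: "governance-review-skill",
  CreativeProduction: "creative-production-skill",
};

// Walk the subclass hierarchy upward until a registered skill is
// found: routing by inferred meaning, not by keyword match.
function route(taskConcept: string): string | undefined {
  const frontier = [taskConcept];
  while (frontier.length > 0) {
    const concept = frontier.shift()!;
    if (concept in skillRegistry) return skillRegistry[concept];
    frontier.push(...(subClassOf[concept] ?? []));
  }
  return undefined;
}

console.log(route("RiskAssessment")); // "governance-review-skill"
```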
227
227
@@ -237,18 +237,18 @@ The orchestration layer is where agents reason, coordinate, and act — always a
|`ontology_validate`| Axiom consistency check against Whelk reasoner |
|`ontology_status`| Service health and statistics |
-**GPU-Accelerated Compute** — 110 CUDA kernel functions across 11 kernel files (6,411 lines) run server-authoritative graph layout and analytics. The physics pipeline (force-directed layout, semantic forces, ontology constraints, stress majorisation) runs at 60 Hz. The analytics pipeline (K-Means clustering, Louvain community detection, LOF anomaly detection, PageRank) runs on-demand via API and streams results to clients in the V3 binary protocol's analytics fields (cluster_id, anomaly_score, community_id at bytes 36-47).
+**GPU-Accelerated Compute** — 92 CUDA kernel functions across 11 kernel files (6,585 lines) run server-authoritative graph layout and analytics. The physics pipeline (force-directed layout, semantic forces, ontology constraints, stress majorisation) runs at 60 Hz. The analytics pipeline (K-Means clustering, Louvain community detection, LOF anomaly detection, PageRank) runs on-demand via API and streams results to clients in the V3 binary protocol's analytics fields (cluster_id, anomaly_score, community_id at bytes 36-47).
| Metric | Result |
|:-------|-------:|
-| CUDA kernel functions |110 across 11 files |
+| CUDA kernel functions |92 across 11 files |
| GPU vs CPU speedup | 55x |
| Position + analytics size | 48 bytes/node (V3) |
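A client could consume those analytics fields roughly as follows. Only the 48-byte stride and the byte 36-47 placement of cluster_id, anomaly_score, and community_id are stated above; the little-endian byte order and the exact field types are assumptions.

```typescript
// Sketch of decoding the analytics fields of a V3 node record.
// Only the 48-byte stride and the byte 36-47 placement come from
// the docs; field types and endianness are assumed.
interface NodeAnalytics {
  clusterId: number;
  anomalyScore: number;
  communityId: number;
}

function decodeAnalytics(buffer: ArrayBuffer): NodeAnalytics[] {
  const STRIDE = 48; // bytes per node in the V3 protocol
  const view = new DataView(buffer);
  const records: NodeAnalytics[] = [];
  for (let offset = 0; offset + STRIDE <= buffer.byteLength; offset += STRIDE) {
    records.push({
      clusterId: view.getUint32(offset + 36, true),     // assumed u32
      anomalyScore: view.getFloat32(offset + 40, true), // assumed f32
      communityId: view.getUint32(offset + 44, true),   // assumed u32
    });
  }
  return records;
}
```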
**Creative Production** — Script, storyboard, shot-list, grade & publish workflows. ComfyUI orchestration for image, video, and 3D asset generation via containerised API middleware.
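A rough sketch of what driving that middleware could look like, assuming it forwards ComfyUI's stock HTTP API (the `/prompt` endpoint with a `{"prompt": graph}` body) on ComfyUI's default port; the host, port, and workflow file are placeholders:

```typescript
// Sketch of queueing a ComfyUI workflow. The /prompt endpoint and
// request body are ComfyUI's stock HTTP API; the host, port, and
// workflow file are assumptions, not VisionClaw-specific values.
import { readFile } from "node:fs/promises";

async function queueWorkflow(path: string): Promise<string> {
  // An API-format workflow graph exported from the ComfyUI UI.
  const graph = JSON.parse(await readFile(path, "utf8"));

  const res = await fetch("http://localhost:8188/prompt", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: graph }),
  });
  if (!res.ok) throw new Error(`ComfyUI rejected workflow: ${res.status}`);

  const { prompt_id } = await res.json();
  return prompt_id; // poll /history/{prompt_id} for outputs
}

queueWorkflow("./storyboard-frame.json").then((id) =>
  console.log(`queued as ${id}`),
);
```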
@@ -369,16 +369,21 @@ The governance layer is what separates VisionClaw from every "move fast and brea
How shadow workflows become sanctioned organisational intelligence:
docs/README.md (+16 −19)
@@ -1,11 +1,11 @@
---
-title: VisionFlow Documentation
-description: Complete documentation for VisionFlow - enterprise-grade multi-agent knowledge graphing
+title: VisionClaw Documentation
+description: Complete documentation for VisionClaw - enterprise-grade multi-agent knowledge graphing
category: reference
-updated-date: 2026-02-11
+updated-date: 2026-04-03
---
-# VisionFlow Documentation
+# VisionClaw Documentation
Enterprise-grade multi-agent knowledge graphing with 3D visualization, semantic reasoning, and GPU-accelerated physics. This documentation follows the [Diataxis framework](https://diataxis.fr/) for maximum discoverability.
@@ -20,13 +20,13 @@ Get running in 5 minutes:
## Documentation by Role
<details>
-<summary><strong>New Users</strong> - Getting started with VisionFlow</summary>
+<summary><strong>New Users</strong> - Getting started with VisionClaw</summary>
### Your Learning Path
| Step | Document | Time |
|------|----------|------|
-| 1 |[What is VisionFlow?](tutorials/overview.md)| 10 min |
+| 1 |[What is VisionClaw?](tutorials/overview.md)| 10 min |
| 2 |[Installation](tutorials/installation.md)| 15 min |
| 3 |[First Graph](tutorials/creating-first-graph.md)| 20 min |
| 4 |[Navigation Guide](how-to/navigation-guide.md)| 15 min |
@@ -41,7 +41,7 @@ Get running in 5 minutes:
</details>
<details>
-<summary><strong>Developers</strong> - Building and extending VisionFlow</summary>
+<summary><strong>Developers</strong> - Building and extending VisionClaw</summary>