Commit 713e60e

edit
1 parent 61d2ee0 commit 713e60e

13 files changed

Lines changed: 3518 additions & 1 deletion

BuilderLoop.md

Lines changed: 1619 additions & 0 deletions

commands/comprehend.md

Lines changed: 281 additions & 0 deletions
@@ -0,0 +1,281 @@
name: comprehend
description: Structural Comprehension is the process of transforming an observable-but-opaque artifact into an internal working model with predictive power through progressive construction, causal tracing, perturbation testing, and adversarial self-verification.

# /comprehend — Structural Comprehension Analysis

Analyze the given input using the Structural Comprehension Framework. This transforms observable-but-opaque artifacts into tested, transferable working models through progressive model construction, perturbation testing, and adversarial self-verification.

## Additional Input/Instructions

$ARGUMENTS

---

## Instructions

1. Read and consume the input in full. It can be raw text, a folder path containing md files, code files, or an image path.

2. Determine the comprehension aspect:
   - **Mechanistic** (default) — "How does this work?" Build a model of mechanism.
   - **Intent** — "Why was this built this way?" Build a model of the design space.
   - If the user specifies an aspect, use it. Otherwise default to mechanistic, blending intent naturally as it arises.

3. Determine the target depth:
   - If the user specifies a depth target, stop there.
   - If no target is specified, aim for Predictive (CV3-CV4) for code and systems, and Structural (CV1) for quick orientation tasks.

4. Use codebase context where relevant. For code artifacts, prefer execution-based perturbation testing (actually run the code, modify inputs, observe outputs). For non-code artifacts, use scenario-based or reasoning-based perturbation.

5. Execute the full Structural Comprehension process described below, producing all Comprehension Versions up to the target depth.

6. Save the output as a markdown file (unless the additional instructions state otherwise):
   - **If the input was a file path** — save in the same folder as the input file.
   - **Otherwise** — save under `devdocs/comprehension/<suitable-name>.md` (create the directory if needed).

7. Print the output in the conversation as well.
---

## The Structural Comprehension Framework

### What This Is

Structural Comprehension is a systematic approach for building internal working models of observable-but-opaque artifacts. It works by progressively constructing a model through structural mapping, behavioral tracing, and causal discovery, then testing that model through falsifiable predictions and adversarial self-challenge.

Rather than relying on the feeling of understanding, Structural Comprehension treats comprehension as a testable process — depth is demonstrated through prediction accuracy, not declared through description fluency.

> **Structural Comprehension is the process of transforming an observable-but-opaque artifact into an internal working model with predictive power — through progressive construction, causal tracing, perturbation testing, and adversarial self-verification — producing understanding that is testable, transferable, and depth-aware.**

### Two Primary Aspects

**Mechanistic** — "How does this work?" Build a model of internal mechanism. Predictions target behavior: "Given input X, the output is Y because Z."

**Intent** — "Why was this built this way?" Build a model of the design space. Predictions target design decisions: "If constraint X were relaxed, the design would shift to Y."

Both aspects use the same process. They differ in what you trace, what you perturb, and what your predictions target.

### The Depth Hierarchy

| Level | What you can do | Test |
|-------|-----------------|------|
| **Descriptive** | Describe what the artifact does | Can you predict its PURPOSE? |
| **Structural** | Describe how it's organized | Can you predict where a responsibility lives? |
| **Causal** | Trace causality for a given input | Can you trace a path you haven't traced before? |
| **Predictive** | Predict behavior for unobserved conditions | Can you predict output for untested edge cases? |
| **Generative** | State the minimal generating principles | Can you explain WHY and reconstruct from principles? |

**Depth is demonstrated, not declared.** You cannot claim a level without passing its test.

### Key Components

**Model Construction** — Progressive building of the internal model through structural mapping, behavioral tracing, and causal discovery.

**Perturbation Testing** — Change one thing, observe what changes. Use the strongest form available:
- Execution-based (strongest) — actually run/change it and observe
- Scenario-based (moderate) — construct a realistic scenario and reason
- Reasoning-based (weakest but valid) — thought experiments
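To make the strongest form concrete, here is a minimal, hypothetical sketch of execution-based perturbation for a code artifact. Both `parse_price` (a toy stand-in for whatever function is under study) and the `perturb` helper are invented for illustration, not part of any real artifact:

```python
def parse_price(text: str) -> float:
    """Toy artifact under comprehension: parses 'USD 12.50' into a float."""
    currency, amount = text.split()
    return float(amount)

def perturb(fn, baseline, perturbed):
    """Apply one perturbation and report what changed (or what broke)."""
    def run(arg):
        try:
            return fn(arg)
        except Exception as e:
            return f"raised {type(e).__name__}"
    base_out, pert_out = run(baseline), run(perturbed)
    return {"baseline": base_out, "perturbed": pert_out,
            "changed": base_out != pert_out}

# Change exactly one thing: drop the currency prefix.
result = perturb(parse_price, "USD 12.50", "12.50")
print(result)  # {'baseline': 12.5, 'perturbed': 'raised ValueError', 'changed': True}
```

The single observed difference (the perturbed input breaks the unpacking step) is the evidence that feeds the causal model: it tells you the two-token format is load-bearing.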

**Prediction Testing** — Explicit, falsifiable predictions generated BEFORE checking. "I predict X because Y" — specific enough to be wrong.
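As an illustrative sketch of this discipline (the `Prediction` class is hypothetical, not a prescribed data structure), a prediction carries its claim and reasoning and starts life UNTESTED:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str              # what you predict will happen
    reasoning: str          # why the current model predicts this
    status: str = "UNTESTED"

    def resolve(self, observed_matches: bool) -> None:
        """Resolve only AFTER observing — never before."""
        self.status = "CONFIRMED" if observed_matches else "FAILED"

p = Prediction(
    claim="Empty input returns an empty list, not None",
    reasoning="The return path always builds a list; no None branch exists",
)
assert p.status == "UNTESTED"   # written down before checking
p.resolve(observed_matches=True)
print(p.status)  # CONFIRMED
```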
**Adversarial Self-Challenge** — Deliberately seek the case that would BREAK your model. 3 attempts minimum at the model's weakest points.

**Accommodation Trigger** — When prediction failures are systematic (not random) and cluster around specific features, don't patch the model — rebuild it. Ask: "Is my entire model wrong, or just this prediction?"

---

## Standard Comprehension Protocol

When applying Structural Comprehension to an artifact:

1. Map the structure (Static)
2. Trace behavior and generate predictions (Dynamic)
3. Perturb and test predictions (Differential)
4. Attack your own model (Adversarial)
5. Extract principles and produce a transferable artifact (Stabilization)

Phase order is recommended, not rigid. For black-box artifacts, start with Differential (poke and observe). For white-box artifacts, start with Static (read the structure). The depth-level tests are mandatory regardless of order.

---- THE CONCRETE INSTRUCTIONS START HERE ----

## Execute the Following Process

### Comprehension Version 1 (CV1 — Structural Model)

**Phase: Static (Structural Mapping)**

Map the artifact's structure:

* Components — what parts exist?
* Relationships — how do they connect?
* Boundaries — where does this artifact end and its environment begin?

Write down your **prior assumptions** explicitly:
* "I assume this works like X because it resembles Y"
* "I expect this component is responsible for Z"
* These priors are your starting model. Making them explicit allows testing them later.

**Depth test:** Can you predict which component a given responsibility lives in? State a prediction and verify.

**Frontier questions:** List the questions that structural mapping RAISED but didn't answer — behavioral and causal questions you can now formulate because you see the structure. Format each as:

> **Q:** [The question — naturally phrased]
> **Why it matters:** [What answering this would unlock or what risk it addresses]
> **Depth needed:** [Which CV level would answer this]

Present CV1: the structural model, your explicit priors, and the frontier questions.

---

### Comprehension Version 2 (CV2 — Behavioral Model)

**Phase: Dynamic (Behavioral Tracing)**

Trace behavior through the structure:

* Follow execution / data flow / state changes / logical progression for at least 2 different inputs or scenarios
* Note where behavior matches your prior assumptions and where it diverges

Generate **explicit predictions** — write each one BEFORE checking.

For each prediction, use this format:

**Prediction:**
[What you predict will happen]

**Reasoning:**
[Why your current model predicts this — which components, which flow]

**Status:**
[UNTESTED — to be verified in CV3]

Generate at least 5 predictions that cover different parts of the artifact. Include at least 1 prediction you're uncertain about.

**Depth test:** Can you trace the execution path for an input you haven't traced before? Attempt it.

**Frontier questions:** List the causal and dependency questions that tracing RAISED but didn't answer — questions about what depends on what, and what would change if conditions differed.

Present CV2: the behavioral model, the untested prediction set, and the frontier questions.

---

### Comprehension Version 3 (CV3 — Causal Model)

**Phase: Differential (Perturbation Testing)**

Test your predictions. For each prediction from CV2:

1. Perturb the relevant input/parameter/condition (use the strongest form available: execute > scenario > reasoning)
2. Observe the result
3. Compare predicted vs. actual

For each tested prediction:

**Prediction:**
[Restated from CV2]

**Test method:**
[Execution-based / Scenario-based / Reasoning-based — what you actually did]

**Result:**
[What actually happened]

**Verdict:**
[CONFIRMED — model is correct here / FAILED — model needs correction]

**Model correction (if FAILED):**
[What was wrong in the model and how it's now revised]

After testing all predictions, build the **causal dependency map**: for each major component, what does it depend on? What depends on it? This comes from observing what changed when you perturbed things.
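One hypothetical way to record such a dependency map, and to read predictions off it (the component names are invented for illustration):

```python
# Dependency map as observed via perturbation: for each component,
# what it depends on and what depends on it.
deps = {
    "parser":    {"depends_on": ["tokenizer"], "depended_on_by": ["renderer"]},
    "tokenizer": {"depends_on": [],            "depended_on_by": ["parser"]},
    "renderer":  {"depends_on": ["parser"],    "depended_on_by": []},
}

def blast_radius(component: str) -> list:
    """Components whose behavior may change if `component` changes."""
    hit, frontier = set(), [component]
    while frontier:
        for d in deps[frontier.pop()]["depended_on_by"]:
            if d not in hit:
                hit.add(d)
                frontier.append(d)
    return sorted(hit)

# The map yields a testable prediction: perturbing the tokenizer
# should be visible downstream in both the parser and the renderer.
print(blast_radius("tokenizer"))  # ['parser', 'renderer']
```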
**Depth test:** Can you predict behavior for edge cases you specifically DID NOT trace? Generate 2 new predictions for untested conditions and test them.

**Frontier questions:** List adversarial questions — conditions or scenarios that MIGHT break the model. These become the input for CV4.

Present CV3: the causal model, prediction results, model corrections, the causal dependency map, and frontier questions.

---

### Comprehension Version 4 (CV4 — Hardened Model)

**Phase: Adversarial (Model-Breaking)**

Attack your own model. The goal is NOT to confirm — it's to BREAK.

Generate 3 adversarial cases:
* Each should target a DIFFERENT weak point in your model
* Each should be designed to produce a result your model might get wrong
* Include cases where the accommodation trigger might fire (systematic failures)

For each adversarial case:

**Challenge:**
[The case designed to break the model]

**Why this might break:**
[What assumption it tests, what weakness it targets]

**Prediction (from current model):**
[What your model predicts]

**Result:**
[What actually happened]

**Verdict:**
[MODEL SURVIVES — the model handled this correctly / MODEL BREAKS — revision needed]

**Revision (if MODEL BREAKS):**
[How the model is updated. Check: is this failure systematic? Does the accommodation trigger fire? If yes — rebuild, don't patch.]

If all 3 cases survive, try 3 more from a completely different angle. If those also survive, the model is genuinely robust at this depth.

Build the **confidence map**: for each major area of the artifact, rate your model's confidence as HIGH (tested and survived adversarial challenge), MEDIUM (tested but not adversarially challenged), or LOW (not directly tested, inferred from adjacent areas).

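A minimal sketch of what such a confidence map might look like, and how it steers the next round of testing (the area names and ratings are illustrative, not prescribed):

```python
# Each area of the artifact is rated by how hard its model was tested.
confidence_map = {
    "input parsing":  "HIGH",    # 3 adversarial cases, all survived
    "core transform": "MEDIUM",  # predictions confirmed in CV3 only
    "error handling": "LOW",     # inferred from adjacent areas, never perturbed
}

# LOW areas are the natural targets for the next perturbation round.
next_targets = [area for area, conf in confidence_map.items() if conf == "LOW"]
print(next_targets)  # ['error handling']
```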
**Frontier questions:** List generative questions — questions about design principles, rationale, and the "why" behind the artifact that adversarial testing raised but didn't resolve.

Present CV4: adversarial results, model revisions, the confidence map, and frontier questions.

---

### Comprehension Version 5 (CV5 — Generative Model)

**Phase: Stabilization (Synthesis & Transfer)**

Synthesize the surviving model into a transferable artifact.

**Principle extraction:** Identify the N minimal rules or design decisions from which all observed behavior follows. State them clearly:
* "Everything else follows from these N decisions: [list]"
* For each principle: what does it explain? What would change if it were different?

**Transferable comprehension document:** Write a document that someone who has NEVER seen this artifact could use to:
* Predict its behavior for novel inputs
* Modify it safely
* Explain its design rationale

Structure the document as:
1. **What it is** (1-2 sentences — Descriptive level)
2. **How it's organized** (components + relationships — Structural level)
3. **How it works** (key behavioral flows — Causal level)
4. **What depends on what** (causal dependencies — Predictive level)
5. **The generating principles** (minimal rules that explain everything — Generative level)
6. **Known unknowns** (what ISN'T comprehended — explicit gaps)

**Depth test:** Could someone else act correctly on this artifact using only your document? If not, what's missing?

**Frontier questions:** List beyond-scope questions — questions about the system around the artifact, about contexts you haven't tested, and about future evolution. These are the questions that would drive the NEXT comprehension session or a different discipline entirely.

Present CV5: the generating principles, the transferable document, explicit remaining unknowns, and frontier questions.

---

### Final Summary

After the last CV, present:

1. **Aspect** — Mechanistic / Intent / Both
2. **Depth reached** — which level, with evidence (which tests passed)
3. **Prediction scorecard** — total predictions made, confirmed, failed, and model revisions triggered
4. **Confidence map** — HIGH / MEDIUM / LOW areas
5. **Key surprises** — where the model was wrong and what was learned from the correction
6. **Remaining unknowns** — what is NOT comprehended
7. **Accumulated frontier** — the full set of unanswered questions from all CVs, deduplicated. These are the direction signals for next steps — what to investigate next, what to test, where the model's boundaries are. Group by depth level (Structural / Causal / Predictive / Generative / Beyond-scope).
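
The scorecard in the summary above can be tallied mechanically from the per-prediction verdicts recorded in CV2-CV4. A hedged sketch, with illustrative data:

```python
from collections import Counter

# Verdicts as recorded across CV3/CV4 (illustrative, not real results).
statuses = ["CONFIRMED", "CONFIRMED", "FAILED", "CONFIRMED",
            "FAILED", "CONFIRMED", "CONFIRMED"]

tally = Counter(statuses)
scorecard = {
    "total": len(statuses),
    "confirmed": tally["CONFIRMED"],
    "failed": tally["FAILED"],
}
print(scorecard)  # {'total': 7, 'confirmed': 5, 'failed': 2}
```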

commands/install.sh

Lines changed: 2 additions & 1 deletion
@@ -33,6 +33,7 @@ commands=(
   explore.md
   wayfinding.md
   inquiry.md
+  comprehend.md
   arch-small-summary.md
   arch-intro.md
   arch-traces.md
@@ -73,7 +74,7 @@ echo ""
 echo "Done. Installed ${#commands[@]} slash commands to $COMMANDS_DIR"
 echo "Done. Installed ${#hooks[@]} hooks to $HOOKS_DIR"
 echo ""
-echo "Slash commands: /devdocs-foundation, /devdocs-foundation-concepts, /devdocs-foundation-simplified-concepts, /devdocs-foundation-identify-modules, /devdocs-foundation-architecture, /elaborate, /task-desc, /task-plan, /critic, /critic-d, /sense-making, /innovate, /td-critique, /decompose, /explore, /wayfinding, /inquiry, /arch-small-summary, /arch-intro, /arch-traces, /arch-traces-2, /arch-top-improvements, /dead-code-index, /dead-code-concepts, /roadmap, /overview-report, /align, /align-modes, /devdocs-archivist"
+echo "Slash commands: /devdocs-foundation, /devdocs-foundation-concepts, /devdocs-foundation-simplified-concepts, /devdocs-foundation-identify-modules, /devdocs-foundation-architecture, /elaborate, /task-desc, /task-plan, /critic, /critic-d, /sense-making, /innovate, /td-critique, /decompose, /explore, /wayfinding, /inquiry, /comprehend, /arch-small-summary, /arch-intro, /arch-traces, /arch-traces-2, /arch-top-improvements, /dead-code-index, /dead-code-concepts, /roadmap, /overview-report, /align, /align-modes, /devdocs-archivist"
 echo ""
 echo "To activate the devdocs metadata hook, add this to .claude/settings.json:"
 echo ""
