# AlignStack Thinking Disciplines

Thinking Disciplines are natural cognitive operations — borrowed from how humans actually think — formalized into repeatable structures with defined components, processes, and failure modes. They are domain-agnostic: each one works for coding, business, design, research, or any field where that type of thinking is needed.

They are not frameworks (too generic), not tools (too mechanical), not tips (too shallow). They are practiced methodologies for specific cognitive tasks — just as martial arts disciplines are practiced methodologies for specific physical situations. You study them, use them, and get better at them over time.

Each thinking discipline has a philosophy/definition, structural components, a process, failure modes, and a coverage or quality strategy. Each one transforms a specific cognitive state into another.

---

## Built

### 1. Structural Sensemaking

**Transform:** Ambiguity → Stable understanding

**What it is:** A systematic process for constructing stable meaning from vague, ambiguous, or complex input. Works by organizing cognitive anchors into constrained conceptual structures through perspective integration, ambiguity collapse, and degrees-of-freedom reduction.

**Components:** Cognitive anchors (constraints, insights, structural points, principles, meaning-nodes), boundary construction operations (perspective checking, ambiguity collapse, degrees-of-freedom reduction), six progressive Sense Versions (SV1–SV6).

**Command:** `/sense-making`
**Files:** `commands/sense-making.md`

---

### 2. Structural Innovation

**Transform:** Seed → Novel viable ideas

**What it is:** A framework for producing novelty through systematic mechanism application. Seven mechanisms (4 Generators + 3 Framers) cover the innovation space. Intuition provides direction, mechanisms provide coverage, and testing catches the blind spots of both.

**Components:** Intuition (context, valuation, motivation), seeds, seven mechanisms (Lens Shifting, Combination, Inversion, Constraint Manipulation, Absence Recognition, Domain Transfer, Extrapolation), Generator/Framer split, five testing criteria, six failure modes.

**Command:** `/innovate`
**Files:** `commands/innovate.md`, `devdocs/inno/innovaiton_framework.md`, `devdocs/inno/intuiton.md`

---

## To Build

### 3. Structural Critique

**Transform:** Plan or idea → Identified risks, errors, and conflicts

**What it is:** A framework for systematically evaluating plans, designs, and ideas to find what could go wrong. Not nitpicking — finding the risks that actually matter. The `/critic` and `/critic-d` commands already do this but have no formal framework defining what good critique IS, what its failure modes are, or how to ensure coverage.

**Components to define:**
- What are the dimensions of critique? (correctness, coherence, feasibility, completeness, security, performance — are there others?)
- What's the coverage strategy? (how do you know you've checked enough dimensions?)
- What's the severity model? (how to distinguish noise from real risks)
- What are the failure modes of bad critique? (rubber-stamping, nitpicking, missing systemic risks, severity inflation, checking the plan instead of the assumptions behind the plan)
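
One way the severity model could be sketched: score each finding as likelihood times impact, drop anything below a noise floor, and rank the rest. The `Finding` type, the threshold, and the example findings are hypothetical illustrations, not part of the existing `/critic` commands.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    dimension: str      # e.g. "correctness", "feasibility", "security"
    description: str
    likelihood: float   # 0.0-1.0: how likely the risk materializes
    impact: float       # 0.0-1.0: how bad it is if it does

    @property
    def severity(self) -> float:
        # Severity = likelihood x impact, so a certain-but-trivial nit and
        # an unlikely-but-catastrophic risk rank on one scale.
        return self.likelihood * self.impact

def triage(findings, noise_floor=0.1):
    """Drop noise, return the remaining findings ranked by severity."""
    real = [f for f in findings if f.severity >= noise_floor]
    return sorted(real, key=lambda f: f.severity, reverse=True)

findings = [
    Finding("correctness", "off-by-one in pagination", 0.9, 0.4),
    Finding("security", "unvalidated webhook payload", 0.3, 0.9),
    Finding("style", "inconsistent naming", 0.9, 0.05),
]
ranked = triage(findings)   # style finding falls below the noise floor
```

A multiplicative score is one defensible choice among several; the point is that "severity" becomes a defined quantity instead of a gut call.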

**Existing commands:** `/critic`, `/critic-d`
**Priority:** High — used constantly, currently ad hoc

---

### 4. Structural Decomposition

**Transform:** Complex whole → Manageable independent parts

**What it is:** A framework for breaking complex tasks, systems, or problems into pieces that can be worked on independently. This is the #1 bottleneck for long autonomous tasks — bad decomposition means pieces that can't be implemented independently, hidden dependencies that surface mid-build, and compounding errors across subtasks.

**Components to define:**
- How to detect natural boundaries (where does one piece end and another begin?)
- How to verify independence (can piece A be built without piece B existing?)
- How to map dependencies (what ordering constraints exist?)
- How to size pieces (when is a piece too big? too small?)
- What are the failure modes? (premature decomposition before understanding, splitting at the wrong boundaries, hidden coupling between "independent" pieces, uniform sizing that ignores natural complexity variation)
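
The dependency-mapping and independence checks above can be sketched with a plain dependency mapping and Python's standard-library `graphlib`. The piece names and helpers here are illustrative assumptions, not an existing `/decompose` implementation; the useful property is that a topological sort surfaces both the ordering constraints and any hidden cycle (i.e. coupling with no valid build order).

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical decomposition: each piece maps to the pieces it depends on.
pieces = {
    "schema":  set(),
    "storage": {"schema"},
    "api":     {"storage", "schema"},
    "ui":      {"api"},
}

def build_order(deps):
    """Return a valid build order, or fail loudly on hidden coupling."""
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError as e:
        raise ValueError(f"hidden coupling, no valid order: {e.args[1]}")

def is_independent(piece, deps):
    """A piece is independently buildable iff it depends on nothing."""
    return not deps[piece]

order = build_order(pieces)   # schema first, ui last
```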

**Existing commands:** `/decompose` (planned, not yet built)
**Priority:** High — the bottleneck for every complex task

---

### 5. Structural Diagnosis

**Transform:** Failure → Root cause localization

**What it is:** A framework for systematically finding where and why things went wrong. Not "what broke" but "why it broke, at which layer, through what causal chain." Every debugging session needs this — currently it's pure intuition and grep.

**Components to define:**
- Symptom detection (what's the observable failure?)
- Hypothesis generation (what could cause this?)
- Hypothesis testing (how to confirm or eliminate each hypothesis?)
- Root cause isolation (distinguishing symptoms from causes, proximate causes from root causes)
- Layer attribution (which alignment layer did the failure originate at?)
- What are the failure modes? (treating symptoms not causes, stopping at the first explanation, misattributing the layer, confirmation bias toward familiar bugs, assuming the most recent change is the cause)
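
The hypothesis-testing step can be sketched as elimination: each hypothesis predicts a set of observations, and any hypothesis whose predictions are not all actually observed is discarded. The hypotheses and observations below are an invented debugging scenario, not output of any existing command.

```python
# Hypothetical scenario: each hypothesis -> the observations it predicts.
hypotheses = {
    "recent deploy broke config": {"500s started at 14:02", "config hash changed"},
    "connection pool exhausted":  {"500s started at 14:02", "pool at max"},
    "upstream API outage":        {"500s started at 14:02", "upstream status red"},
}

observations = {"500s started at 14:02", "pool at max"}

def surviving(hypotheses, observations):
    """Keep only hypotheses whose every prediction is actually observed."""
    return [name for name, predicted in hypotheses.items()
            if predicted <= observations]   # subset test = all predictions hold

candidates = surviving(hypotheses, observations)
```

Note how this counters two listed failure modes: stopping at the first explanation (all hypotheses are checked) and recency bias (the "recent deploy" hypothesis gets no special status).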

**Existing commands:** `/verify`, `/probe` (planned)
**Priority:** High — every debugging session, currently unstructured

---

### 6. Structural Exploration

**Transform:** Unknown territory → Mapped understanding

**What it is:** A framework for systematically mapping unfamiliar territory — codebases, domains, problem spaces. Different from Sensemaking, which clarifies what's ambiguous: Exploration maps what's unknown. You don't know what you don't know; the framework gives you a method for discovering it.

**Components to define:**
- Breadth-first scan (what exists at the surface level?)
- Depth probes (where should we go deeper?)
- Boundary detection (where does this territory end?)
- Knowledge gap identification (what do we NOT know after scanning?)
- Confidence mapping (what are we sure about, what's uncertain, what's unknown?)
- What are the failure modes? (exploring too deep before scanning breadth, mistaking surface understanding for deep understanding, stopping exploration when it feels like "enough" rather than when gaps are closed)
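
Confidence mapping and the gap-driven stopping rule could be sketched like this; the area names and the three-level scale are illustrative assumptions:

```python
# Hypothetical confidence map over a codebase being explored.
confidence = {
    "auth module":     "known",      # read the code, traced a request
    "billing module":  "uncertain",  # skimmed, assumptions untested
    "legacy importer": "unknown",    # never opened
}

def next_probe(conf_map):
    """Probe unknowns first, then uncertain areas; None means fully mapped."""
    for level in ("unknown", "uncertain"):
        for area, seen in conf_map.items():
            if seen == level:
                return area
    return None

target = next_probe(confidence)   # exploration stops when this is None
```

The stopping condition is explicit — `next_probe` returning `None` — rather than the "feels like enough" failure mode named above.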

**Existing commands:** `/arch-small-summary`, `/arch-intro`, `/arch-traces`, `/arch-traces-2`
**Priority:** Medium — the archaeology commands work well, but a framework would make them more principled

---

### 7. Structural Reflection

**Transform:** Completed work → Extracted patterns and insights

**What it is:** A framework for learning from what was done. Not just "what happened" (that's a report) but "what does it mean, what patterns emerged, what should change going forward." The meta-process that makes all other processes improve over time.

**Components to define:**
- Timeline reconstruction (what actually happened, in what order?)
- Pattern extraction (what repeated? what was surprising? what was predicted correctly/incorrectly?)
- Decision evaluation (which decisions were good? which looked good but weren't? which looked bad but were right?)
- Trajectory identification (where is the project heading based on the arc of work, not just the latest commit?)
- What are the failure modes? (recency bias, success bias, confusing activity with progress, reflecting on what was done instead of what was learned)

**Existing commands:** `/overview-report`, `/compare-intent` (planned)
**Priority:** Medium — valuable but less frequently needed than critique, decomposition, or diagnosis

---

### 8. Structural Recovery

**Transform:** Broken state → Restored function

**What it is:** A framework for systematically getting back to a known-good state after a failure. Where Diagnosis finds the problem, Recovery fixes it — with minimum collateral damage and maximum confidence that the fix is complete.

**Components to define:**
- Damage assessment (what exactly is broken? what still works?)
- Known-good state identification (what are we restoring TO?)
- Rollback vs. forward-fix decision (revert or patch?)
- Minimal fix path (smallest change that restores function)
- Verification of restoration (how do we confirm it's actually fixed?)
- What are the failure modes? (incomplete recovery, fixing the symptom not the cause, introducing new problems during recovery, restoring to a state that was already degraded)
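
The rollback-vs-forward-fix decision could be sketched as a small policy function. The three-commit threshold and the "bisect" fallback are illustrative assumptions, not an established rule:

```python
def recovery_strategy(has_known_good: bool,
                      commits_since_good: int,
                      fix_understood: bool) -> str:
    """Pick a recovery path; prefer the cheapest high-confidence restore."""
    if has_known_good and commits_since_good <= 3:
        return "rollback"      # little work lost, restore with confidence
    if fix_understood:
        return "forward-fix"   # patch the root cause in place
    return "bisect"            # neither is safe yet: localize the break first
```

The third branch encodes a guard against a listed failure mode: if you don't understand the fix, patching forward is how new problems get introduced during recovery.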

**Existing commands:** None — identified gap
**Priority:** Lower — important but more operational than philosophical

---

### 9. Structural Evaluation

**Transform:** Output → Intent comparison

**What it is:** A framework for verifying that what was built matches what was intended. Not "does it work?" (that's testing) but "does it do what was asked for?" Catches the case where the implementation is correct but doesn't match intent — the right code for the wrong problem.

**Components to define:**
- Intent extraction (what was actually asked for? success criteria, implied requirements, unstated expectations)
- Output mapping (what was actually built? what does it do?)
- Gap analysis (what was asked for but not built? what was built but not asked for?)
- Alignment scoring (what percentage of the intent is fulfilled?)
- What are the failure modes? (vague intent making comparison impossible, measuring what's easy instead of what matters, confusing "working" with "correct")
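
Once intent and output are itemized, gap analysis and alignment scoring fall out of plain set comparison; the items below are hypothetical:

```python
# Hypothetical itemized intent (from the request) and output (from the build).
intent = {"export to CSV", "filter by date", "sort by amount"}
built  = {"export to CSV", "sort by amount", "dark mode toggle"}

missing     = intent - built            # asked for but not built
unrequested = built - intent            # built but not asked for
alignment   = len(intent & built) / len(intent)   # fraction of intent fulfilled
```

This also makes the first failure mode concrete: if intent can't be itemized because the request was vague, the comparison has nothing to operate on.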

**Existing commands:** `/compare-intent` (planned)
**Priority:** Medium — narrow scope, partially covered by critique

---

## Discipline Relationships

```
Exploration → Sensemaking → Innovation
        ↓
   Decomposition → Critique → (implement) → Evaluation
                                  ↓
                           Diagnosis → Recovery

Reflection spans all — it operates on the output of any framework
```

Each discipline is standalone and domain-agnostic. Together they cover the cognitive operations that development requires. The AlignStack Agent uses them as the methodology behind its seven modes.