# /align-modes — Multi-Modal Task Assessment

For a given task, assess what is needed across all seven intent modes and all six alignment layers. This produces a complete picture of what the task requires — not just what to build, but what to explore, what to innovate, what might break, what to maintain, and what to reflect on.

## Additional Input/Instructions

$ARGUMENTS

---

## Instructions
| 12 | + |
### Step 1: Identify the Task

Determine which task to assess:
- If a path is given (e.g., `devdocs/scoped/feat_3_auth/`), read its docs
- If a task description is given, use it directly
- If nothing is given, check recent conversation context
- If still unclear, ask

Read the task's desc.md, plan, critic, and any related docs. Understand what the task is trying to achieve.
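Concretely, invocation might take any of these forms (the path reuses the hypothetical example above; the quoted description is likewise illustrative):

```text
/align-modes devdocs/scoped/feat_3_auth/     # assess the task documented at this path
/align-modes "add rate limiting to login"    # assess a task described inline
/align-modes                                 # infer the task from recent conversation
```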
| 22 | + |
### Step 2: Assess Each Mode

For the given task, analyze what is needed across each mode. Read relevant codebase files and devdocs to ground each assessment in reality — do not speculate without checking.

#### Exploration — "What do we need to understand first?"

| Layer | What needs exploring? |
|-------|----------------------|
| Workspace | What codebase areas need reading before starting? What context is missing? |
| Task | What about the task is unclear or ambiguous? What assumptions haven't been validated? |
| Action-Space | What approaches exist for this? What has been tried before in this codebase? |
| Action-Set | What implementation patterns does this codebase use for similar work? |
| Coherence | What existing systems does this task touch? What are their current states? |
| Outcome | What does success look like? Are the criteria measurable and testable? |
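As a sketch of the expected granularity, a filled-in Exploration table for a hypothetical SSO task might begin like this (the task, module, and file names are invented for illustration):

```markdown
| Layer | What needs exploring? |
|-------|----------------------|
| Workspace | `auth/middleware.ts` and its session store haven't been read this session — read both before planning. |
| Task | "Support SSO" is ambiguous: SAML, OIDC, or both? This assumption hasn't been validated with the user. |
```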
| 37 | + |
#### Alignment — "What needs to be correct?"

| Layer | What alignment is needed? |
|-------|--------------------------|
| Workspace | Is the right context loaded? Are archaeology docs fresh for the relevant areas? |
| Task | Is intent clear? Is scope bounded? Are success criteria specific enough to verify? |
| Action-Space | Are the viable approaches identified and evaluated? Is the chosen approach justified? |
| Action-Set | Does a plan exist? Is it sequenced correctly? Are dependencies explicit? |
| Coherence | What existing features, modules, or contracts must not break? What needs checking after implementation? |
| Outcome | How will we verify the result matches the intent? What tests or probes are needed? |

#### Innovation — "Where might we need novel approaches?"

| Layer | Where might innovation be needed? |
|-------|-----------------------------------|
| Workspace | Does the task require new tooling, environments, or context structures? |
| Task | Is the task framing correct, or might the real problem be different from what's stated? |
| Action-Space | Are existing approaches sufficient, or does this task require something that doesn't exist yet? |
| Action-Set | Can this be built with standard patterns, or does it need novel combinations? |
| Coherence | Does this task require deliberately breaking existing patterns to establish better ones? |
| Outcome | Should the success criteria be redefined? Is the original goal still the right goal? |

#### Diagnostic — "What could go wrong?"

| Layer | What should we watch for? |
|-------|--------------------------|
| Workspace | What context could be stale or misleading? |
| Task | Where is the task description most likely to be misunderstood? |
| Action-Space | What approach pitfalls exist? What has failed before in similar tasks? |
| Action-Set | What plan steps are most likely to fail? Where is the plan weakest? |
| Coherence | What is most likely to break? What are the fragile points in affected modules? |
| Outcome | What's the most likely way the result could look correct but be wrong? |

#### Maintenance — "What needs upkeep?"

| Layer | What maintenance is relevant? |
|-------|------------------------------|
| Workspace | Are archaeology docs fresh for the areas this task touches? |
| Task | Are there stale or superseded task docs that should be archived? |
| Action-Space | Are previous approach evaluations still valid? |
| Action-Set | Do existing plans for related work still match the codebase? |
| Coherence | Has anything drifted since the task was planned? |
| Outcome | Are previous verification results still valid? |

#### Recovery — "What's the fallback?"

| Layer | What recovery options exist? |
|-------|----------------------------|
| Workspace | Can we restore context if something goes wrong? |
| Task | If the task needs re-scoping mid-implementation, what's the rollback point? |
| Action-Space | If the chosen approach fails, what's the next best alternative? |
| Action-Set | If implementation breaks things, what's the minimal revert? |
| Coherence | What's the known-good state we'd restore to? |
| Outcome | If the result doesn't match intent, what's the recovery path? |

#### Reflection — "What should we learn from this?"

| Layer | What's worth reflecting on? |
|-------|---------------------------|
| Workspace | Did the context setup work well? What would make it better next time? |
| Task | Did the task description capture the real need? What was missing? |
| Action-Space | Were the right approaches considered? Did we miss better options? |
| Action-Set | Did the plan hold up during implementation? Where did it deviate? |
| Coherence | Did we anticipate the right risks? What surprised us? |
| Outcome | Did success criteria actually capture what mattered? |
| 103 | + |
### Step 3: Synthesize

After the per-mode assessment, produce a synthesis:

```markdown
## Synthesis

### Critical items before starting
[What MUST be done before implementation — missing exploration, unresolved alignment gaps, potential innovation needs]

### Highest risk areas
[Where things are most likely to go wrong — from diagnostic assessment]

### Mode sequence recommendation
[Suggested order of modes for this task. E.g., "Exploration first (modules X and Y need reading), then Alignment (plan needs critique), then proceed to implementation. Diagnostic focus on auth middleware during implementation."]

### Maintenance actions
[Any housekeeping needed before or after this task]

### Recovery plan
[If things go wrong, here's the fallback]
```
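Filled in for a hypothetical auth task, the synthesis might open like this (all file names and specifics are illustrative):

```markdown
## Synthesis

### Critical items before starting
- Read `auth/middleware.ts` — its session handling is undocumented and the plan depends on it.
- Resolve the SSO protocol question (SAML vs. OIDC) with the user before implementation.
```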
| 126 | + |
---

## Rules

1. **Check reality, don't speculate.** Read the actual codebase files and devdocs for each assessment. "This module might be fragile" is weak. "This module has 3 undocumented side effects (from traces)" is useful.
2. **Be concise per cell.** Each mode × layer cell should be 1-3 sentences. The synthesis is where you elaborate.
3. **Skip cells that don't apply.** If innovation is clearly not needed for this task, say "Not applicable — standard approaches are sufficient" and move on. Don't force novelty.
4. **The synthesis is the most valuable part.** The tables are the analysis. The synthesis is the actionable output — what to do, in what order, watching for what.
| 135 | + |
---

## Output

Full multi-modal assessment printed in conversation. Not saved to file by default — this is a working assessment, not an artifact. The user can save it if they want.