diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
new file mode 100644
index 0000000..1a19009
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -0,0 +1,77 @@
+name: πŸ› Bug Report
+description: Report a bug or unexpected behavior
+title: "[Bug]: "
+labels: ["bug", "triage"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for taking the time to report a bug! Please fill out the form below.
+
+  - type: textarea
+    id: description
+    attributes:
+      label: Description
+      description: A clear description of what the bug is.
+      placeholder: What happened?
+    validations:
+      required: true
+
+  - type: textarea
+    id: steps
+    attributes:
+      label: Steps to Reproduce
+      description: Steps to reproduce the behavior.
+      placeholder: |
+        1. Run `git-msg generate --dry-run`
+        2. See error...
+    validations:
+      required: true
+
+  - type: textarea
+    id: expected
+    attributes:
+      label: Expected Behavior
+      description: What did you expect to happen?
+    validations:
+      required: true
+
+  - type: textarea
+    id: actual
+    attributes:
+      label: Actual Behavior
+      description: What actually happened?
+    validations:
+      required: true
+
+  - type: input
+    id: version
+    attributes:
+      label: git-msg Version
+      description: Output of `git-msg --version`
+      placeholder: "v0.1.0"
+    validations:
+      required: true
+
+  - type: dropdown
+    id: os
+    attributes:
+      label: Operating System
+      options:
+        - macOS
+        - Linux
+    validations:
+      required: true
+
+  - type: textarea
+    id: logs
+    attributes:
+      label: Logs / Error Output
+      description: Any relevant error messages or logs.
+      render: shell
+
+  - type: textarea
+    id: additional
+    attributes:
+      label: Additional Context
+      description: Any other context about the problem.
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 0000000..b440bba
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: πŸ’¬ Discussions
+    url: https://github.com/madstone-tech/git-msg/discussions
+    about: Ask questions and discuss ideas
+  - name: πŸ“– Documentation
+    url: https://github.com/madstone-tech/git-msg#readme
+    about: Read the documentation
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml
new file mode 100644
index 0000000..db1fc35
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -0,0 +1,53 @@
+name: ✨ Feature Request
+description: Suggest a new feature or enhancement
+title: "[Feature]: "
+labels: ["enhancement"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for suggesting a feature! Please describe your idea below.
+
+  - type: textarea
+    id: problem
+    attributes:
+      label: Problem Statement
+      description: What problem does this feature solve?
+      placeholder: I'm always frustrated when...
+    validations:
+      required: true
+
+  - type: textarea
+    id: solution
+    attributes:
+      label: Proposed Solution
+      description: Describe the solution you'd like.
+      placeholder: It would be great if git-msg could...
+    validations:
+      required: true
+
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: Alternatives Considered
+      description: Any alternative solutions or workarounds you've considered?
+
+  - type: dropdown
+    id: interface
+    attributes:
+      label: Which area would this affect?
+      multiple: true
+      options:
+        - generate (commit message generation)
+        - config (configuration management)
+        - prompt (template management)
+        - hook (git hook lifecycle)
+        - LLM provider (new provider or provider behaviour)
+        - Setup wizard
+        - Other
+
+  - type: textarea
+    id: additional
+    attributes:
+      label: Additional Context
+      description: Any other context, mockups, or examples.
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 6c53c04..eb956f8 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -66,7 +66,7 @@ jobs:
       - name: golangci-lint
         uses: golangci/golangci-lint-action@v7
         with:
-          version: v2.5.0
+          version: v2.11.3
 
   constitution:
     name: Constitution Audit
diff --git a/.gitignore b/.gitignore
index 6bdabfe..f3b06bf 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,3 +27,9 @@ Thumbs.db
 
 # Logs
 *.log
+
+# Internal development tooling (local only)
+.opencode/
+.specify/
+specs/
+.claude/
diff --git a/.golangci.yml b/.golangci.yml
index d8358b6..45e4772 100644
--- a/.golangci.yml
+++ b/.golangci.yml
@@ -3,7 +3,6 @@ version: "2"
 
 run:
   timeout: 5m
-  go: "1.26"
   tests: false
 
 linters:
diff --git a/.goreleaser.yaml b/.goreleaser.yaml
index 78dd9fb..2e76ebf 100644
--- a/.goreleaser.yaml
+++ b/.goreleaser.yaml
@@ -118,7 +118,7 @@ release:
     sudo mv git-msg /usr/local/bin/
 
     # Or using Go
-    go install github.com/madstone0-0/git-msg@{{ .Tag }}
+    go install github.com/madstone-tech/git-msg@{{ .Tag }}
 
     # Or using Homebrew
     brew tap madstone-tech/tap
diff --git a/.opencode/command/speckit.analyze.md b/.opencode/command/speckit.analyze.md
deleted file mode 100644
index 98b04b0..0000000
--- a/.opencode/command/speckit.analyze.md
+++ /dev/null
@@ -1,184 +0,0 @@
----
-description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
----
-
-## User Input
-
-```text
-$ARGUMENTS
-```
-
-You **MUST** consider the user input before proceeding (if not empty).
-
-## Goal
-
-Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
-
-## Operating Constraints
-
-**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
-
-**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasksβ€”not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
-
-## Execution Steps
-
-### 1. Initialize Analysis Context
-
-Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
-
-- SPEC = FEATURE_DIR/spec.md
-- PLAN = FEATURE_DIR/plan.md
-- TASKS = FEATURE_DIR/tasks.md
-
-Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
-For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-
-### 2. Load Artifacts (Progressive Disclosure)
-
-Load only the minimal necessary context from each artifact:
-
-**From spec.md:**
-
-- Overview/Context
-- Functional Requirements
-- Non-Functional Requirements
-- User Stories
-- Edge Cases (if present)
-
-**From plan.md:**
-
-- Architecture/stack choices
-- Data Model references
-- Phases
-- Technical constraints
-
-**From tasks.md:**
-
-- Task IDs
-- Descriptions
-- Phase grouping
-- Parallel markers [P]
-- Referenced file paths
-
-**From constitution:**
-
-- Load `.specify/memory/constitution.md` for principle validation
-
-### 3. Build Semantic Models
-
-Create internal representations (do not include raw artifacts in output):
-
-- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" β†’ `user-can-upload-file`)
-- **User story/action inventory**: Discrete user actions with acceptance criteria
-- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
-- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
-
-### 4. Detection Passes (Token-Efficient Analysis)
-
-Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
-
-#### A. Duplication Detection
-
-- Identify near-duplicate requirements
-- Mark lower-quality phrasing for consolidation
-
-#### B. Ambiguity Detection
-
-- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
-- Flag unresolved placeholders (TODO, TKTK, ???, ``, etc.)
-
-#### C. Underspecification
-
-- Requirements with verbs but missing object or measurable outcome
-- User stories missing acceptance criteria alignment
-- Tasks referencing files or components not defined in spec/plan
-
-#### D. Constitution Alignment
-
-- Any requirement or plan element conflicting with a MUST principle
-- Missing mandated sections or quality gates from constitution
-
-#### E. Coverage Gaps
-
-- Requirements with zero associated tasks
-- Tasks with no mapped requirement/story
-- Non-functional requirements not reflected in tasks (e.g., performance, security)
-
-#### F. Inconsistency
-
-- Terminology drift (same concept named differently across files)
-- Data entities referenced in plan but absent in spec (or vice versa)
-- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
-- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)
-
-### 5. Severity Assignment
-
-Use this heuristic to prioritize findings:
-
-- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
-- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
-- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
-- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
-
-### 6. Produce Compact Analysis Report
-
-Output a Markdown report (no file writes) with the following structure:
-
-## Specification Analysis Report
-
-| ID | Category | Severity | Location(s) | Summary | Recommendation |
-|----|----------|----------|-------------|---------|----------------|
-| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
-
-(Add one row per finding; generate stable IDs prefixed by category initial.)
-
-**Coverage Summary Table:**
-
-| Requirement Key | Has Task? | Task IDs | Notes |
-|-----------------|-----------|----------|-------|
-
-**Constitution Alignment Issues:** (if any)
-
-**Unmapped Tasks:** (if any)
-
-**Metrics:**
-
-- Total Requirements
-- Total Tasks
-- Coverage % (requirements with >=1 task)
-- Ambiguity Count
-- Duplication Count
-- Critical Issues Count
-
-### 7. Provide Next Actions
-
-At end of report, output a concise Next Actions block:
-
-- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
-- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
-- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
-
-### 8. Offer Remediation
-
-Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
-
-## Operating Principles
-
-### Context Efficiency
-
-- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
-- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
-- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
-- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
-
-### Analysis Guidelines
-
-- **NEVER modify files** (this is read-only analysis)
-- **NEVER hallucinate missing sections** (if absent, report them accurately)
-- **Prioritize constitution violations** (these are always CRITICAL)
-- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
-- **Report zero issues gracefully** (emit success report with coverage statistics)
-
-## Context
-
-$ARGUMENTS
diff --git a/.opencode/command/speckit.checklist.md b/.opencode/command/speckit.checklist.md
deleted file mode 100644
index b7624e2..0000000
--- a/.opencode/command/speckit.checklist.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-description: Generate a custom checklist for the current feature based on user requirements.
----
-
-## Checklist Purpose: "Unit Tests for English"
-
-**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
-
-**NOT for verification/testing**:
-
-- ❌ NOT "Verify the button clicks correctly"
-- ❌ NOT "Test error handling works"
-- ❌ NOT "Confirm the API returns 200"
-- ❌ NOT checking if code/implementation matches the spec
-
-**FOR requirements quality validation**:
-
-- βœ… "Are visual hierarchy requirements defined for all card types?" (completeness)
-- βœ… "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
-- βœ… "Are hover state requirements consistent across all interactive elements?" (consistency)
-- βœ… "Are accessibility requirements defined for keyboard navigation?" (coverage)
-- βœ… "Does the spec define what happens when logo image fails to load?" (edge cases)
-
-**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
-
-## User Input
-
-```text
-$ARGUMENTS
-```
-
-You **MUST** consider the user input before proceeding (if not empty).
-
-## Execution Steps
-
-1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
-   - All file paths must be absolute.
-   - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-
-2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
-   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
-   - Only ask about information that materially changes checklist content
-   - Be skipped individually if already unambiguous in `$ARGUMENTS`
-   - Prefer precision over breadth
-
-   Generation algorithm:
-
-   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
-   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
-   3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
-   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
-   5. Formulate questions chosen from these archetypes:
-      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
-      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
-      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
-      - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
-      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
-      - Scenario class gap (e.g., "No recovery flows detectedβ€”are rollback / partial failure paths in scope?")
-
-   Question formatting rules:
-
-   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
-   - Limit to A–E options maximum; omit table if a free-form answer is clearer
-   - Never ask the user to restate what they already said
-   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
-
-   Defaults when interaction impossible:
-
-   - Depth: Standard
-   - Audience: Reviewer (PR) if code-related; Author otherwise
-   - Focus: Top 2 relevance clusters
-
-   Output the questions (label Q1/Q2/Q3). After answers: if β‰₯2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
-
-3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
-   - Derive checklist theme (e.g., security, review, deploy, ux)
-   - Consolidate explicit must-have items mentioned by user
-   - Map focus selections to category scaffolding
-   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)
-
-4. **Load feature context**: Read from FEATURE_DIR:
-   - spec.md: Feature requirements and scope
-   - plan.md (if exists): Technical details, dependencies
-   - tasks.md (if exists): Implementation tasks
-
-   **Context Loading Strategy**:
-
-   - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
-   - Prefer summarizing long sections into concise scenario/requirement bullets
-   - Use progressive disclosure: add follow-on retrieval only if gaps detected
-   - If source docs are large, generate interim summary items instead of embedding raw text
-
-5. **Generate checklist** - Create "Unit Tests for Requirements":
-   - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
-   - Generate unique checklist filename:
-     - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
-     - Format: `[domain].md`
-   - File handling behavior:
-     - If file does NOT exist: Create new file and number items starting from CHK001
-     - If file exists: Append new items to existing file, continuing from the last CHK ID (e.g., if last item is CHK015, start new items at CHK016)
-     - Never delete or replace existing checklist content - always preserve and append
-
-   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
-
-   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
-
-   - **Completeness**: Are all necessary requirements present?
-   - **Clarity**: Are requirements unambiguous and specific?
-   - **Consistency**: Do requirements align with each other?
-   - **Measurability**: Can requirements be objectively verified?
-   - **Coverage**: Are all scenarios/edge cases addressed?
-
-   **Category Structure** - Group items by requirement quality dimensions:
-
-   - **Requirement Completeness** (Are all necessary requirements documented?)
-   - **Requirement Clarity** (Are requirements specific and unambiguous?)
-   - **Requirement Consistency** (Do requirements align without conflicts?)
-   - **Acceptance Criteria Quality** (Are success criteria measurable?)
-   - **Scenario Coverage** (Are all flows/cases addressed?)
-   - **Edge Case Coverage** (Are boundary conditions defined?)
-   - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
-   - **Dependencies & Assumptions** (Are they documented and validated?)
-   - **Ambiguities & Conflicts** (What needs clarification?)
-
-   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
-
-   ❌ **WRONG** (Testing implementation):
-
-   - "Verify landing page displays 3 episode cards"
-   - "Test hover states work on desktop"
-   - "Confirm logo click navigates home"
-
-   βœ… **CORRECT** (Testing requirements quality):
-
-   - "Are the exact number and layout of featured episodes specified?" [Completeness]
-   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
-   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
-   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
-   - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
-   - "Are loading states defined for asynchronous episode data?" [Completeness]
-   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
-
-   **ITEM STRUCTURE**:
-
-   Each item should follow this pattern:
-
-   - Question format asking about requirement quality
-   - Focus on what's WRITTEN (or not written) in the spec/plan
-   - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
-   - Reference spec section `[Spec Β§X.Y]` when checking existing requirements
-   - Use `[Gap]` marker when checking for missing requirements
-
-   **EXAMPLES BY QUALITY DIMENSION**:
-
-   Completeness:
-
-   - "Are error handling requirements defined for all API failure modes? [Gap]"
-   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
-   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
-
-   Clarity:
-
-   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec Β§NFR-2]"
-   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec Β§FR-5]"
-   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec Β§FR-4]"
-
-   Consistency:
-
-   - "Do navigation requirements align across all pages? [Consistency, Spec Β§FR-10]"
-   - "Are card component requirements consistent between landing and detail pages? [Consistency]"
-
-   Coverage:
-
-   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
-   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
-   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
-
-   Measurability:
-
-   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec Β§FR-1]"
-   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec Β§FR-2]"
-
-   **Scenario Classification & Coverage** (Requirements Quality Focus):
-
-   - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
-   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
-   - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
-   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
-
-   **Traceability Requirements**:
-
-   - MINIMUM: β‰₯80% of items MUST include at least one traceability reference
-   - Each item should reference: spec section `[Spec Β§X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
-   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
-
-   **Surface & Resolve Issues** (Requirements Quality Problems):
-
-   Ask questions about the requirements themselves:
-
-   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec Β§NFR-1]"
-   - Conflicts: "Do navigation requirements conflict between Β§FR-10 and Β§FR-10a? [Conflict]"
-   - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
-   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
-   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
-
-   **Content Consolidation**:
-
-   - Soft cap: If raw candidate items > 40, prioritize by risk/impact
-   - Merge near-duplicates checking the same requirement aspect
-   - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
-
-   **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
-
-   - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
-   - ❌ References to code execution, user actions, system behavior
-   - ❌ "Displays correctly", "works properly", "functions as expected"
-   - ❌ "Click", "navigate", "render", "load", "execute"
-   - ❌ Test cases, test plans, QA procedures
-   - ❌ Implementation details (frameworks, APIs, algorithms)
-
-   **βœ… REQUIRED PATTERNS** - These test requirements quality:
-
-   - βœ… "Are [requirement type] defined/specified/documented for [scenario]?"
-   - βœ… "Is [vague term] quantified/clarified with specific criteria?"
-   - βœ… "Are requirements consistent between [section A] and [section B]?"
-   - βœ… "Can [requirement] be objectively measured/verified?"
-   - βœ… "Are [edge cases/scenarios] addressed in requirements?"
-   - βœ… "Does the spec define [missing aspect]?"
-
-6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001.
-
-7. **Report**: Output full path to checklist file, item count, and summarize whether the run created a new file or appended to an existing one. Summarize:
-   - Focus areas selected
-   - Depth level
-   - Actor/timing
-   - Any explicit user-specified must-have items incorporated
-
-**Important**: Each `/speckit.checklist` command invocation uses a short, descriptive checklist filename and either creates a new file or appends to an existing one. This allows:
-
-- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
-- Simple, memorable filenames that indicate checklist purpose
-- Easy identification and navigation in the `checklists/` folder
-
-To avoid clutter, use descriptive types and clean up obsolete checklists when done.
-
-## Example Checklist Types & Sample Items
-
-**UX Requirements Quality:** `ux.md`
-
-Sample items (testing the requirements, NOT the implementation):
-
-- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec Β§FR-1]"
-- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec Β§FR-1]"
-- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
-- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
-- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
-- "Can 'prominent display' be objectively measured? [Measurability, Spec Β§FR-4]"
-
-**API Requirements Quality:** `api.md`
-
-Sample items:
-
-- "Are error response formats specified for all failure scenarios? [Completeness]"
-- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
-- "Are authentication requirements consistent across all endpoints? [Consistency]"
-- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
-- "Is versioning strategy documented in requirements? [Gap]"
-
-**Performance Requirements Quality:** `performance.md`
-
-Sample items:
-
-- "Are performance requirements quantified with specific metrics? [Clarity]"
-- "Are performance targets defined for all critical user journeys? [Coverage]"
-- "Are performance requirements under different load conditions specified? [Completeness]"
-- "Can performance requirements be objectively measured? [Measurability]"
-- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
-
-**Security Requirements Quality:** `security.md`
-
-Sample items:
-
-- "Are authentication requirements specified for all protected resources? [Coverage]"
-- "Are data protection requirements defined for sensitive information? [Completeness]"
-- "Is the threat model documented and requirements aligned to it? [Traceability]"
-- "Are security requirements consistent with compliance obligations? [Consistency]"
-- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
-
-## Anti-Examples: What NOT To Do
-
-**❌ WRONG - These test implementation, not requirements:**
-
-```markdown
-- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec Β§FR-001]
-- [ ] CHK002 - Test hover states work correctly on desktop [Spec Β§FR-003]
-- [ ] CHK003 - Confirm logo click navigates to home page [Spec Β§FR-010]
-- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec Β§FR-005]
-```
-
-**βœ… CORRECT - These test requirements quality:**
-
-```markdown
-- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec Β§FR-001]
-- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec Β§FR-003]
-- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec Β§FR-010]
-- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec Β§FR-005]
-- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
-- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec Β§FR-001]
-```
-
-**Key Differences:**
-
-- Wrong: Tests if the system works correctly
-- Correct: Tests if the requirements are written correctly
-- Wrong: Verification of behavior
-- Correct: Validation of requirement quality
-- Wrong: "Does it do X?"
-- Correct: "Is X clearly specified?"
diff --git a/.opencode/command/speckit.clarify.md b/.opencode/command/speckit.clarify.md
deleted file mode 100644
index 657439f..0000000
--- a/.opencode/command/speckit.clarify.md
+++ /dev/null
@@ -1,181 +0,0 @@
----
-description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
-handoffs:
-  - label: Build Technical Plan
-    agent: speckit.plan
-    prompt: Create a plan for the spec. I am building with...
----
-
-## User Input
-
-```text
-$ARGUMENTS
-```
-
-You **MUST** consider the user input before proceeding (if not empty).
-
-## Outline
-
-Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
-
-Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
-
-Execution steps:
-
-1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
-   - `FEATURE_DIR`
-   - `FEATURE_SPEC`
-   - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
-   - If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
-   - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-
-2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
-
-   Functional Scope & Behavior:
-
-   - Core user goals & success criteria
-   - Explicit out-of-scope declarations
-   - User roles / personas differentiation
-
-   Domain & Data Model:
-
-   - Entities, attributes, relationships
-   - Identity & uniqueness rules
-   - Lifecycle/state transitions
-   - Data volume / scale assumptions
-
-   Interaction & UX Flow:
-
-   - Critical user journeys / sequences
-   - Error/empty/loading states
-   - Accessibility or localization notes
-
-   Non-Functional Quality Attributes:
-
-   - Performance (latency, throughput targets)
-   - Scalability (horizontal/vertical, limits)
-   - Reliability & availability (uptime, recovery expectations)
-   - Observability (logging, metrics, tracing signals)
-   - Security & privacy (authN/Z, data protection, threat assumptions)
-   - Compliance / regulatory constraints (if any)
-
-   Integration & External Dependencies:
-
-   - External services/APIs and failure modes
-   - Data import/export formats
-   - Protocol/versioning assumptions
-
-   Edge Cases & Failure Handling:
-
-   - Negative scenarios
-   - Rate limiting / throttling
-   - Conflict resolution (e.g., concurrent edits)
-
-   Constraints & Tradeoffs:
-
-   - Technical constraints (language, storage, hosting)
-   - Explicit tradeoffs or rejected alternatives
-
-   Terminology & Consistency:
-
-   - Canonical glossary terms
-   - Avoided synonyms / deprecated terms
-
-   Completion Signals:
-
-   - Acceptance criteria testability
-   - Measurable Definition of Done style indicators
-
-   Misc / Placeholders:
-
-   - TODO markers / unresolved decisions
-   - Ambiguous adjectives ("robust", "intuitive") lacking quantification
-
-   For each category with Partial or Missing status, add a candidate question opportunity unless:
-
-   - Clarification would not materially change implementation or validation strategy
-   - Information is better deferred to planning phase (note internally)
-
-3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
-   - Maximum of 5 total questions across the whole session.
-   - Each question must be answerable with EITHER:
-     - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
-     - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
-   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
-   - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
-   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
-   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
-   - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
-
-4. Sequential questioning loop (interactive):
-   - Present EXACTLY ONE question at a time.
-   - For multiple‑choice questions:
-     - **Analyze all options** and determine the **most suitable option** based on:
-       - Best practices for the project type
-       - Common patterns in similar implementations
-       - Risk reduction (security, performance, maintainability)
-       - Alignment with any explicit project goals or constraints visible in the spec
-     - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
-     - Format as: `**Recommended:** Option [X] - `
-     - Then render all options as a Markdown table:
-
-       | Option | Description |
-       |--------|-------------|
-       | A |