
Commit 44cc88c

Add roadmap and tests templates for feature specification and progress tracking
1 parent 8d98d6f commit 44cc88c

4 files changed

Lines changed: 391 additions & 0 deletions


templates/commands/roadmap.md

Lines changed: 41 additions & 0 deletions
---
description: Create or update the project feature roadmap, syncing status with existing specs.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Manage the project roadmap at `/memory/roadmap.md`.

### If the roadmap doesn't exist:

1. Copy the template from `/templates/roadmap-template.md` to `/memory/roadmap.md`
2. Ask the user about project phases and initial features to track
3. Fill in the template with the provided information

### If the roadmap exists:

1. Load `/memory/roadmap.md`
2. Scan the `/specs/` directory for existing specifications
3. Update the Progress Summary table:
   - Match features to spec directories
   - Update Status based on what exists (the most advanced state wins):
     - Spec exists with plan.md → "Spec done"
     - Spec exists with tasks.md → "Implementing"
     - Spec exists with completed tasks → "Done"
   - Update the Spec column with paths to existing specs
4. If the user provided input, add new features or update existing ones
5. Report the changes made

### User commands:

- `add [feature]` - Add a new feature to track
- `update` - Sync status with existing specs (default if no args)
- `status` - Show the current progress summary
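
The status rules above can be expressed as a small helper. This is a minimal sketch, assuming each spec lives in its own directory and that "completed tasks" means no unchecked `- [ ]` boxes remain in tasks.md; the function name and the "Speccing" fallback for a spec without a plan are illustrative assumptions, not part of the command itself.

```python
from pathlib import Path

def spec_status(spec_dir: Path) -> str:
    """Derive a roadmap Status value from what exists in a spec directory."""
    tasks = spec_dir / "tasks.md"
    if tasks.is_file():
        # Assumed completion rule: every checkbox in tasks.md is checked.
        if "- [ ]" not in tasks.read_text():
            return "Done"
        return "Implementing"
    if (spec_dir / "plan.md").is_file():
        return "Spec done"
    if (spec_dir / "spec.md").is_file():
        return "Speccing"
    return "Not started"
```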

templates/commands/tests.md

Lines changed: 143 additions & 0 deletions
---
description: Generate a reviewer-friendly test plan documenting what behaviors will be verified for the feature.
handoffs:
  - label: Generate Tasks
    agent: speckit.tasks
    prompt: Generate implementation tasks including test tasks
    send: true
  - label: Analyze Consistency
    agent: speckit.analyze
    prompt: Check consistency across spec, plan, and tests
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Purpose

Generate a `tests.md` file that:

1. Documents what behaviors will be tested (for domain reviewers)
2. Provides implementation guidance (for developers/AI)
3. Ensures traceability from requirements to tests

The primary audience is **non-technical reviewers** who want to verify that important requirements have test coverage. Technical details go in an appendix.

## Outline

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.

2. **Load source documents**: Read from FEATURE_DIR:
   - **Required**: spec.md (user stories, acceptance scenarios, edge cases)
   - **Required**: plan.md (project structure, test file locations)
   - If spec.md doesn't exist, ERROR with a message to run `/speckit.specify` first
   - If plan.md doesn't exist, ERROR with a message to run `/speckit.plan` first

3. **Check for existing test artifacts**:
   - Check whether tasks.md exists in FEATURE_DIR
   - Check whether test files exist in the locations specified by plan.md (e.g., `tests/unit/`)
   - Note what was found (it will be used in step 7)

4. **Extract test requirements from spec.md**:
   - User stories with priorities (P1, P2, P3...)
   - Acceptance scenarios (Given/When/Then)
   - Edge cases section
   - Functional requirements (FR-001, FR-002...)
   - Success criteria

5. **Extract from plan.md**:
   - Project structure (test directory layout)
   - Testing framework
   - Technical context

6. **Generate tests.md** using `.specify/templates/tests-template.md`:
   - Fill in the feature name and branch from spec.md
   - Generate the Test Summary with key behaviors (plain language)
   - Generate a Behavior Coverage section for each user story
   - Map acceptance scenarios to test descriptions
   - Generate the Edge Cases table from spec.md edge cases
   - Fill in the Coverage Checklist
   - Fill in the Implementation Notes with file paths and structure from plan.md

7. **Reconciliation** (if existing artifacts were found in step 3):
   - Compare the generated tests.md against the existing tasks.md and/or test files
   - Report discrepancies in a clear summary:
     - **Covered**: Spec requirements that have existing test coverage
     - **Gaps**: Spec requirements that lack test coverage
     - **Orphans**: Existing tests that don't map to spec requirements
   - For each gap, ask the user: "Add to tasks?" or "Note as out-of-scope?"
   - For each orphan, ask the user: "Missing from spec?" or "Remove test?"
   - Update tests.md to reflect the decisions (mark covered items, note gaps)

8. **Report**: Output the path to the generated tests.md and a summary:
   - Count of user stories covered
   - Count of edge cases covered
   - Count of acceptance scenarios mapped
   - Flag any gaps (user stories without tests, unmapped scenarios)
## Generation Rules

### Language Guidelines

**Primary sections (for reviewers)** - Use plain, non-technical language:

- "Load a configuration file and verify all values are accessible"
- "Attempt to save with missing required fields, confirm the error message"
- NOT: "Assert model.field == expected using a pytest fixture"
- NOT: "Mock the database connection and verify query parameters"

**Implementation Notes (for developers)** - Technical details are allowed:

- File paths, test names, fixture locations
- Framework-specific patterns
- Directory structure

### Mapping Acceptance Scenarios

For each acceptance scenario in spec.md:

```
Given [initial state], When [action], Then [expected outcome]
```

Create a test description:

```
| Requirement | How It's Tested |
|-------------|-----------------|
| [Action] produces [outcome] | [Plain description of test] |
```
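
One way to mechanize this mapping is an illustrative sketch like the following; it assumes scenarios follow the exact one-line `Given …, When …, Then …` form shown above, and the phrasing of the generated row is a hypothetical choice.

```python
import re

SCENARIO = re.compile(r"Given (.+?), [Ww]hen (.+?), [Tt]hen (.+)")

def scenario_to_row(scenario: str) -> str:
    """Render a Given/When/Then scenario as a row of the mapping table."""
    m = SCENARIO.match(scenario)
    if m is None:
        raise ValueError(f"not a Given/When/Then scenario: {scenario!r}")
    given, when, then = m.groups()
    requirement = f"{when} produces {then}"
    description = f"Start from {given}, {when}, and confirm {then}"
    return f"| {requirement} | {description} |"
```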
### Edge Case Coverage

For each edge case in spec.md, create a row:

```
| Scenario | Expected Behavior | Covered |
|----------|-------------------|---------|
| [Edge case from spec] | [Expected handling] | [ ] |
```

### Handling Missing Information

- If spec.md has no edge cases section: add a placeholder row with a TODO
- If acceptance scenarios are vague: flag them in the Open Questions section
- If plan.md lacks a test directory structure: ask the user to clarify before proceeding

### Coverage Checklist

Always include these standard checks:

- [ ] Every user story has test coverage defined
- [ ] All acceptance scenarios from spec.md are addressed
- [ ] Edge cases from spec.md are covered
- [ ] Invalid inputs produce clear error messages
- [ ] Each test maps back to a requirement
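
When reporting, a checklist like this can be tallied mechanically. A small sketch, counting only the literal `- [ ]` / `- [x]` markers:

```python
def checklist_progress(markdown: str) -> tuple[int, int]:
    """Return (checked, total) for the Markdown checkboxes in a document."""
    checked = markdown.count("- [x]") + markdown.count("- [X]")
    total = checked + markdown.count("- [ ]")
    return checked, total
```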
## Output Location

Write to: `FEATURE_DIR/tests.md`
templates/roadmap-template.md

Lines changed: 36 additions & 0 deletions
# [PROJECT NAME] Feature Roadmap

Track features requiring specifications, organized by implementation phase.

## Progress Summary

| ID | Feature | Status | Spec | Code |
|----|---------|--------|------|------|
| 1.1 | [Feature name] | Not started | - | - |
| 1.2 | [Feature name] | Not started | - | - |

**Status flow**: Not started → Speccing → Spec done → Implementing → Done

---

## Phase 1: [Phase Name]

[Description of what this phase delivers]

### 1.1 [Feature Name]

- **Status**: Not started
- **What**: [Brief description]
- **Why**: [Rationale or constitution principle]
- **Depends on**: [Dependencies]

---

## Open Questions

- [ ] [Question needing resolution]

---

## Notes

- [Any relevant notes]

templates/tests-template.md

Lines changed: 171 additions & 0 deletions
---
description: "Test plan template for feature specification"
---

# Test Plan: [FEATURE NAME]

**Feature Branch**: `[###-feature-name]`
**Spec**: [`spec.md`](./spec.md)
**Status**: Draft

---

## About This Document

This test plan describes what behaviors will be verified by automated tests. It helps reviewers confirm that important requirements have coverage before implementation begins.

- **For reviewers**: Focus on the "Behavior Coverage" sections to verify that the right things are being tested. You don't need programming knowledge to review this document.
- **For implementers**: See "Implementation Notes" at the end for technical details.

---

## Test Summary

**Coverage goal**: [e.g., All user stories and edge cases covered]

**Key behaviors being tested**:

- [Behavior 1 in plain language]
- [Behavior 2 in plain language]
- [Behavior 3 in plain language]

---

## Behavior Coverage by User Story

<!--
For each user story from spec.md, describe what behaviors will be verified.
Write in plain language - focus on WHAT is tested, not HOW.

Example:
Requirement: "Users can reset their password"
How it's tested: "Attempt a password reset, confirm the new password works"
-->

### User Story 1 - [Title] (Priority: P1)

**What we're verifying**:

| Requirement | How It's Tested |
|-------------|-----------------|
| [Requirement from spec] | [Plain description of verification] |
| [Requirement from spec] | [Plain description of verification] |

**Specific checks**:

- [ ] [Plain description of what's checked]
- [ ] [Plain description of what's checked]

---

### User Story 2 - [Title] (Priority: P2)

**What we're verifying**:

| Requirement | How It's Tested |
|-------------|-----------------|
| [Requirement from spec] | [Plain description of verification] |

**Specific checks**:

- [ ] [Plain description]

---

<!--
Repeat for each user story from spec.md.
Match the priority order (P1, P2, P3...) from the spec.
-->

---

## Edge Cases & Error Handling

<!--
What boundary conditions and error scenarios are tested?
Extract from the "Edge Cases" section of spec.md.
-->

| Scenario | Expected Behavior | Covered |
|----------|-------------------|---------|
| [What could go wrong] | [How the system should respond] | [ ] |
| [Unusual input or condition] | [Expected handling] | [ ] |
| [Boundary condition] | [Expected behavior] | [ ] |

---

## Coverage Checklist

<!--
Checklist for reviewers to assess test completeness.
-->

**Requirements**

- [ ] Every user story has test coverage defined
- [ ] All acceptance scenarios from spec.md are addressed
- [ ] Edge cases from spec.md are covered

**Error Handling**

- [ ] Invalid inputs produce clear error messages
- [ ] Errors identify what went wrong and where

**Traceability**

- [ ] Each test maps back to a requirement or acceptance scenario

---

## Open Questions

<!--
Flag items needing reviewer input before implementation.
-->

- [ ] [Question about test scope, priorities, or expected behavior]

---

# Implementation Notes

<!--
Technical details for developers and AI assistants.
Reviewers can skip this section.
-->

## Naming Convention

Use consistent test names following this pattern:

```
test_[component]_[behavior]_[scenario]
```

**Examples**:

- `test_loader_valid_input_succeeds`
- `test_loader_missing_field_returns_error`
- `test_validator_boundary_value_accepted`
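
A lightweight check for this convention can be sketched as follows. It is an assumption-laden approximation: it only verifies the `test_` prefix and at least three lowercase segments, since the component/behavior/scenario boundaries aren't recoverable from the name alone.

```python
import re

# At least three lowercase/digit segments after the test_ prefix.
TEST_NAME = re.compile(r"^test_[a-z0-9]+(?:_[a-z0-9]+){2,}$")

def follows_convention(name: str) -> bool:
    """Check a test name against test_[component]_[behavior]_[scenario]."""
    return TEST_NAME.match(name) is not None
```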
## Test File Organization

```
tests/
├── unit/         # Isolated component tests
├── integration/  # Multi-component tests
└── fixtures/     # Sample data for tests
```

## Test Data

| File | Purpose |
|------|---------|
| `valid_minimal.[ext]` | Smallest valid input for basic tests |
| `valid_complete.[ext]` | Full-featured input covering all options |
| `invalid_*.[ext]` | Various invalid inputs for error tests |

## Story-to-File Mapping

| User Story | Test Location |
|------------|---------------|
| US1 | `tests/unit/test_[component].py` |
| US2 | `tests/unit/test_[component].py` |
| Integration | `tests/integration/test_[flow].py` |
