Requirements traceability for Python projects. Connect specs to tests, see what's verified.
SpecTrace uses uv for fast, reliable package management:
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# or: pip install uv

# Install with uv (recommended)
make install
# or: uv pip install -e .

# Setup database
make migrate

# Create admin user
make setup

# Import your specs
python spectrace/manage.py parse_specs specs/

# Run development server
make run

# Open http://localhost:8000/admin/
```

SpecTrace includes a Makefile for common development tasks (uses uv for package management):
| Command | Description |
|---|---|
| `make help` | Show all available commands |
| `make install` | Install package in editable mode (uses `uv pip install`) |
| `make install-dev` | Install with dev dependencies (uses `uv pip install`) |
| `make test` | Run tests with pytest |
| `make migrate` | Run Django migrations |
| `make makemigrations` | Create new migrations |
| `make shell` | Open Django shell |
| `make run` | Start development server |
| `make clean` | Remove caches and build artifacts |
| `make setup` | Create admin user (admin/admin) |
| `make demo` | Run the SpecTrace demo |
Note: If you don't have uv installed, the Makefile commands will fail. Install it first: `pip install uv`
```bash
# 1. Import requirements from specs
python spectrace/manage.py parse_specs specs/

# 2. Run tests with JUnit output
make test
# or: pytest --junitxml=test_results.xml

# 3. Extract test-requirement links
python spectrace/manage.py extract_links --output links.json

# 4. Import results and compute status
python spectrace/manage.py import_results test_results.xml --links links.json

# 5. View dashboard
make run
# Open http://localhost:8000/admin/
```

See the Document Pipeline Example for a comprehensive demonstration of SpecTrace features:
- Nested requirement hierarchy (3 levels)
- Multiple verification methods (test, inapp, both)
- Passing, failing, and skipped tests
- SLO integration with OpenSLO YAML
- Various pytest patterns (parametrized, async, class-based, xfail)
- CI/CD workflow example
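One of the patterns listed above, parametrization, can be combined with SpecTrace's requirement marker. A minimal sketch; the test name and the email check are hypothetical, only the `requirement` marker and the REQ id come from this README:

```python
import pytest

# Hypothetical test: one parametrized test linked to REQ-AUTH-001.
# The validation logic here is a stand-in, not SpecTrace's code.
@pytest.mark.requirement("REQ-AUTH-001")
@pytest.mark.parametrize("email,expected_ok", [
    ("user@example.com", True),
    ("", False),
])
def test_login_rejects_empty_email(email, expected_ok):
    assert bool(email.strip()) == expected_ok
```

Each generated test case carries the requirement mark, so `extract_links` can attribute every parametrized outcome to the requirement.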
Run the demo:
```bash
make demo
# or: python scripts/demo_pipeline.py
```

Create markdown files in `specs/` with frontmatter:
```markdown
---
id: REQ-AUTH-001
title: User Login
priority: high
tags: [authentication, security]
verification_method: test  # test, inapp, or both
---

Users must be able to log in with email and password.
```

Use the `@pytest.mark.requirement` decorator:
```python
import pytest

@pytest.mark.requirement("REQ-AUTH-001")
def test_user_can_login():
    # test implementation
    pass

@pytest.mark.requirement("REQ-AUTH-001", "REQ-AUTH-002")
def test_login_creates_session():
    # a test can link to multiple requirements
    pass
```

SpecTrace provides Django management commands for various operations:
| Command | Description |
|---|---|
| `parse_specs <dir>` | Import requirements from markdown specs |
| `extract_links` | Extract test-requirement links from test files |
| `import_results <xml>` | Import pytest JUnit XML and compute status |
| `validate_links <json>` | Validate links for drift detection (CI) |
| `import_slos <dir>` | Import SLOs from OpenSLO YAML files |
| `update_slo_status --from-json <file>` | Update SLO status from observability data |
| `import_inapp_validations <json>` | Import in-app validation results |
| `check_invariants` | Validate data consistency (INV-A through INV-K) |
Agent Task Commands (see `docs/agent-tasks.md`):
| Command | Description |
|---|---|
| `agent_register` | Register an agent with role (planner/coder/reviewer) |
| `agent_tasks` | List tasks with filtering |
| `agent_claim` | Claim an unclaimed task with lease |
| `agent_start` | Begin work on claimed task |
| `agent_submit` | Submit work for review |
| `agent_review` | Approve or request changes |
| `agent_merge` | Mark approved task as merged |
| `expire_leases` | Release stale task claims (cron) |
All commands are run via `python spectrace/manage.py <command>`.
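The `agent_claim`/`expire_leases` pair implies that task claims carry a time-boxed lease. The actual lease mechanics aren't documented in this README; a hypothetical sketch of stale-claim detection, with the lease length assumed:

```python
from datetime import datetime, timedelta, timezone

LEASE = timedelta(minutes=30)  # assumed lease length, not from SpecTrace

def lease_expired(claimed_at, now=None, lease=LEASE):
    """True if a claim taken at `claimed_at` has outlived its lease."""
    now = now or datetime.now(timezone.utc)
    return now - claimed_at > lease
```

A cron-driven `expire_leases` run would release any claim for which this check is true, returning the task to the unclaimed pool.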
Requirement status is computed from linked test results:
- Passing - All linked tests pass
- Failing - Any linked test fails
- Untested - No tests linked to requirement
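The rules above amount to a small fold over linked test outcomes. A sketch of that logic (illustrative, not SpecTrace's actual implementation; note that in this sketch skipped tests do not fail a requirement):

```python
def requirement_status(outcomes):
    """Compute a requirement's status from its linked tests' outcomes.

    `outcomes` is a list of "passed"/"failed"/"skipped" strings, one per
    linked test; an empty list means no tests are linked.
    """
    if not outcomes:
        return "untested"
    if any(o == "failed" for o in outcomes):
        return "failing"  # any failure wins
    return "passing"
```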
Requirements can specify how they should be verified:
- test - Verified by automated tests (default)
- inapp - Verified by in-app validation buttons/endpoints
- both - Must pass both test and in-app validation
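A sketch of how the three methods might gate overall verification (logic inferred from the list above, not taken from SpecTrace's source):

```python
def is_verified(method, tests_pass, inapp_pass):
    """Combine test and in-app results per the requirement's method."""
    if method == "test":
        return tests_pass
    if method == "inapp":
        return inapp_pass
    if method == "both":
        return tests_pass and inapp_pass
    raise ValueError(f"unknown verification method: {method}")
```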
Link requirements to Service Level Objectives using OpenSLO YAML:
```yaml
apiVersion: openslo/v1
kind: SLO
metadata:
  name: api-availability
  labels:
    requirement: REQ-API-001
spec:
  service: api-gateway
  objectives:
    - target: 0.999
      timeWindow:
        duration: 30d
```

Import with: `python spectrace/manage.py import_slos slos/`
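Once the YAML is parsed (e.g. with `yaml.safe_load`), the requirement link lives under `metadata.labels`. A minimal sketch operating on the parsed dict; the helper name is hypothetical:

```python
def requirement_for_slo(slo_doc):
    """Return the requirement id an OpenSLO document points at, or None."""
    return slo_doc.get("metadata", {}).get("labels", {}).get("requirement")

# The YAML above, as yaml.safe_load would return it (trimmed):
slo = {
    "apiVersion": "openslo/v1",
    "kind": "SLO",
    "metadata": {
        "name": "api-availability",
        "labels": {"requirement": "REQ-API-001"},
    },
}
```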
External systems can push status updates:
| Endpoint | Method | Description |
|---|---|---|
| `/api/slo/status/` | POST | Update SLO status from observability platforms |
| `/api/validation/result/` | POST | Submit in-app validation results |
| `/api/requirement/<id>/status/` | GET | Get requirement verification status |
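These endpoints can also be driven from Python with only the standard library. A sketch that builds (but does not send) the request; the helper name is made up, and the payload shape is taken from the curl examples below:

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # dev-server address used in this README

def slo_status_request(slos):
    """Build a POST request for /api/slo/status/ (send with urlopen)."""
    return urllib.request.Request(
        f"{BASE}/api/slo/status/",
        data=json.dumps({"slos": slos}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the dev server running:
# with urllib.request.urlopen(slo_status_request([...])) as resp:
#     print(resp.status)
```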
```bash
curl -X POST http://localhost:8000/api/slo/status/ \
  -H "Content-Type: application/json" \
  -d '{
    "slos": [
      {"name": "api-availability", "status": "met", "current_value": 0.9995}
    ]
  }'
```

```bash
curl -X POST http://localhost:8000/api/validation/result/ \
  -H "Content-Type: application/json" \
  -d '{
    "source": "production-app",
    "validations": [
      {"requirement_id": "REQ-AUTH-001", "name": "Login Flow", "status": "success"}
    ]
  }'
```

Validate test-requirement links in CI to catch drift:
```bash
python spectrace/manage.py validate_links links.json --strict
```

- `--strict` - Exit with error on warnings (missing coverage)
- `--format json` - Output JSON for programmatic parsing
Example in CI pipeline:
```yaml
# .github/workflows/test.yml
- name: Run tests
  run: make test

- name: Validate requirements coverage
  run: |
    python spectrace/manage.py extract_links --output links.json
    python spectrace/manage.py validate_links links.json --strict
```

MIT