Quick reference for UPLC-CAPE commands and workflows.
💡 Tip: Type `usage` in the UPLC-CAPE shell to view this cheat sheet anytime!
# BENCHMARKS
cape benchmark list # List all benchmarks
cape benchmark fibonacci # Show benchmark details
cape benchmark new my-benchmark # Create new benchmark
# SUBMISSIONS
cape submission list # List all submissions
cape submission list fibonacci # Show benchmark submissions
cape submission new fibonacci Aiken 1.0.8 myhandle # Create submission
cape submission verify # Verify correctness and validate schemas
cape submission measure # Measure UPLC performance
cape submission aggregate # Generate CSV report
cape submission report fibonacci # Generate HTML report with charts for benchmark
cape submission report --all # Generate HTML reports for all benchmarks
# PR PREVIEWS
# Note: PRs with submission data changes automatically get preview sites at:
# https://intersectmbo.github.io/UPLC-CAPE/pr-<number>/
# See README.md "PR Preview Sites" section for details
# HELP
cape --help # Main help
cape benchmark --help # Command help
cape submission new --help # Subcommand help

# QUICK WORKFLOW
# 1. List benchmarks → 2. Create submission → 3. Fill files → 4. Verify → 5. Commit
cape benchmark list
cape submission new fibonacci MyCompiler 1.0.0 myhandle
# Edit: submissions/fibonacci/MyCompiler_1.0.0_myhandle/fibonacci.uplc
# Edit: submissions/fibonacci/MyCompiler_1.0.0_myhandle/metrics.json
# Edit: submissions/fibonacci/MyCompiler_1.0.0_myhandle/metadata.json
cape submission verify submissions/fibonacci/MyCompiler_1.0.0_myhandle
git add . && git commit -m "Add MyCompiler fibonacci submission"

# ARCHITECTURE DECISION RECORDS
adr new "My Decision Title" # Create ADR
adr preview # View in browser
adr build # Build static site
adr help # Show help
# ALIASES
adr n "Quick ADR" # new
adr p # preview
adr b # build
adr h # help

# PROJECT STRUCTURE
scenarios/ # Benchmark definitions
scenarios/TEMPLATE/ # New benchmark template
submissions/ # Performance submissions
submissions/TEMPLATE/ # New submission template
doc/adr/ # Architecture decisions
scripts/ # CAPE CLI tools

# METRICS (submissions/*/metrics.json)
# Note: metrics.json is generated automatically by 'cape submission measure'
# Contains both raw measurements and derived metrics
# METADATA (submissions/*/metadata.json)
{
  "compiler": {"name": "Aiken", "version": "1.0.8", "commit_hash": "abc123"},
  "compilation_config": {"optimization_level": "O2", "target": "uplc"},
  "contributors": [{"name": "myhandle"}],
  "submission": {"date": "2025-07-18T10:00:00Z", "source_available": true}
}

The `metrics.json` file (generated by `cape submission measure`) includes both raw measurements and derived metrics.
See doc/metrics.md for comprehensive documentation.
Raw Measurements:
- `cpu_units`, `memory_units`: Execution costs (with aggregations: max, sum, min, median, sum_positive, sum_negative)
- `script_size_bytes`: On-chain storage size
- `term_size`: AST complexity
Derived Metrics (Conway Era):
- `execution_fee_lovelace`: Runtime cost (CPU + memory)
- `reference_script_fee_lovelace`: Storage cost (tiered pricing)
- `total_fee_lovelace`: Combined cost
- `tx_*_budget_pct`: Transaction budget usage (limits: 14M mem, 10B cpu)
- `block_*_budget_pct`: Block budget usage (limits: 62M mem, 40B cpu)
- `scripts_per_tx`, `scripts_per_block`: Capacity metrics
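As an illustration only, a generated `metrics.json` might carry fields like the following. All values here are placeholders, and the exact schema (including how the per-case aggregations are nested) is defined by the tooling and documented in doc/metrics.md, not by this sketch:

```json
{
  "cpu_units": { "sum": 1200000000, "max": 1200000000 },
  "memory_units": { "sum": 3500000, "max": 3500000 },
  "script_size_bytes": 1024,
  "term_size": 150,
  "execution_fee_lovelace": 45000,
  "reference_script_fee_lovelace": 7000,
  "total_fee_lovelace": 52000,
  "tx_mem_budget_pct": 25.0,
  "tx_cpu_budget_pct": 12.0
}
```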
Key Points:
- Fees assume reference script deployment (Conway era)
- Base transaction fee NOT included (comparing script performance)
- Budget >50% highlighted in reports (capacity warning)
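The budget percentages above are straightforward arithmetic over the raw cost units and the Conway-era limits quoted in this section. A minimal sketch (illustrative only; the real computation is done by the CAPE tooling, and the example costs are hypothetical):

```python
# Conway-era execution limits quoted above:
# per transaction: 14M memory units, 10B CPU units
# per block:       62M memory units, 40B CPU units
TX_MEM_LIMIT, TX_CPU_LIMIT = 14_000_000, 10_000_000_000
BLOCK_MEM_LIMIT, BLOCK_CPU_LIMIT = 62_000_000, 40_000_000_000

def budget_pct(units: int, limit: int) -> float:
    """Percentage of a budget consumed by the given cost units."""
    return 100.0 * units / limit

# Hypothetical script costs (not from a real measurement)
memory_units, cpu_units = 3_500_000, 1_200_000_000

tx_mem_budget_pct = budget_pct(memory_units, TX_MEM_LIMIT)        # 25.0
tx_cpu_budget_pct = budget_pct(cpu_units, TX_CPU_LIMIT)           # 12.0
block_mem_budget_pct = budget_pct(memory_units, BLOCK_MEM_LIMIT)  # ~5.6
print(tx_mem_budget_pct, tx_cpu_budget_pct)
```

A script at 25% of the transaction memory budget would, by the same arithmetic, fit roughly four times per transaction, which is what the `scripts_per_tx` capacity metric expresses.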
# DEVELOPMENT ENVIRONMENT
nix develop # Enter dev environment
direnv allow # Auto-enter (recommended)
cape benchmark list # Test CLI
glow README.md # View docs in terminal
glow USAGE.md # View this cheat sheet

# FORMAT FILES
treefmt # Format entire project
treefmt file.sh # Format specific file
treefmt README.md # Format markdown
treefmt *.yml # Format YAML files
# INDIVIDUAL FORMATTERS (called by treefmt)
shfmt script.sh # Format shell script
prettier file.md --write # Format markdown/YAML/JSON
# AUTOMATIC FORMATTING
# Pre-commit hook automatically runs treefmt on staged files
git commit # Formats files before commit

# EDITOR CONFIGURATION
.editorconfig # Editor settings (indentation, etc.)
# FORMATTING CONFIGURATION
treefmt.toml # treefmt main configuration
.prettierrc # Prettier-specific settings
.prettierignore # Files to exclude from Prettier

💡 Tip: All CAPE commands support interactive prompts if you omit arguments.
Preferred entrypoint:
- Use `cape submission verify` for correctness and schema validation. The `measure` command exposes a lower-level interface used by the wrapper internally.
- An optional per-benchmark test suite can be provided at `scenarios/{benchmark}/cape-tests.json`.
- Wrappers auto-discover the test suite from the submission path (`submissions/{benchmark}/...`); no wrapper flags exist to override the scenario or test file.
Exit codes from the `measure` tool:
- 0: Success (all tests passed or simple measurement completed)
- 1: Test suite failed (one or more test cases did not pass)
- 2: UPLC parse error or malformed input
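These exit codes can be branched on directly in scripts. In this sketch, `run_measure` is a hypothetical stand-in that simulates a `measure` invocation (here, a parse failure), since the real tool is not assumed to be available:

```shell
# Stand-in for: measure -i "$1" -t cape-tests.json -o metrics.json
# Simulates a parse failure (exit code 2) for illustration.
run_measure() {
  return 2
}

run_measure script.uplc
case $? in
  0) echo "measurement succeeded" ;;
  1) echo "test suite failed" ;;
  2) echo "UPLC parse error or malformed input" ;;
  *) echo "unexpected exit code" ;;
esac
```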
Verification control:
- Low-level tool: `measure` accepts a test suite via `-t/--tests <cape-tests.json>`:
  measure -i <input.uplc> -t <cape-tests.json> -o <metrics.json>
- Wrapper: `cape submission measure` infers the benchmark from the submission path and, if `scenarios/{benchmark}/cape-tests.json` exists, passes that file to `measure` as `--tests` (`-t`) automatically.
- For testing, provide full mock submissions in `test/fixtures/submissions/{benchmark}/...` and place the test suite at `scenarios/{benchmark}/cape-tests.json`. Do not rely on CLI override flags.
Store negative fixture submissions under:
`test/fixtures/submissions/{benchmark}/{Compiler}_{version}_{contributor}/`
These are ignored by normal workflows and reports.
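For example, a negative fixture for the fibonacci benchmark could be laid out as below. The compiler name and file names are illustrative, and a temporary directory stands in for the repository root:

```shell
# Create the fixture directory layout under a temporary stand-in repo root.
root=$(mktemp -d)
fixture="$root/test/fixtures/submissions/fibonacci/MyCompiler_1.0.0_myhandle"
mkdir -p "$fixture"
# A negative fixture still carries the usual submission files.
touch "$fixture/fibonacci.uplc" "$fixture/metadata.json"
ls "$fixture"
```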
Verify:
- All submissions: `cape submission verify --all`
- Specific path: `cape submission verify submissions/<benchmark>/<Compiler>_<version>_<contributor>`

Measure:
- All submissions: `cape submission measure --all`
- Specific path (measures all .uplc files under a directory, e.g., a submission): `cape submission measure submissions/<benchmark>/<Compiler>_<version>_<contributor>`
- Single file explicitly (write to a specific output): `cape submission measure -i path/to/script.uplc -o path/to/metrics.json`