Every accepted contribution must improve at least one of:
- source traceability,
- formal correctness,
- executable alignment,
- artifact reproducibility,
- contributor usability.
- No public declaration without source linkage.
- No schema changes without migration notes.
- No generated artifact without a reproducible generator.
- No portal-only truth; portal renders from canonical corpus/manifests/export.
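Invariants like these can be checked mechanically at admission time. Below is a minimal illustrative sketch only; the record fields (`source_url`, `generated`, `generator`) are hypothetical names not defined by this document:

```python
def check_record(record: dict) -> list[str]:
    """Return invariant violations for one record (illustrative field names)."""
    errors = []
    # No public declaration without source linkage.
    if not record.get("source_url"):
        errors.append("missing source linkage")
    # No generated artifact without a reproducible generator.
    if record.get("generated") and not record.get("generator"):
        errors.append("generated artifact lacks a reproducible generator")
    return errors
```

A gate like this would run as part of validation rather than in the portal, since the portal only renders from the canonical corpus.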
- Admit a paper into the corpus.
- Improve claim/assumption/symbol extraction.
- Formalize mapped claims in Lean.
- Add or extend executable kernels.
- Improve validation/coverage/graph tooling.
- Improve portal rendering and observability.
just fmt
just lint
just test
just validate
just build
If setup/build fails, run just doctor first.
Use one canonical local sequence:
just bootstrap
just check
just benchmark
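The canonical sequence, combined with the doctor fallback from the setup note above, can be sketched as a small wrapper. This is an illustrative sketch, not part of the repo; the command runner is injected so the control flow is testable without `just` installed:

```python
import subprocess

CANONICAL = ["just bootstrap", "just check", "just benchmark"]

def run_sequence(commands=CANONICAL, run=None):
    """Run each command in order; on the first failure, suggest `just doctor`."""
    run = run or (lambda cmd: subprocess.run(cmd, shell=True).returncode)
    for cmd in commands:
        if run(cmd) != 0:
            return f"{cmd} failed; run `just doctor` first"
    return "ok"
```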
If just is not available (common on Windows/PowerShell without Bash), follow the equivalent no-just commands in Contributor playbook – Local CI.
Test coverage: run just test locally. The workspace pytest configuration includes pipeline/tests and kernels/adsorption/tests (MCP contract/integration tests live in the pipeline suite). Confirm current counts with uv run pytest --collect-only -q. For MCP-backed tests, run uv sync --extra mcp first.
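When scripting around the collect-only step, the count can be read from pytest's summary line. This sketch assumes pytest's usual "N tests collected" footer, which is pytest's output format rather than anything specified here:

```python
import re

def collected_count(output: str) -> int:
    """Extract the test count from `pytest --collect-only -q` output."""
    match = re.search(r"(\d+) tests? collected", output)
    return int(match.group(1)) if match else 0
```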
Contributor dry-run (smoke): from repo root, bash scripts/contributor_dry_run.sh (or follow the same steps manually). A monthly workflow runs validate and tests on a clean runner; see .github/workflows/contributor-dry-run-monthly.yml.
Pre-merge CI (local): see Contributor playbook – Local CI (validate, tests, lake build, portal build, benchmark).
Public push, clean-room, branch protection, triage, launch cadence: docs/maintainers.md.
Reuse modes (export, kernels, benchmarks): see Contributor playbook – Reusing Scientific Memory.
just validate enforces:
- schema validity,
- normalization integrity,
- provenance integrity,
- graph integrity (theorem-card dependencies + kernel/card/manifest cross-links),
- coverage integrity,
- extraction run requirement (papers with claims must have extraction_run.json; run just extraction-report <paper_id> if missing),
- claim status in allowed enum; disputed claims must have non-empty review_notes,
- migration doc updated when schemas change.
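The claim-status rule above can be illustrated as a check. A minimal sketch; of the enum values below, only "disputed" is named in this document, the rest are hypothetical:

```python
# Only "disputed" comes from the text; the other values are placeholders.
ALLOWED_STATUSES = {"proposed", "mapped", "disputed", "verified"}

def check_claim(claim: dict) -> list[str]:
    """Return violations of the claim-status validation rules."""
    errors = []
    status = claim.get("status")
    if status not in ALLOWED_STATUSES:
        errors.append(f"status {status!r} not in allowed enum")
    if status == "disputed" and not claim.get("review_notes"):
        errors.append("disputed claim has empty review_notes")
    return errors
```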
The same command may print warn-only stderr lines (exit code still 0) for snapshot baseline quality, dependency-graph bootstrap hints on papers with multiple theorem cards but empty dependency_ids, and invalid optional llm_* / suggested_* sidecar files. See docs/reference/trust-boundary-and-extraction.md.
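Because warnings land on stderr while the exit code stays 0, CI wrappers should gate on the exit code alone. A minimal sketch of that distinction (the labels are illustrative, not output of just validate):

```python
def gate(exit_code: int, stderr: str) -> str:
    """Classify a `just validate` run: the exit code gates; stderr may carry warn-only lines."""
    if exit_code != 0:
        return "fail"
    return "pass-with-warnings" if stderr.strip() else "pass"
```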
- just metrics (includes reviewer lifecycle warnings for machine_checked_but_unreviewed cards)
- uv run --project pipeline python -m sm_pipeline.cli validate-all --report-json path/to/gate-report.json (machine-readable gate report after successful validation)
- just repo-snapshot (regenerate docs/status/repo-snapshot.md from manifests)
- just check-paper-blueprint <paper_id>
- just export-diff-baseline (creates snapshot baseline; see release-policy.md for naming conventions)
- just export-portal-data
- just check-tooling
- just extract-from-source <paper_id>
- just build-verso
- just mcp-server (requires uv sync --extra mcp)
- uv run --project pipeline python -m sm_pipeline.cli batch-admit <csv_path> [--dry-run] (admit multiple papers from CSV; --dry-run validates without writing)
- uv run --project pipeline python -m sm_pipeline.cli proof-repair-proposals -o proposals.json (human-review-only proposals; never auto-applied)
- Optional LLM sidecars (API key in root .env): just llm-claim-proposals <paper_id>, just llm-mapping-proposals <paper_id>, just llm-lean-proposals <paper_id>; review then apply per docs/tooling/prime-intellect-llm.md (the Lean path uses llm-lean-proposals-to-apply-bundle and proof-repair-apply)
- Optional LLM eval smoke: just llm-live-eval --paper-ids <paper_id> --use-fake-provider (writes JSON under benchmarks/reports/, gitignored); human rubric: docs/testing/llm-human-eval-rubric.md
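Downstream tooling can consume the machine-readable gate report from validate-all --report-json. A sketch of such a consumer; the payload keys (`gates`, `name`, `passed`) are hypothetical, since this document does not specify the report schema:

```python
import json

def failed_gates(report_json: str) -> list[str]:
    """Return names of failed gates from a (hypothetical) gate-report payload."""
    report = json.loads(report_json)
    return [g["name"] for g in report.get("gates", []) if not g.get("passed", False)]

SAMPLE = '{"gates": [{"name": "schema", "passed": true}, {"name": "coverage", "passed": false}]}'
```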
Use .github/PULL_REQUEST_TEMPLATE.md and include:
- artifact impact,
- provenance impact,
- verification-boundary impact,
- coverage impact,
- schema impact,
- risk notes.
For review expectations, see Contributor playbook – Reviewer guide (includes theorem-card reviewer lifecycle policy).