Tamper-evident audit trails for AI systems.
We scanned 30 popular AI projects and found 202 high-confidence LLM call sites. Zero had tamper-evident audit trails. Full results.
Assay adds independently verifiable execution evidence to AI systems: cryptographically signed receipt bundles that a third party can verify offline without trusting your server logs. Two lines of code. Four exit codes.
```
# macOS / Linux
python3 -m pip install assay-ai

# Windows
py -m pip install assay-ai
```

Requires Python 3.9+. Verify the CLI is on PATH:

```
assay version
```

If pip isn't on your PATH, use the Python launcher (`python3 -m pip` on macOS/Linux, `py -m pip` on Windows).
Prefer a deterministic setup path? Start here: docs/START_HERE.md
Boundary: Assay proves tamper-evident internal consistency and completeness relative to scanned call sites. It does not prevent a fully compromised machine from fabricating a consistent story. That's what trust tiers are for.
Not this: Assay is not a logging framework, an observability dashboard, or a monitoring tool. It produces signed evidence bundles that a third party can verify offline. If you need Datadog, this isn't it.
Try it now (no API key needed -- demos use synthetic data; with real calls, Assay instruments OpenAI, Anthropic, Gemini, LiteLLM, LangChain, and local models):
```
python3 -m pip install assay-ai
assay demo-challenge   # tamper detection: one valid pack, one with a single byte changed
```

Two packs, one byte changed ("gpt-4" -> "gpt-5" in the receipts). Here's what happens (pack IDs and timestamps will differ on your machine):
```
$ assay verify-pack challenge_pack/good/
VERIFICATION PASSED
  Pack ID:   pack_20260222_ca2bb665
  Integrity: PASS
  Claims:    PASS
  Receipts:  3
  Signature: Ed25519 valid
Exit code: 0

$ assay verify-pack challenge_pack/tampered/
VERIFICATION FAILED
  Pack ID:   pack_20260222_ca2bb665
  Integrity: FAIL
  Error: Hash mismatch for receipt_pack.jsonl
Exit code: 2
```
One byte changed. Verification fails. No server access needed. No trust required. Just math.
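The "just math" part can be seen with nothing but the standard library: a pack manifest pins a hash for each file, and flipping a single byte changes that hash. A minimal illustration of the principle (not Assay's actual verifier, which also checks an Ed25519 signature):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest, standing in for the hash pinned in a pack manifest."""
    return hashlib.sha256(data).hexdigest()

# A receipt line as it was hashed into the pack.
original = b'{"model": "gpt-4", "tokens": 1204}'
pinned = digest(original)

# An attacker flips one byte: "gpt-4" -> "gpt-5".
tampered = original.replace(b"gpt-4", b"gpt-5")

print(digest(original) == pinned)   # True  -- verification passes
print(digest(tampered) == pinned)   # False -- hash mismatch, exit code 2
```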
Now try the policy violation demo:
```
assay demo-incident   # two-act scenario: honest PASS vs honest FAIL

Act 1: Agent uses gpt-4 with guardian check
  Integrity: PASS   Claims: PASS   Exit code: 0

Act 2: Someone swaps model to gpt-3.5-turbo, removes guardian
  Integrity: PASS   Claims: FAIL   Exit code: 1
```
Act 2 is an honest failure -- authentic evidence proving the run violated its declared standards. The evidence is real. The failure is real. Nobody can edit the history. Exit code 1.
Exit 1 is audit gold: a control failed, but the failure is detectable and retained. Auditors love that more than systems that always claim to pass.
Assay separates two questions on purpose:
- Integrity: "Were these bytes tampered with after creation?" (signatures, hashes, required files)
- Claims: "Does this evidence satisfy our declared governance checks?" (receipt types, counts, field values)
| Integrity | Claims | Exit | Meaning |
|---|---|---|---|
| PASS | PASS | 0 | Evidence checks out, behavior meets standards |
| PASS | FAIL | 1 | Honest failure: authentic evidence of a standards violation |
| FAIL | -- | 2 | Tampered evidence |
| -- | -- | 3 | Bad input (missing files, invalid arguments) |
The split is the point. Systems that can prove they failed honestly are more trustworthy than systems that always claim to pass.
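In CI, those exit codes translate directly into a gating decision. A hypothetical wrapper, assuming `assay` is on PATH; the policy choices here are illustrative, not Assay's defaults:

```python
import subprocess

# Meaning of each verify-pack exit code (from the table above).
MEANINGS = {
    0: "pass: evidence checks out, behavior meets standards",
    1: "honest failure: authentic evidence of a standards violation",
    2: "tampered evidence",
    3: "bad input (missing files, invalid arguments)",
}

def decide(code: int, allow_honest_failure: bool = False) -> bool:
    """Pure gating policy: should CI proceed given a verify-pack exit code?"""
    if code == 0:
        return True
    # Exit 1 is authentic evidence of a violation; most teams block on it,
    # but some retain the pack and continue in non-production pipelines.
    return code == 1 and allow_honest_failure

def gate(pack_dir: str, allow_honest_failure: bool = False) -> bool:
    """Run the real CLI and apply the policy."""
    code = subprocess.run(["assay", "verify-pack", pack_dir]).returncode
    print(f"exit {code}: {MEANINGS.get(code, 'unknown')}")
    return decide(code, allow_honest_failure)
```

Note that exit 2 (tampering) and exit 3 (bad input) never pass, regardless of policy.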
With real calls: `assay scan .` finds your actual OpenAI / Anthropic / Gemini / LiteLLM / LangChain call sites. `assay patch .` instruments them. Every real LLM call emits a signed receipt. The demos above use synthetic data so you can see verification without configuring anything.
Installing Assay gives you the CLI, receipt store, and proof-pack builder. It does not automatically record your app.
Receipts are emitted only when your runtime is instrumented:
- `assay patch .` inserts the right Assay integration for supported SDKs
- `patch()` wrappers emit receipts when model calls happen
- `AssayCallbackHandler()` does the same for LangChain callback flows
- `emit_receipt(...)` lets you record events manually in any stack
`assay run -- <your command>` then does three things:
- creates a trace id
- runs your app with `ASSAY_TRACE_ID` in the environment
- packages any emitted receipts into `proof_pack_<trace_id>/`
The result is a signed, offline-verifiable artifact:
```
app execution
  -> instrumented SDK or emit_receipt(...)
  -> receipts written to ~/.assay/...
  -> assay run packages them into proof_pack_<trace_id>/
  -> assay verify-pack checks the artifact offline
```
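The offline check at the end of that pipeline boils down to recomputing file hashes against a pinned manifest. A simplified sketch of the idea -- Assay's real pack format, filenames, and signature step (Ed25519) differ and are not shown here:

```python
import hashlib
import json

def build_manifest(files: dict) -> str:
    """Map each pack file name to its SHA-256 digest; this is what gets signed."""
    return json.dumps({name: hashlib.sha256(data).hexdigest()
                       for name, data in sorted(files.items())})

def verify(files: dict, manifest: str) -> list:
    """Return names of files whose bytes no longer match the manifest."""
    pinned = json.loads(manifest)
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() != pinned.get(name)]

pack = {"receipt_pack.jsonl": b'{"model": "gpt-4"}\n'}
manifest = build_manifest(pack)

pack["receipt_pack.jsonl"] = b'{"model": "gpt-5"}\n'   # one byte changed
print(verify(pack, manifest))   # ['receipt_pack.jsonl']
```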
```
# 1. Find uninstrumented LLM calls
assay scan . --report

# 2. Patch (one line per SDK, or auto-patch all)
assay patch .

# 3. Run + build a signed evidence pack
# -c receipt_completeness runs the built-in completeness check (see `assay cards list` for all options)
# everything after -- is your normal run command
assay run -c receipt_completeness -- python my_app.py

# 4. Verify
assay verify-pack ./proof_pack_*/

# 5. Generate report artifacts for security/compliance review
assay report . -o evidence_report.html --sarif

# 6. Optional: set and enforce score gates in CI
assay gate save-baseline
assay gate check . --min-score 60 --fail-on-regression
```

`assay scan . --report` finds every LLM call site (OpenAI, Anthropic, Google Gemini, LiteLLM, LangChain) and generates a self-contained HTML gap report. `assay patch` inserts the two-line integration. `assay run` wraps your command, collects receipts, and produces a signed 5-file evidence pack. `assay verify-pack` checks integrity + claims and exits with one of the four codes above. Then run `assay explain` on any pack for a plain-English summary.
Local models: Any OpenAI-compatible server (Ollama, LM Studio, vLLM, llama.cpp) works automatically -- Assay patches the OpenAI SDK at the class level, so `OpenAI(base_url="http://localhost:11434/v1")` emits receipts like any other provider. LiteLLM users get the same coverage via the LiteLLM integration (`ollama/llama3`, etc.).
Why now: EU AI Act Article 12 requires automatic logging for high-risk AI systems; Article 19 requires providers to retain automatically generated logs for at least 6 months. High-risk obligations apply from 2 Aug 2026 (Annex III) and 2 Aug 2027 (regulated products). SOC 2 CC7.2 requires monitoring of system components and analysis of anomalies as security events. "We have logs on our server" is not independently verifiable evidence. Assay produces evidence that is. See compliance citations for exact references.
Fastest path (recommended):
```
assay ci init github --run-command "python my_app.py" --min-score 60
```

This generates a 3-job GitHub Actions workflow:

- `assay-gate` (score enforcement, regression checks, JSON gate report artifact)
- `assay-verify` (proof pack generation + cryptographic verification)
- `assay-report` (HTML evidence report artifact + SARIF upload)
Manual path (advanced):
```
assay gate save-baseline
assay gate check . --min-score 60 --fail-on-regression --save-report assay_gate_report.json --verbose --json
assay run -c receipt_completeness -- python my_app.py
assay verify-pack ./proof_pack_*/ --lock assay.lock --require-claim-pass
assay report . -o evidence_report.html --sarif
```

The lockfile catches config drift. Verify-pack catches tampering. The gate blocks score regressions. Report produces the shareable artifact + SARIF. `assay diff` remains useful for deep forensics and budget/drift analysis. See Decision Escrow for the protocol model.
```
# Lock your verification contract
assay lock write --cards receipt_completeness -o assay.lock
```

Regression forensics:

```
assay diff ./proof_pack_*/ --against-previous --why
```

`--against-previous` auto-discovers the baseline pack. `--why` traces receipt chains to explain what regressed and which call sites caused it.

Cost/latency drift (from receipts):

```
assay analyze --history --since 7
```

Shows cost, latency percentiles, error rates, and per-model breakdowns from your local trace history.
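Latency percentiles like those reported by `assay analyze` are just order statistics over receipt fields. A toy computation over hypothetical receipt records -- the field layout is illustrative, not Assay's schema:

```python
import statistics

# Hypothetical receipts: (model, latency_ms, cost_usd)
receipts = [
    ("gpt-4", 820, 0.012), ("gpt-4", 1150, 0.015), ("gpt-4", 640, 0.009),
    ("gpt-4", 2900, 0.031), ("claude-3", 450, 0.004), ("claude-3", 510, 0.005),
]

latencies = sorted(ms for _, ms, _ in receipts)
p50 = statistics.median(latencies)
# Nearest-rank p95 (clamped to the last element for small samples).
p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
total_cost = sum(c for _, _, c in receipts)

print(f"p50={p50}ms p95={p95}ms total=${total_cost:.3f}")
# p50=730.0ms p95=2900ms total=$0.076
```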
What Assay detects, what it doesn't, and how to strengthen guarantees.
Assay detects:
- Retroactive tampering (edit one byte, verification fails)
- Selective omission under a completeness contract
- Claiming checks that were never run
- Policy drift from a locked baseline
Assay does not prevent:
- A fully fabricated false run (attacker controls the machine)
- Dishonest receipt content (receipts are self-attested)
- Timestamp fraud without an external time anchor
Completeness is enforced relative to the call sites enumerated by the scanner and/or declared by policy. Undetected call sites are a known residual risk, reduced via multi-detector scanning and CI gating.
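Selective-omission detection under that contract reduces to a set comparison: call sites the scanner enumerated versus call sites that actually produced receipts. A minimal sketch (the `file:line` identifiers are illustrative):

```python
# Call sites the scanner enumerated (e.g. file:line identifiers).
scanned_sites = {"app.py:42", "app.py:88", "agent/tools.py:17"}

# Call sites attested by receipts in this run's pack.
receipted_sites = {"app.py:42", "agent/tools.py:17"}

missing = scanned_sites - receipted_sites
if missing:
    # Under a completeness contract this is a claims FAIL (exit 1):
    # the evidence is authentic, but a declared call site left no receipt.
    print("completeness FAIL, missing receipts for:", sorted(missing))
```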
To strengthen guarantees:
- Transparency ledger (independent witness)
- CI-held org key + branch protection (separation of signer and committer)
- External timestamping (RFC 3161)
The cost of cheating scales with the complexity of the lie. Assay doesn't make fraud impossible -- it makes fraud expensive.
Assay is an evidence compiler for AI execution. If you've used a build system, you already know the mental model:
| Concept | Build System | Assay |
|---|---|---|
| Source | `.c` / `.ts` files | Receipts (one per LLM call) |
| Artifact | Binary / bundle | Evidence pack (5 files, 1 signature) |
| Tests | Unit / integration tests | Verification (integrity + claims) |
| Lock | `package-lock.json` | `assay.lock` |
| Gate | CI deploy check | CI evidence gate |
The core path is 6 commands:
```
assay quickstart         # discover
assay scan / assay patch # instrument
assay run                # produce evidence
assay verify-pack        # verify evidence
assay diff               # catch regressions
assay score              # evidence readiness (0-100, A-F)
```
Full command reference:
Getting started
| Command | Purpose |
|---|---|
| `assay quickstart` | One command: demo + scan + next steps |
| `assay status` | One-screen operational dashboard |
| `assay start demo\|ci\|mcp` | Guided entrypoints for trying, CI setup, or MCP auditing |
| `assay onboard` | Guided setup: doctor -> scan -> first run plan |
| `assay doctor` | Preflight check: is Assay ready here? |
| `assay version` | Print installed version |
Instrument + produce evidence
| Command | Purpose |
|---|---|
| `assay scan` | Find uninstrumented LLM call sites (`--report` for HTML) |
| `assay patch` | Auto-insert SDK integration patches into your entrypoint |
| `assay run` | Wrap command, collect receipts, build signed evidence pack |
Verify + analyze
| Command | Purpose |
|---|---|
| `assay verify-pack` | Verify integrity + claims (the 4 exit codes) |
| `assay verify-signer` | Extract and verify signer identity from a pack manifest |
| `assay explain` | Plain-English summary of an evidence pack |
| `assay analyze` | Cost, latency, error breakdown from pack or `--history` |
| `assay diff` | Compare packs: claims, cost, latency (`--against-previous`, `--why`, `--gate-*`) |
| `assay score` | Evidence Readiness Score (0-100, A-F) with anti-gaming caps |
Workflows + CI
| Command | Purpose |
|---|---|
| `assay flow try\|adopt\|ci\|mcp\|audit` | Guided workflow executor (dry-run by default, `--apply` to execute) |
| `assay ci init github` | Generate a GitHub Actions workflow |
| `assay ci doctor` | CI-profile preflight checks |
| `assay audit bundle` | Create portable audit bundle (tar.gz with verify instructions) |
| `assay compliance report` | Generate compliance evidence report |
Pack + baseline management
| Command | Purpose |
|---|---|
| `assay packs list` | List local proof packs |
| `assay packs show` | Show pack details |
| `assay packs pin-baseline` | Pin a pack as the diff baseline |
| `assay baseline set\|get` | Set or get the baseline pack for diff |
Key management
| Command | Purpose |
|---|---|
| `assay key generate` | Generate a new Ed25519 signing key |
| `assay key list` | List local signing keys and active signer |
| `assay key info` | Show key details (fingerprint, creation date) |
| `assay key set-active` | Set active signing key for future runs |
| `assay key rotate` | Generate a new key and switch active signer |
| `assay key export\|import` | Export or import keys for CI or team sharing |
| `assay key revoke` | Revoke a signing key |
Lockfile + cards
| Command | Purpose |
|---|---|
| `assay lock write` | Freeze verification contract to lockfile |
| `assay lock check` | Validate lockfile against current card definitions |
| `assay lock init` | Initialize a new lockfile interactively |
| `assay cards list` | List built-in run cards and their claims |
| `assay cards show` | Show card details, claims, and parameters |
MCP + policy
| Command | Purpose |
|---|---|
| `assay mcp-proxy` | Transparent MCP proxy: intercept tool calls, emit receipts |
| `assay mcp policy init` | Generate a starter MCP policy YAML file |
| `assay mcp policy validate` | Validate a policy file against the schema |
| `assay policy impact` | Analyze policy impact on existing evidence |
Incident forensics
| Command | Purpose |
|---|---|
| `assay incident timeline` | Build incident timeline from receipts |
| `assay incident replay` | Replay an incident from receipt chain |
Demos
| Command | Purpose |
|---|---|
| `assay demo-incident` | Two-act scenario: passing run vs failing run |
| `assay demo-challenge` | CTF-style good + tampered pack pair |
| `assay demo-pack` | Generate demo packs (no config needed) |
- Start Here -- 6 steps from install to evidence in CI
- Full Picture -- architecture, trust tiers, repo boundaries, release history
- Quickstart -- install, golden path, command reference
- For Compliance Teams -- what auditors see, evidence artifacts, framework alignment
- Compliance Citations -- exact regulatory references (EU AI Act, SOC 2, ISO 42001)
- Decision Escrow -- protocol model: agent actions don't settle until verified
- Roadmap -- phases, product boundary, execution stack
- Repo Map -- what lives where across the Assay ecosystem
- Pilot Program -- early adopter program details
- "No receipts emitted" after `assay run`: First, check whether your code has call sites: `assay scan .` -- if scan finds 0 sites, you may not be using a supported SDK yet. Installing Assay alone does not emit receipts; your runtime must be instrumented. If scan finds sites, check: (1) Is `# assay:patched` in the file, or did you add `patch()` / a callback? Run `assay scan . --report` to see patch status per file. (2) Did you install the SDK extra (`python3 -m pip install "assay-ai[openai]"`)? (3) Did `patch()` execute before the first model call? (4) Did you use `--` before your command (`assay run -- python app.py`)? Run `assay doctor` for a full diagnostic.
- LangChain projects: `assay patch` auto-instruments OpenAI and Anthropic SDKs but not LangChain (which uses callbacks, not monkey-patching). For LangChain, add `AssayCallbackHandler()` to your chain's `callbacks` parameter manually. See `src/assay/integrations/langchain.py` for the handler.
- `assay run python app.py` gives "No command provided": You need the `--` separator: `assay run -c receipt_completeness -- python app.py`. Everything after `--` is passed to the subprocess.
- Quickstart blocked on large directories: `assay quickstart` guards against scanning system directories (>10K Python files). Use `--force` to bypass: `assay quickstart --force`.
- macOS: `ModuleNotFoundError` inside `assay run` but works outside it: On macOS, `python3` on PATH may point to a different Python version than where assay and your SDK are installed (e.g. `python3` → 3.14, but packages are in 3.11). Use a virtual environment (recommended), or specify the exact interpreter: `assay run -- python3.11 app.py`. Check with `python3 --version` and compare to the Python where you installed Assay.
- Try it: `python3 -m pip install assay-ai && assay quickstart`
- Questions / feedback: GitHub Discussions
- Bug reports: Issues
- Want this in your stack in 2 weeks? Pilot program -- we instrument your AI workflows, set up CI gates, and hand you a working evidence pipeline. Open a pilot inquiry.
| Repo | Purpose |
|---|---|
| assay | Core CLI, SDK, conformance corpus (this repo) |
| assay-verify-action | GitHub Action for CI verification |
| assay-ledger | Public transparency ledger |
Apache-2.0