Blanc Quant LOB Engine (BQL Engine) is a deterministic C++20 replay + benchmarking harness for limit-order-book (LOB) workloads, built to answer one question:
Can we replay this exactly, under load, and prove it didn’t get slower (especially at the tails)?
It ships golden-state determinism checks, CI-enforced tail latency
budgets (p50/p95/p99/p99.9/p99.99), and audit-friendly artifacts
(bench.jsonl, metrics.prom, HTML report) so regressions fail early and
evidence is reproducible.
Scope note (important):
- This repo is the OSS exercise / benchmark harness (synthetic data + proof tooling).
- BQL 2.0 (Patent-Pending) is the production-shaped system: real ITCH replay,
  deterministic matching, binary audit journal, coordinated deterministic protection, and invariance proofs. See the relevant docs sections below.
| Metric Tier | Current (Jan 2026) | Target (vNext) | Status |
|---|---|---|---|
| Tier A: Match-only (core engine speed) | p50 1.25μs, p95 3.29μs, p99 5.67μs | p50 100–300μs, p95 200–600μs, p99 300–900μs | ✅ EXCEEDS TARGET |
| Tier B: In-process wire-to-wire (no network/disk) | Not yet separated | p50 0.5–1.5ms, p95 1–3ms, p99 2–5ms | 🎯 Planned |
| Tier C: Proof pipeline (full deterministic replay) | p50 ~16ms, p95 ~18ms, p99 ~20ms, p99.9 ~22ms, p99.99 ~24ms | p50 2–6ms, p95 4–10ms, p99 6–15ms, p99.9 ≤3× p99, p99.99 advisory | 🚧 Optimization Phase 2 |
| Throughput | 1M events/sec | 1–5M ops/sec | ✅ Baseline Established |
| Deterministic replay | ✅ Verified (100% digest consistency) | ✅ Enhanced with SCM | ✅ Production Ready |
Tail Latency Purity — p99.9 and p99.99 are measured on every run (≥1k samples for a valid p99.9, ≥10k samples for a stable p99.99). Runs emit `samples`, `p999_valid`, and `p9999_valid` to prevent under-sampled tails from being misinterpreted. p99.9 is gated at ≤ 3× the p99 budget; p99.99 is advisory. Tail-delta gating is validated by `tests/test_tail_latency.cpp`.
Selective Coordination Mode brings the “smallest breaker trips first” principle from power systems into trading engines. Instead of halting everything when there’s a slowdown, the engine disables or sheds only the affected subsystem — keeping the rest running and making incident boundaries clean and replayable.
- Zones: The engine is divided into protection zones (core match, risk checks, telemetry, journaling, snapshotting, adapters).
- Trip Ladder: If a zone (like telemetry) gets slow, only that zone is tripped first. If the problem persists, the next zone up the ladder is tripped, and so on — up to a full halt as a last resort.
- Coordination Curves: Each zone has its own latency budget and trip logic (e.g., “if p99 latency is breached for M out of N events, trip this zone”).
- Escalation & Recovery: The system escalates only if the problem persists, and recovers only after a sustained period of good performance (hysteresis).
- Deterministic Journal: Every trip, recovery, and action is logged so you can replay and audit exactly what happened.
```
┌────────────┐  trip   ┌──────────────┐  trip   ┌──────────────┐  trip   ┌────────────┐  trip   ┌───────┐
│ Telemetry  │ ──────▶ │  Journaling  │ ──────▶ │ Risk Checks  │ ──────▶ │ Core Match │ ──────▶ │ HALT  │
└────────────┘         └──────────────┘         └──────────────┘         └────────────┘         └───────┘
```
Gate policy details live in `docs/gates.md`; CI wiring is under
`.github/workflows/verify-bench.yml`.
- Golden digest + explicit tail budgets so regressions fail CI early.
- Observability-first artifacts: `bench.jsonl` + `metrics.prom` for diffing, dashboards, and automated SLO checks.
- Conformance + bench scripts are wired for cron / CI, not just local runs.
- CI-ready: determinism, bench, and CodeQL workflows pinned to SHAs.
- Designed to slot into HFT / research pipelines as a replay + guardrail module rather than a one-off benchmark toy.
Prereqs: CMake ≥ 3.20, Ninja, modern C++20 compiler, Boost, and
nlohmann-json.
```
cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
ls build/bin/replay
```

Notes:
- `build/compile_commands.json` aids IDEs.
- Release builds add stack protector, FORTIFY, and PIE when supported.
- Enable sanitizers via `-DENABLE_SANITIZERS=ON` on Debug builds.
```
# Default run
build/bin/replay

# Custom input and limits
build/bin/replay --input path/to/input.bin \
  --gap-ppm 0 --corrupt-ppm 0 --skew-ppm 0 --burst-ms 0
```

Artifacts land in `artifacts/bench.jsonl`, `artifacts/metrics.prom`, and the
HTML analytics dashboard at `artifacts/report/index.html`.

Deterministic fixtures live under `data/golden/`; regenerate with `gen_synth`
as needed.
Build the image and run the containerized replay:

```
# Build (from repo root)
docker build -t blanc-quant-lob-engine:local .

# Run default golden replay inside the container
docker run --rm blanc-quant-lob-engine:local /app/replay --input /app/data/golden/itch_1m.bin

# Pass a custom file mounted from host
docker run --rm -v "$PWD/data:/data" blanc-quant-lob-engine:local \
  /app/replay --input /data/your_trace.bin
```

Verification and benchmark helpers:

```
scripts/verify_golden.sh     # digest determinism check
scripts/bench.sh 9           # multi-run benchmark harness
scripts/prom_textfile.sh ... # emit metrics.prom schema
scripts/verify_bench.py      # release gate enforcement
scripts/bench_report.py      # render HTML latency/digest dashboard
```

- Golden digest resides at `data/golden/itch_1m.fnv`; `ctest -R golden_state` plus `scripts/verify_golden.sh` ensure reproducibility.
- Use `cmake --build build -t golden_sample` (or `make golden`) to refresh fixtures after new traces are accepted.
Ubuntu:

```
sudo apt-get update
sudo apt-get install -y cmake ninja-build libboost-all-dev \
  libnlohmann-json3-dev jq
```

macOS:

```
brew update
brew install cmake ninja jq nlohmann-json
```

Enable tests with `-DBUILD_TESTING=ON` and run `ctest --output-on-failure -R book_snapshot` from `build/`. Tests expect `./bin/replay` within the working
directory.
`./scripts/release_package.sh` creates rights-marked zips plus manifests.

```
cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
./scripts/release_package.sh --build-dir build --art-dir artifacts \
  --out-dir artifacts/release --git-sha "$(git rev-parse --short HEAD)"
```

Add `--sign` for optional detached GPG signatures. The snapshot-nightly
workflow runs this and uploads the bundle automatically.
- `scripts/pin_actions_by_shas.sh` keeps workflow `uses:` entries pinned.
- `.github/workflows/verify-bench.yml` exposes a manual/cron gate run.
- `.github/workflows/determinism.yml` surfaces p50/p95/p99 in the job summary and emits notices for easy viewing.
- `.github/workflows/ci.yml` mirrors bench summary surfacing in the job summary.
- `.github/workflows/container-scan.yml` pins Trivy to v0.67.2, runs fs & image scans non-blocking, and uploads SARIF to the Security tab.
- `docs/technology_transition.md` + `docs/deliverable_marking_checklist.md` cover gov delivery and rights-marking guidance.
```
build/bin/replay --input data/golden/itch_1m.bin --cpu-pin 3
# or
CPU_PIN=3 make bench
```

Pinning reduces tail variance on some hosts; measure on your hardware.
```
include/    # headers
src/        # replay engine, detectors, telemetry
scripts/    # bench, verify, release, pin helpers
artifacts/  # generated outputs (gitignored)
```
SECURITY.md documents coordinated disclosure. CI integrates detect-secrets
and CodeQL. Signing helpers live under `scripts/` if you need to stamp
artifacts.

Blanc LOB Engine is opinionated toward safety-by-default: determinism,
repeatable benches, and explicit tail SLOs are non-negotiable controls rather
than after-the-fact monitoring.
See CONTRIBUTING.md for workflow expectations. Pull requests should pin new
dependencies, ship matching tests, and update docs for externally visible
changes.
Distributed under the Business Source License 1.1 (LICENSE.txt). Research and
non-commercial evaluation are permitted; production use requires a commercial
license until the change date defined in COMMERCIAL_LICENSE.md.
Research users can clone and run the engine today; commercial or production
deployment should follow the terms in COMMERCIAL_LICENSE.md.
This release includes the prebuilt binaries and necessary artifacts for version 1.00 of the Blanc LOB Engine. If you are interested in accessing the full source code, please reach out directly for further details. The project is fully open and available for students and hobbyists to explore and use.
This section documents the HTML analytics report generated by
`scripts/bench_report.py` and the visitor-tracking integration.
Run the benchmark report generator after completing benchmark runs:
```
python3 scripts/bench_report.py --bench-file artifacts/bench.jsonl \
  --metrics-file artifacts/metrics.prom --output-dir artifacts/report
```

The report will be generated at `artifacts/report/index.html`.
The repository uses visitor badges to track page views. Badge format:
Project badge:
Issue-specific badge:
Replace <issue_id> with the GitHub issue number.