
feat(bench): load-100k.ts harness with p50/p90/p99 output#363

Merged
rohitg00 merged 1 commit into main from feat/bench-load-100k-346 on May 13, 2026

Conversation

@rohitg00 (Owner) commented May 13, 2026

Closes #346.

What this is

A reproducible load harness for the three hot REST endpoints, so we have a number to point at when somebody asks "what's p99 at 100k memories under concurrency 100?"

Hand-rolled, dependency-free Node 20 / TypeScript. It hits a local agentmemory daemon over real HTTP, collects per-request latency with performance.now(), summarizes it as nearest-rank p50 / p90 / p99 plus min / max / errors / throughput, and writes a schema-versioned JSON report per run.
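The nearest-rank summarization can be sketched like this (a simplified stand-in for the actual helper in benchmark/lib/percentiles.ts; names and signature here are illustrative):

```typescript
// Sketch of a nearest-rank percentile helper (illustrative; the real
// implementation lives in benchmark/lib/percentiles.ts).
// `sorted` must be ascending; `p` is a percentile in (0, 100].
function pXX(sorted: number[], p: number): number {
  if (sorted.length === 0) return NaN;
  // Nearest-rank: 1-based rank = ceil(p/100 * n), clamped to [1, n].
  const rank = Math.min(
    sorted.length,
    Math.max(1, Math.ceil((p / 100) * sorted.length)),
  );
  return sorted[rank - 1];
}

const latencies = [120.1, 95.4, 180.9, 150.2, 110.7].sort((a, b) => a - b);
console.log(pXX(latencies, 50)); // nearest-rank median -> 120.1
```

Nearest-rank always returns an actual observed sample (no interpolation), which is why it behaves well at small sample counts like 200 ops per cell.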

Matrix dimensions

  • N (seeded memories before the cell runs): {1000, 10000, 100000} — set via BENCH_N= (comma list).
  • C (concurrent in-flight requests during the cell): {1, 10, 100} — set via BENCH_C=.
  • ops per cell: 200 — set via BENCH_OPS=. Enough samples for stable p99 without dragging a 100k-seed run past tens of minutes.

N is processed ascending so each cell builds on the previous seed work — no re-seeding between cells.
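A comma-list knob like BENCH_N= can be parsed and ordered ascending roughly as follows (parseList is a hypothetical name; the harness's actual parsing may differ):

```typescript
// Hypothetical parser for comma-separated env knobs like BENCH_N=1000,10000.
// Falls back to defaults on missing or unparseable input.
function parseList(raw: string | undefined, fallback: number[]): number[] {
  if (!raw) return fallback;
  const vals = raw.split(",").map((s) => parseInt(s.trim(), 10));
  return vals.every(Number.isFinite) ? vals : fallback;
}

// Ascending order so each cell builds on the previous cell's seed work.
const nValues = parseList(process.env["BENCH_N"], [1000, 10000, 100000]).sort(
  (a, b) => a - b,
);
console.log(nValues);
```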

Endpoints under test

  • POST /agentmemory/remember
  • POST /agentmemory/smart-search
  • GET /agentmemory/memories?latest=true

Content generation

Synthetic memory bodies come from a small noun / verb / concept vocabulary fed by a mulberry32(BENCH_SEED) PRNG. Default seed 0xC0FFEE. Same seed + same daemon build = byte-identical seed corpus, so re-running against the same git sha gives the same content mixture going in and latency variance has to come from the daemon, not JSON payload jitter.
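mulberry32 is a well-known public-domain 32-bit PRNG; the seeded generation pattern looks roughly like this (the word pools below are made up for illustration, not the harness's actual vocabulary):

```typescript
// mulberry32: tiny deterministic 32-bit PRNG (public-domain algorithm).
// Same seed => same sequence of floats in [0, 1) on every run.
function mulberry32(seed: number): () => number {
  let a = seed | 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const pick = <T>(rng: () => number, pool: T[]): T =>
  pool[Math.floor(rng() * pool.length)];

// Hypothetical pools; the harness uses its own noun/verb/concept lists.
const nouns = ["cache", "index", "daemon", "socket"];
const verbs = ["flushed", "rebuilt", "drained", "queued"];

const rng = mulberry32(0xc0ffee); // default seed from the PR description
console.log(`${pick(rng, nouns)} ${pick(rng, verbs)}`);
```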

Output

Each run writes benchmark/results/load-100k-<short-git-sha>.json (mkdir -p'd if missing). Top-level schema_version: 1 so future format changes don't silently break consumers. The harness also prints a compact table to stdout for quick eyeball comparison.
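An illustrative TypeScript shape for the report (field names are guesses; the committed example JSON under benchmark/results/ is the authoritative schema):

```typescript
// Illustrative report shape; actual field names may differ from the
// committed load-100k-<short-git-sha>.json.
interface CellResult {
  endpoint: string;
  n: number;        // memories seeded before this cell ran
  c: number;        // concurrent in-flight requests
  ops: number;
  errors: number;
  p50Ms: number;
  p90Ms: number;
  p99Ms: number;
  opsPerSec: number;
}

interface LoadReport {
  schema_version: number; // 1 today; bump on breaking format changes
  git_sha: string;
  cells: CellResult[];
}

// Consumers can gate on schema_version instead of guessing.
function isSupported(report: Pick<LoadReport, "schema_version">): boolean {
  return report.schema_version === 1;
}
console.log(isSupported({ schema_version: 1 }));
```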

Verification run (small-N)

BENCH_N=1000 BENCH_C=10 BENCH_OPS=200 npm run bench:load against a fresh daemon — three cells, zero errors, JSON written. The full 100k matrix is intentionally deferred to CI / release time, per the issue's "single-process for now" scope.

| endpoint | N | C | ops | err | p50 ms | p90 ms | p99 ms | ops/s |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| POST /agentmemory/remember | 1000 | 10 | 200 | 0 | 577.43 | 607.34 | 675.27 | 17.38 |
| POST /agentmemory/smart-search | 1000 | 10 | 200 | 0 | 160.06 | 185.61 | 224.35 | 61.26 |
| GET /agentmemory/memories?latest=true | 1000 | 10 | 200 | 0 | 395.46 | 475.71 | 542.65 | 24.84 |

p99 is the headline number for capacity planning. p50 + throughput are context. The raw JSON is committed at benchmark/results/load-100k-96c0ed0.json as the example result.

What's wired

  • benchmark/load-100k.ts — main harness. File header documents env knobs.
  • benchmark/lib/percentiles.ts — zero-dep nearest-rank pXX(sorted, p) helper.
  • benchmark/README.md — how to run, what gets measured, where results land, and why p99 is the number you want.
  • package.json — wires npm run bench:load (uses tsx already in devDeps).
  • CHANGELOG.md — [Unreleased] entry + a ### Performance section placeholder describing where per-release p50/p90/p99 bullets should land going forward.

Out of scope

  • Multi-node / distributed load.
  • LLM-compression throughput (separate shape, separate issue).
  • Fixing the suspected BM25 full-scan bottleneck — that's the next PR (per the issue acceptance criteria).

Test plan

  • npm run bench:load with BENCH_N=1000 BENCH_C=10 BENCH_OPS=200 against a freshly-started daemon — three cells, zero errors, JSON written.
  • npm test excluding the pre-existing test/mcp-standalone.test.ts failures (11 failures present on main HEAD, unrelated to this PR): 875 / 875 pass.
  • CI: run full-matrix BENCH_N=1000,10000,100000 BENCH_C=1,10,100 at release time, attach the resulting JSON to the release notes.
  • Confirm AGENTMEMORY_BENCH_AUTOSTART=1 path on a fresh checkout (npm run build && AGENTMEMORY_BENCH_AUTOSTART=1 BENCH_N=1000 npm run bench:load).

Summary by CodeRabbit

  • New Features

    • Added a load-testing harness to measure performance across multiple endpoints, capturing latency percentiles (p50, p90, p99), throughput, error rates, and other key metrics. Configure memory counts and concurrency levels for your testing scenarios. Run benchmarks using npm run bench:load.
  • Documentation

    • Added comprehensive benchmarking documentation and framework for publishing per-release performance results.

Review Change Stack

Adds a reproducible, dependency-free load harness so we can answer
"what's p99 at 100k memories under concurrency 100?" with a number
instead of a shrug.

The harness seeds N synthetic memories against a local agentmemory
daemon (defaults to http://localhost:3111, optional autostart via
AGENTMEMORY_BENCH_AUTOSTART=1), then drives a matrix of
(N, concurrency, endpoint) cells with hand-rolled Promise.allSettled
batches. Per-request latency is collected via performance.now() and
summarized as nearest-rank p50 / p90 / p99 plus min / max / errors
and wall-clock throughput. Results are written to
benchmark/results/load-100k-<short-git-sha>.json with a
schema_version field so future format changes don't silently break
consumers.
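The bounded-concurrency driver described above can be sketched as follows (a simplified stand-in; the real driveLoad in benchmark/load-100k.ts also counts errors per request):

```typescript
// Simplified worker-pool driver: issues exactly `ops` operations with at
// most `concurrency` in flight, returning per-operation latencies in ms.
async function driveLoad(
  ops: number,
  concurrency: number,
  op: (i: number) => Promise<void>,
): Promise<number[]> {
  const latencies: number[] = [];
  let issued = 0;
  async function worker(): Promise<void> {
    while (true) {
      const i = issued++; // safe: the Node event loop is single-threaded
      if (i >= ops) return;
      const t0 = performance.now();
      try {
        await op(i);
      } catch {
        // the real harness records this as an error; here we just continue
      }
      latencies.push(performance.now() - t0);
    }
  }
  await Promise.allSettled(
    Array.from({ length: concurrency }, () => worker()),
  );
  return latencies.sort((a, b) => a - b); // ready for nearest-rank percentiles
}
```

Each worker pulls the next operation index as soon as its previous request completes, so the pool keeps exactly `concurrency` requests in flight until the fixed operation budget is exhausted.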

Defaults match issue #346: N in {1000, 10000, 100000} x C in
{1, 10, 100} x three endpoints (POST /agentmemory/remember,
POST /agentmemory/smart-search, GET /agentmemory/memories?latest=true).
Each cell issues BENCH_OPS=200 requests by default — enough samples
for stable p99 without dragging a 100k-seed run past tens of minutes.

Content is generated by a small noun/verb/concept vocabulary fed by a
mulberry32(BENCH_SEED) PRNG so re-running the harness against the
same daemon build yields the same seed corpus. Reproducibility, not
realism, is the point — latency variance comes from the daemon, not
JSON payload jitter.

Files:
- benchmark/load-100k.ts: main harness
- benchmark/lib/percentiles.ts: zero-dep pXX helper, nearest-rank
- benchmark/README.md: how to run, what gets measured, where results
  land, and why p99 is the number you want for capacity planning
- benchmark/results/load-100k-96c0ed0.json: example result from a
  small-N (N=1000, C=10) verification run against a fresh daemon
- package.json: wires `npm run bench:load`
- CHANGELOG.md: Unreleased entry + a Performance section placeholder
  describing where per-release numbers should land going forward

Verified locally at BENCH_N=1000 BENCH_C=10 BENCH_OPS=200 — three
cells, zero errors, JSON written. Full 100k matrix is intentionally
deferred to CI/release time. Closes #346.
vercel Bot commented May 13, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
agentmemory Ready Ready Preview, Comment May 13, 2026 7:51pm


coderabbitai Bot commented May 13, 2026

📝 Walkthrough

This PR introduces a complete load-testing harness that seeds synthetic memories and measures REST API latency percentiles (p50/p90/p99) and throughput across configurable memory counts (1k, 10k, 100k) and concurrency levels (1, 10, 100), writing schema-versioned JSON results and integrating via npm script and release documentation.

Changes

Load-Testing Harness Implementation and Integration

| Layer / File(s) | Summary |
| --- | --- |
| Percentile calculation utility: benchmark/lib/percentiles.ts | Exports pXX() helper to compute nearest-rank percentiles from pre-sorted numeric arrays, with edge-case handling for empty input, boundary percentiles, and clamped rank computation. |
| Configuration and readiness infrastructure: benchmark/load-100k.ts (lines 72–165) | RunConfig parses environment variables and defaults; waitForLivez polls daemon readiness; maybeStartDaemon spawns and drains a local daemon process; shortGitSha generates reproducible file identifiers. |
| Load driver and statistical aggregation: benchmark/load-100k.ts (lines 166–242) | driveLoad executes fixed-count async operations with bounded concurrency, collecting per-request latencies and errors; summarize computes throughput, p50/p90/p99, and min/max from latency samples and wall time. |
| Synthetic data generation and memory seeding: benchmark/load-100k.ts (lines 26–71, 244–286) | Mulberry32 PRNG and predefined pools generate reproducible synthetic "observation" records; seedMemories POSTs these to /agentmemory/remember with configurable concurrency to pre-populate test state. |
| Endpoint-specific measurement wrappers: benchmark/load-100k.ts (lines 287–366) | measureRemember (POST), measureSmartSearch (POST with fixed queries), and measureMemoriesLatest (GET) wrap driveLoad for each endpoint and return aggregated latency/throughput statistics. |
| Orchestration, reporting, and entrypoint: benchmark/load-100k.ts (lines 1–25, 367–528) | Main lifecycle coordinates daemon startup/shutdown, readiness polling, incremental seeding, measurement matrix execution, JSON report generation with schema metadata, stdout table printing, and error handling. |
| Documentation and project integration: CHANGELOG.md, benchmark/README.md, benchmark/results/load-100k-96c0ed0.json, package.json | CHANGELOG documents the new harness; README specifies measurement semantics and defaults; example results JSON demonstrates schema (v1) and output format; npm script bench:load wires the harness entry point. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A harness springs forth, bundled tight,
With synthetic mem'ries, latencies bright,
P50 and P99 metrics align,
No deps required—just benchmarks divine,
From seeded chaos, performance will shine! 📊✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 17.65%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | Title directly matches the main deliverable: a load-100k.ts harness that outputs p50/p90/p99 percentiles. |
| Linked Issues check | ✅ Passed | The PR implements all core requirements from #346: dependency-free harness seeding N ∈ {1k,10k,100k} memories, driving C ∈ {1,10,100} concurrency, testing three endpoints, outputting p50/p90/p99 to schema-versioned JSON, and exposing npm run bench:load. |
| Out of Scope Changes check | ✅ Passed | All changes are directly scoped to benchmark harness requirements: the load test implementation, percentile utility, documentation, package.json script, example results, and CHANGELOG entry. No unrelated modifications. |



Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

ESLint skipped: no ESLint configuration detected in root package.json. To enable, add eslint to devDependencies.



coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
benchmark/load-100k.ts (1)

140-151: ⚡ Quick win

maybeStartDaemon swallows spawn failures.

There's no 'error' or 'exit' listener on the child, so a failed spawn (e.g. EACCES on the CLI binary, or the daemon crashing on startup) shows up only as a waitForLivez timeout 30 seconds later. Attaching an error/early-exit listener lets the harness fail fast with a useful message instead of looking like a network problem.

♻️ Proposed handler — fail fast on spawn / early exit
   const child = spawn(process.execPath, [cliPath, "start"], {
     stdio: ["ignore", "pipe", "pipe"],
     detached: false,
   });
+  child.on("error", (err) => {
+    console.error(`[load-100k] daemon spawn error:`, err);
+  });
+  child.once("exit", (code, signal) => {
+    if (code !== 0 && code !== null) {
+      console.error(
+        `[load-100k] daemon exited early code=${code} signal=${signal ?? "-"}`,
+      );
+    }
+  });
   child.stdout?.on("data", () => {
     /* drain */
   });
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@benchmark/load-100k.ts` around lines 140 - 151, maybeStartDaemon currently
spawns a child process with spawn(...) and drains stdio but doesn’t attach
'error' or early 'exit' handlers, so spawn failures only surface as a
waitForLivez timeout; update maybeStartDaemon to attach child.on('error', ...)
to immediately reject/throw with the spawn error and child.on('exit', (code,
signal) => ...) to fail fast if the daemon exits before becoming healthy
(include code/signal in the message), and ensure these handlers clean up (remove
listeners or resolve/reject only once) to avoid memory leaks and duplicate
resolution.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c8b52df9-73fe-432f-b0f8-189246073ef1

📥 Commits

Reviewing files that changed from the base of the PR and between 96c0ed0 and 643f009.

📒 Files selected for processing (6)
  • CHANGELOG.md
  • benchmark/README.md
  • benchmark/lib/percentiles.ts
  • benchmark/load-100k.ts
  • benchmark/results/load-100k-96c0ed0.json
  • package.json

Comment thread benchmark/load-100k.ts
Comment on lines +99 to +100
opsPerCell: parseInt(process.env["BENCH_OPS"] || "200", 10) || 200,
seed: parseInt(process.env["BENCH_SEED"] || "12648430", 10) || 12648430,

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

BENCH_SEED=0 (and BENCH_OPS=0) are silently replaced with defaults.

The parseInt(...) || fallback idiom on lines 99–100 conflates "unparseable" with "legitimate zero". BENCH_SEED=0 is a perfectly valid seed for mulberry32, but this code rewrites it to 12648430, which surprises anyone trying to reproduce a previously-published run that used seed 0. Same issue, less impactful, for BENCH_OPS=0.

🛡️ Proposed fix — distinguish parse failure from zero
-    opsPerCell: parseInt(process.env["BENCH_OPS"] || "200", 10) || 200,
-    seed: parseInt(process.env["BENCH_SEED"] || "12648430", 10) || 12648430,
+    opsPerCell: (() => {
+      const v = parseInt(process.env["BENCH_OPS"] ?? "", 10);
+      return Number.isFinite(v) && v > 0 ? v : 200;
+    })(),
+    seed: (() => {
+      const v = parseInt(process.env["BENCH_SEED"] ?? "", 10);
+      return Number.isFinite(v) ? v : 12648430;
+    })(),
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@benchmark/load-100k.ts` around lines 99 - 100, The current use of
parseInt(...) || fallback for opsPerCell and seed treats 0 as falsy and replaces
legitimate zero values with defaults; update the parsing so it distinguishes
parse failures from zero—parse with parseInt(process.env["BENCH_OPS"] ?? "200",
10) and parseInt(process.env["BENCH_SEED"] ?? "12648430", 10) (or parse then
test Number.isNaN) and only fall back to 200 and 12648430 when the parsed result
is NaN; change the assignments for opsPerCell and seed accordingly to preserve a
configured 0 while still defaulting on invalid input.

Comment thread benchmark/load-100k.ts
Comment on lines +244 to +285
async function seedMemories(
  baseUrl: string,
  count: number,
  rng: () => number,
  seedConcurrency = 32,
): Promise<{ seeded: number; errors: number; wallMs: number }> {
  let issued = 0;
  let seeded = 0;
  let errors = 0;
  const t0 = performance.now();
  async function worker(): Promise<void> {
    while (true) {
      const i = issued++;
      if (i >= count) return;
      const body = JSON.stringify({
        content: buildContent(rng, i),
        type: "observation",
      });
      try {
        const res = await fetch(`${baseUrl}/agentmemory/remember`, {
          method: "POST",
          headers: { "content-type": "application/json" },
          body,
          signal: AbortSignal.timeout(30_000),
        });
        if (res.ok) {
          seeded++;
        } else {
          errors++;
        }
        // drain body to free the socket
        await res.text().catch(() => "");
      } catch {
        errors++;
      }
    }
  }
  await Promise.allSettled(
    Array.from({ length: seedConcurrency }, () => worker()),
  );
  return { seeded, errors, wallMs: performance.now() - t0 };
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Shared mutable RNG across concurrent workers breaks the "deterministic content" guarantee.

seedMemories shares one rng closure across seedConcurrency workers. After the first synchronous burst (each worker grabs an i and consumes ~5 rng values before its first await fetch), every subsequent rng consumption is interleaved with HTTP completions, so the rng→i pairing depends on network timing. Two runs against the same daemon build with the same seed will produce different content for the same i once i >= seedConcurrency. The PR description explicitly promises reproducibility via mulberry32(BENCH_SEED), so this is a real regression of that contract — and it also leaks into measureRemember below (same pattern with probeRng).

Derive the per-record rng from i so content is a pure function of (seed, i) regardless of completion order:

🛡️ Proposed fix — make content a pure function of (seed, i)
 async function seedMemories(
   baseUrl: string,
   count: number,
-  rng: () => number,
+  baseSeed: number,
   seedConcurrency = 32,
 ): Promise<{ seeded: number; errors: number; wallMs: number }> {
   let issued = 0;
   let seeded = 0;
   let errors = 0;
   const t0 = performance.now();
   async function worker(): Promise<void> {
     while (true) {
       const i = issued++;
       if (i >= count) return;
+      const wrng = mulberry32((baseSeed + i) >>> 0);
       const body = JSON.stringify({
-        content: buildContent(rng, i),
+        content: buildContent(wrng, i),
         type: "observation",
       });

…and update the call site (around line 406) to pass cfg.seed + seededSoFar instead of a constructed rng. Apply the same pattern in measureRemember (lines 294–307) where probeRng is shared across driveLoad workers.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@benchmark/load-100k.ts` around lines 244 - 285, seedMemories currently shares
one rng across concurrent workers which makes content for a given i
non-deterministic; change content generation to be a pure function of (seed, i)
by deriving a per-record RNG from the seed and index instead of using the shared
rng. Concretely, update buildContent (or an overload) to accept a seed and the
record index (or accept a per-record rng created by calling mulberry32(seed +
i)) and use that to generate content inside seedMemories.worker; replace uses of
the shared rng in seedMemories and in measureRemember/driveLoad (where probeRng
is shared) so each call computes its own mulberry32(cfg.seed + i) (or equivalent
deterministic derivation) before generating the body, ensuring reproducible
content for every i.

@rohitg00 rohitg00 merged commit d151746 into main May 13, 2026
5 checks passed
@rohitg00 rohitg00 deleted the feat/bench-load-100k-346 branch May 13, 2026 20:06


Development

Successfully merging this pull request may close these issues.

bench/load-100k.ts harness — published p50/p99 per release

1 participant