
perf(pm): bump manifests-concurrency-limit 64 → 256 for p1_resolve #2916

Draft
elrrrrrrr wants to merge 29 commits into next from perf/p1-resolve-concurrency

Conversation

@elrrrrrrr (Contributor)

What

Cuts utoo's p1_resolve wall to close the gap with bun. Two-part change:

  1. Instrumentation (`crates/ruborist/src/util/timing.rs`): a per-fetch atomic accumulator that records (request, body, parse) split timings inside `fetch_full_manifest` + `fetch_version_manifest`, dumped at INFO level after the preload and BFS phases. A sketch of the shape follows this list.

  2. Concurrency bump: `manifests-concurrency-limit` default 64 → 256, matching bun's observed working point against npmjs.org.
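
A minimal sketch of the accumulator shape from item 1, assuming the field names of the `FETCH_TIMINGS` static quoted in the review below; the `record` helper is illustrative, not necessarily the exact API in `timing.rs`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Duration;

pub struct FetchTimings {
    pub count: AtomicU64,
    pub request_us: AtomicU64,
    pub body_us: AtomicU64,
    pub parse_us: AtomicU64,
    pub bytes: AtomicU64,
}

impl FetchTimings {
    /// Lock-free: each fetch does five relaxed fetch_adds, so every
    /// concurrent fetch task can record without contention.
    pub fn record(&self, request: Duration, body: Duration, parse: Duration, bytes: u64) {
        self.count.fetch_add(1, Ordering::Relaxed);
        self.request_us.fetch_add(request.as_micros() as u64, Ordering::Relaxed);
        self.body_us.fetch_add(body.as_micros() as u64, Ordering::Relaxed);
        self.parse_us.fetch_add(parse.as_micros() as u64, Ordering::Relaxed);
        self.bytes.fetch_add(bytes, Ordering::Relaxed);
    }
}
```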

Evidence (local Mac, ant-design, cold)

```
preload_wall = 22378ms
bfs_wall = 236ms (1% of p1)

fetch-timings: n=2730
sum_request = 1089s (88% — TCP+TLS+HTTP RTT)
sum_body = 138s (11% — body)
sum_parse = 2s (0.16% — JSON)
```

Effective parallelism during preload was 1089s / 22.4s ≈ 49× — the 64-slot cap was at saturation. Per-request RTT dominates; the only lever is more in-flight requests.

Pcap from prior PRs:

  • bun: ~260 TCP streams on resolve, p1 wall 2.3s
  • utoo @ cap=64: ~70 streams, p1 wall 3.2s

Expected on GHA

CI phases bench should show p1_resolve drop toward bun's range. The fetch-breakdown lines in CI logs will tell us exactly how the 64→256 bump redistributes time.

Risk

  • npmjs.org has demonstrated it accepts 260+ concurrent requests from one origin (bun proves it). 256 has headroom.
  • Antgroup's npm mirror is also configured to support large concurrency, per past benchmarks.
  • The new INFO-level breakdown lines are safe to leave in — they fire once per resolve, and the atomic recording is lock-free.

🤖 Generated with Claude Code

…down

p1_resolve has been ~0.9s behind bun on phases bench for the past
several PRs. Pcap on prior runs measured bun opening ~260 parallel
TCP streams against registry.npmjs.org for resolve, while utoo
opened ~70 (the 64 manifests-concurrency-limit cap was at saturation).

Adding fetch-breakdown timing in ruborist showed where p1's 22s
(local Mac) actually goes:

  fetch-timings: n=2730
    sum_request   = 1089s   (88% — TCP+TLS+HTTP RTT to first byte)
    sum_body      = 138s    (11% — body download)
    sum_parse     = 2s      (0.16% — simd_json on rayon)

The dominant cost is per-request RTT, not parsing or body transfer.
The lever is the cap on concurrent in-flight requests.

This commit:

1. Adds `crates/ruborist/src/util/timing.rs` — process-wide atomic
   accumulator that records per-fetch (request_us, body_us,
   parse_us, bytes) inside both `fetch_full_manifest` and
   `fetch_version_manifest`. Reset before each preload phase, dumped
   at INFO level after preload + bfs.

2. Bumps `manifests-concurrency-limit` default 64 → 256 to match
   bun's observed working point against npmjs.org.

CI bench will validate. Expected: p1 utoo wall drops toward bun's
range (~2.3s on GHA).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@elrrrrrrr added the `benchmark` (Run pm-bench on PR) and `A-Pkg Manager` (Area: Package Manager) labels on May 8, 2026
@gemini-code-assist (Bot) left a comment


Code Review

This pull request increases the manifest fetch concurrency limit to 256 and introduces a performance-tracking utility, FetchTimings, that measures and logs durations for network requests, body downloads, and JSON parsing during dependency resolution. Review feedback notes two issues: the timing-reset logic can report stale data if the preload phase is skipped, and the global static used for metrics is unsafe for concurrent resolutions.

```rust
    return;
}

crate::util::FETCH_TIMINGS.reset();
```


Severity: medium

The FETCH_TIMINGS reset is currently performed inside run_preload_phase, but it is skipped if config.skip_preload is true (line 755). In a long-running process or library context where multiple resolutions occur, this leads to stale timing data from previous runs being reported in the BFS phase logs. It is recommended to move the reset to the start of the entry point build_deps_with_config to ensure each resolution starts with a clean state.

Comment on lines +107 to +113
```rust
pub static FETCH_TIMINGS: FetchTimings = FetchTimings {
    count: AtomicU64::new(0),
    request_us: AtomicU64::new(0),
    body_us: AtomicU64::new(0),
    parse_us: AtomicU64::new(0),
    bytes: AtomicU64::new(0),
};
```


Severity: medium

Using a global static FETCH_TIMINGS for accumulating fetch metrics makes the library non-thread-safe for concurrent dependency resolutions. If multiple resolutions are executed in parallel within the same process, their metrics will be interleaved and the reset() calls will interfere with each other. For a library like ruborist, it would be better to encapsulate these metrics within a context object (like BuildDepsConfig) or pass them explicitly through the call stack to support concurrent usage.

elrrrrrrr and others added 22 commits May 8, 2026 22:25
Two changes, made after the GHA bench on the previous commit (PR #2916,
run 25559625024) showed the concurrency=256 hypothesis was wrong in
GHA's environment.

Revert concurrency 256 → 64
---------------------------

The new fetch-timing instrumentation shipped in the previous commit
caught the surprise: GHA's pcap-vs-local profile is the *opposite*
of what local Mac measurements suggested.

  metric          local Mac    GHA Linux
  avg_request     399ms        70ms      ← network MUCH faster on GHA
  avg_body         50ms        20ms
  avg_parse       730µs        266ms     ← parse 365× SLOWER on GHA

Mechanism: `parse_json_off_runtime` dispatches to `rayon::spawn`,
and rayon's pool size is `num_cpus` (= 2 on GHA ubuntu-latest).
Bumping concurrency 64 → 256 queued 256 manifest parses behind 2
rayon workers — head-of-line blocking. avg_parse jumped from ~10ms
to 266ms wall, dragging p1 utoo wall from 3.10s up to 3.33s.
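
A hedged sketch of the rayon hop described above — the real `parse_json_off_runtime` in ruborist may differ in signature and error handling, but the dispatch shape is the point:

```rust
// Parse on rayon's global pool (sized num_cpus = 2 on GHA
// ubuntu-latest), bridging the result back to tokio via a oneshot.
// With 256 in-flight fetches, parse jobs wait FIFO in rayon's queue
// before their few ms of CPU work run — the head-of-line blocking
// described above.
async fn parse_json_off_runtime(
    mut body: Vec<u8>,
) -> Result<simd_json::OwnedValue, simd_json::Error> {
    let (tx, rx) = tokio::sync::oneshot::channel();
    rayon::spawn(move || {
        let _ = tx.send(simd_json::to_owned_value(&mut body));
    });
    // The fetch task awaits queue-wait + parse time here.
    rx.await.expect("rayon worker dropped the result channel")
}
```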

Restore manifest-bench
----------------------

Brought back `crates/manifest-bench` (originally landed in the
post-#2818 driver hunt, dropped in af714eb once #2818 graduated).
It's a single-binary HTTP-only fetch tool that strips out the
ruborist pipeline (no BFS, no dedup, no parse, no project cache,
no lockfile write) — fires `GET <registry>/<name>` in parallel
and reports the same diag shape as the new `p1-breakdown` lines.

Goal: separate the network ceiling from the resolver pipeline so
the next round of p1 experiments (parse offload, partial parse,
dedicated parse pool, etc.) can be evaluated against a stable
"pure network" baseline.

Knobs (unchanged from the original drop; sketched as a CLI surface below):
  --concurrency N    sweep without rebuilding utoo
  --reps N           run same workload back-to-back
  --single-version   use /<name>/latest (smaller bodies)
  --user-agent X     UA-fingerprint experiments
  --http1-only       H2 vs H1 toggle
  --accept X         override Accept header
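
For illustration, that knob surface expressed as a clap derive — a hedged sketch, not manifest-bench's actual argument parser:

```rust
use clap::Parser;

/// Hypothetical mirror of the flags listed above.
#[derive(Parser)]
struct Args {
    /// parallel in-flight GETs; sweep without rebuilding utoo
    #[arg(long, default_value_t = 64)]
    concurrency: usize,
    /// run the same workload back-to-back
    #[arg(long, default_value_t = 1)]
    reps: usize,
    /// use /<name>/latest (smaller bodies)
    #[arg(long)]
    single_version: bool,
    /// UA-fingerprint experiments
    #[arg(long)]
    user_agent: Option<String>,
    /// force HTTP/1.1 instead of negotiating H2
    #[arg(long)]
    http1_only: bool,
    /// override the Accept header
    #[arg(long)]
    accept: Option<String>,
}
```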

Same TLS stack as ruborist (rustls + aws-lc-rs, native roots).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…inux

build-linux now also builds + uploads `manifest-bench` when a phases
bench is going to run (label or dispatch). bench-phases-linux
downloads the binary and runs it after the regular phase-isolated
benchmark.

Sweep mirrors the original (#2818-era) wire-in:

  concurrency: 32 / 64 / 96 / 128 / 192 / 256  (HTTP/1.1, full manifest)
  protocol:    H1 vs H2-negotiate  (cap=128)
  endpoint:    full vs `/<name>/latest`  (cap=128, smaller bodies)
  UA:          default vs `Bun/1.2.21`  (cap=128)

Output goes to /tmp/pm-bench-output/manifest-bench-npmjs.log and
ships in the existing pm-bench-logs-linux artifact — no PR comment
surface (the headline phases bench comment stays the same).

Why now: the new ruborist `p1-breakdown` instrumentation showed
sum_parse on GHA can dominate when concurrency is bumped (256:
sum_parse 728s vs sum_request 193s). To attribute the bun-vs-utoo
gap on p1_resolve we need a "pure HTTP" baseline that strips out
ruborist's parse / BFS / dedup / lockfile path. manifest-bench is
that baseline: same TLS stack as ruborist (rustls + aws-lc-rs,
native roots), no resolver pipeline.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI fetch-breakdown on GHA (run 25562552058, conc=64) showed parse
queueing on rayon dominates the gap to manifest-bench's pure-HTTP
baseline:

  manifest-bench (pure HTTP, conc=64): 2.12s wall
  utoo p1 (full ruborist):             3.10s wall  ← +1.0s overhead
  ↑ sum_parse 95s vs sum_request 95s, parse 50% of work-time
  ↑ avg_parse 30ms wall vs ~5ms actual CPU — the 25ms extra is rayon
    queue wait

Mechanism: 64 concurrent tasks all dispatching parse to rayon's pool
(size = num_cpus = 2 on GHA). Queue depth grows to ~32 per worker.
Each parse waits 25ms+ in queue before running its 5ms of CPU work.

Round 1 fix: inline parse, drop the rayon hop. simd_json on a tokio
worker thread is fast (~5ms for 115KB JSON), and the tokio runtime's
cooperative budget naturally rebalances CPU across the 64 tasks.

Expected on next CI:
- avg_parse drops from 30ms wall → ~5-10ms wall (close to CPU-only)
- preload_wall drops from 5.4s → ~3.5-4s for cold runs
- p1 hyperfine wall drops from 3.10s → 2.3-2.5s, narrowing the gap
  to manifest-bench's 2.12s ceiling

If parse becomes the new bottleneck (CPU-bound), next round could
look at partial parse / lazy field access. If wall doesn't drop,
hypothesis is wrong and we look elsewhere (BFS, dedup, lockfile).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Round 1 (inline parse) reverted on data: GHA showed +0.37s p1
regression because parse blocked tokio runtime workers, dropping
eff_parallel 42 → 35 even though per-fetch work-time fell. avg_request
went up from 35ms → 52ms — symptomatic of socket reads being delayed
by the parsing task on the same worker.

  metric           round 0 (rayon)  round 1 (inline)
  p1 wall          3.27s            3.64s   ⚠️ +0.37s
  avg_parse        30ms (queued)    300µs   ✓
  avg_request      35ms             52ms    ⚠️ +17ms (worker contention)
  eff_parallel     42               35      ⚠️

Round 2 attempts the third option: `tokio::task::spawn_blocking` (sketched after the list below).

  - rayon's pool was too small (num_cpus = 2 on GHA) — 64 concurrent
    parses queued behind 2 workers, parse wall 30ms.
  - inline parse held tokio worker hostage during simd_json call,
    starving in-flight socket reads.
  - tokio's blocking pool has a much larger default cap (512), so 64
    concurrent parses never queue. Unlike rayon there's no contention
    with the install path's parallel-write rayon usage. Unlike inline
    the tokio runtime workers stay free to drive network I/O.
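
A minimal sketch of the round-2 shape, with illustrative names (the real call site wraps more error handling):

```rust
// Parse on tokio's blocking pool: its default cap is 512 threads, so
// 64 concurrent parses dispatch immediately instead of queueing, and
// the runtime workers stay free to drive socket I/O.
async fn parse_json_blocking(
    mut body: Vec<u8>,
) -> Result<simd_json::OwnedValue, simd_json::Error> {
    tokio::task::spawn_blocking(move || simd_json::to_owned_value(&mut body))
        .await
        .expect("blocking parse task panicked")
}
```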

Expected on next CI:
  - avg_parse drops to ~5-10ms wall (close to CPU floor, no queue)
  - avg_request stays ~35ms (workers free for I/O)
  - eff_parallel returns to ~50, possibly higher
  - p1 wall drops toward manifest-bench's 2.10s ceiling

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Round 2 moved parse_json_off_runtime off rayon (-0.11s p1). But
fetch-breakdown still showed avg_request 41ms vs round 0's 35ms,
hinting at a second source of rayon contention.

Found it: `extract_core_version_off_runtime` is also on
`rayon::spawn`. On npmjs.org's `!supports_semver` path EVERY fetch
resolves through `resolve_via_full_manifest`, which fetches the
full packument once per package name (deduped via inflight_full)
and then calls `extract_core_version_off_runtime` per (name, spec)
to materialize the chosen version into a `CoreVersionManifest`.

So per fetch we hit rayon TWICE — once for the JSON parse (round 2
moved to spawn_blocking), and once for `get_core_version` (still on
rayon). The second hop has the same head-of-line blocking signature
as the first: 64 concurrent resolves dispatching to a 2-thread
rayon pool.

Round 3: move extract_core_version_off_runtime to spawn_blocking
for the same reasons. The work is JSON lazy-reparse (`raw_json`
sub-tree decoding) — genuinely blocking, well-suited for tokio's
blocking pool.

Expected: utoo p1 wall drops further toward manifest-bench's 2.10s
ceiling. avg_request should fall back from 41ms → ~35ms (rayon
contention removed from the fetch task's await chain).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two changes for round 4 of p1 optimization:

1. Revert `extract_core_version_off_runtime` from spawn_blocking back
   to rayon::spawn (round 3). Within-run measurement showed +0.42s
   regression vs utoo-next (round 2 was +0.11s). Likely cause: this
   function is called per (name, spec), so multi-spec packages call
   it 2-5x per fetch. spawn_blocking's per-dispatch overhead exceeds
   rayon queue savings at this multiplier.

2. Add `serialize_us` and `cache_export_us` to the p1-breakdown line
   so we can attribute the remaining gap. Currently:

     manifest-bench wall:     2.10s   (pure HTTP ceiling)
     utoo p1 wall (round 2):  3.16s
     gap:                     1.06s

   We have:
     preload_wall  ≈ 2.7s   (logged)
     bfs_wall      ≈ 0.3s   (logged)
     serialize_us  ?
     cache_export_us ?      ← suspected: full manifest deep-clone
                              into ProjectCacheData for ~2730 entries

   Next round will have data to choose between attacking serialize,
   cache export, or the BFS loop body.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Round 4 measured serialize_us = 15ms and cache_export_us = 34ms — both
tiny — confirming the 1s gap from manifest-bench (utoo p1 = 3.16s vs
mb wall = 2.10s) is not in post-build code.

Per-fetch math also pointed at main-loop bookkeeping:

  manifest-bench: eff_parallel = 52 (sum_work 111s / wall 2.14s)
  utoo preload  : eff_parallel = 43 (sum_work 120s / wall 2.85s)

Same conc=64 cap, but utoo loses 9 effective slots — most likely
the main loop's serial bookkeeping (dedup hash insert, format!
key, extract_transitive_deps, queue push, 3-4 receiver events)
holds the flow between futures.next() returning and the next
fetch dispatch.

This commit splits the main loop into two timed segments:

  preload_loop_dispatch_us: time spent in the `while in_flight <
                            concurrency` block — popping pending,
                            dedup check, futures.push.
  preload_loop_result_us:   time spent processing each completed
                            future — extract_transitive_deps,
                            pending.extend, on_manifest.

If dispatch+result sum approaches preload_wall, the main loop is
the bottleneck and we need to either (a) split processing onto a
dedicated task, or (b) use unbounded futures with a downstream
consumer. If they're small, the gap is elsewhere (per-task
overhead in resolve_package's inflight gates).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Round 5 main-loop instrumentation showed the preload main loop
itself is fast (15-25ms total dispatch+result). The 0.8s gap from
manifest-bench's 2.10s wall lives INSIDE the spawned fetch tasks.

Per-fetch wall (warm runs):
  measured: avg_request 30ms + avg_body 6ms + avg_parse 2.5ms = ~38ms
  derived:  preload_wall 2.4s × eff_parallel(43) / 2730 = 38ms
  delta:    ~12ms unaccounted per task

That 12ms is `extract_core_version_off_runtime` queueing on rayon's
2-thread pool. extract is called per (name, spec) — for ant-design
that's ~3000+ calls. With pool=2 and 64 concurrent fetches each
dispatching extract, the queue depth grows; each task waits its
turn before extract returns.

Bump rayon pool to `max(num_cpus, 8)` for non-Windows. Sizing the
pool above the CPU count for short blocking JSON ops (parse + extract)
replaces FIFO queueing with parallel dispatch. Real CPU contention
is bounded by num_cpus (the kernel scheduler still gates), so the
extra pool threads just hold ready-to-run dispatches in parallel
rather than serialised in a queue.
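
A hedged sketch of the sizing change (the real initialization lives in ruborist's startup path; `init_rayon_pool` is an illustrative name):

```rust
// max(num_cpus, 8): oversize the pool relative to the core count so
// short blocking JSON ops (parse + extract) dispatch in parallel
// instead of queueing FIFO behind num_cpus workers. Real CPU
// contention stays bounded by the kernel scheduler.
#[cfg(not(windows))]
fn init_rayon_pool() {
    let cpus = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    rayon::ThreadPoolBuilder::new()
        .num_threads(cpus.max(8))
        .build_global()
        .expect("rayon global pool initialized twice");
}
```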

Why not just spawn_blocking (round 3 attempt): tokio's blocking pool
defaults to 512 threads, but its per-dispatch overhead was higher
than rayon's even when queueing — round 3 regressed by 0.5s.

Expected: extract queue wait drops from ~12ms to ~1-2ms wall, p1
preload_wall narrows toward manifest-bench's 2.10s.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds `BuildDepsOptions::skip_preload` so callers without a pipeline
consumer (utoo deps / package-lock-only) can drop the up-front
preload phase entirely. BFS now batches prefetch per level across
the whole frontier, then runs the existing sequential
process_dependency walk against the warmed cache.

For install paths (Context::pipeline_deps_options), skip_preload
stays false so PackageResolved events still feed the
download/clone pipeline.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds resolver::fast_preload, a manifest-bench-style flat
FuturesUnordered over service::manifest::fetch_full_manifest. It
warms MemoryCache (both full_manifests and version_manifests slots)
synchronously after each fetch, so the BFS phase is pure cache-hit:
no rayon hop on extract_core_version, no OnceMap gates, no
DiskManifestStore writes, no PackageResolved events.

Wired into service::api::build_deps: when the caller asks to skip
preload (Context::build_deps for `utoo deps`) and there's no warm
project cache, fast_preload runs ahead of build_deps_with_config.
Install paths still go through preload_manifests so the pipeline
keeps its early-start signal.

Also reverts the per-level prefetch I added in 394f6c9 — with
fast_preload pre-warming everything, BFS doesn't need its own
prefetch wave.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
v1 of fast_preload called settle_spec inline on the tokio worker —
each settle ran simd_json::to_borrowed_value over the full
manifest's raw bytes (5–10ms per spec) right on the runtime
thread. CI showed it starved sibling fetches: avg_request rose
+3ms, avg_parse jumped 5→11ms, p1_resolve regressed +1.0s vs the
preload+BFS baseline (4.0s vs 3.0s).

Fix: route every settle through extract_core_version_off_runtime
(the same rayon::spawn helper the BFS path uses), and merge fetch
and settle completions into a single FuturesUnordered so
backpressure on either side throttles the other. Sibling specs
that arrived during a fetch are now stashed by name (HashMap, not
linear scan), then dispatched as their own settle futures when
the fetch lands.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Standalone manifest-bench HTTP-only sweep (npmjs, h1) shows wall
bottoming at concurrency=96 (1817ms) — earlier 256 regression was
caused by rayon-queued parses behind 2 workers, no longer relevant
since fetch parse is on spawn_blocking and settle is rayon-dispatched
off the runtime.

fast_preload's wave-shaped transitive walk currently runs at
eff_parallel ~35 against the 64 cap because pending refills lag
settles; raising the cap to 96 gives headroom for sustained
in-flight on the deep waves without crossing the npmjs per-IP
tail-latency cliff that conc 128+ trips.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… path

`UnifiedRegistry::resolve_version_manifest`'s first cache check
(service/registry.rs:347) keys on `(name, spec)` — the original spec
string the caller passed, e.g. `^4.0.0`. settle_future was only
populating `(name, resolved_version)` (e.g. `4.17.21`), so on every
BFS edge for `lodash@^4.0.0`-style specs the warm path missed and
fell into the OnceMap inflight gate + `resolve_via_full_manifest`
re-walk before recovering the manifest from the
`(name, resolved_version)` slot we'd already set.

Now settle writes both keys so BFS hits the early-return at
service/registry.rs:347 with no further dispatch. Saves ~1
OnceMap+resolve_target_version round-trip per unique (name, spec)
the BFS encounters (≈3000 calls on ant-design-x).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previous fast_preload (v2) dispatched primary settles to rayon as
separate FuturesUnordered futures. CI breakdown showed
eff_parallel ~44 against the conc=96 cap — the wave-shaped
transitive walk was held back by settle dispatch RTT: each fetch
landed → primary settle queued → settle popped → only then did
`pending` get transitive deps and fill the next dispatch wave.

v3 folds the primary settle into the fetch task itself via
`tokio::task::spawn_blocking`. The fetch task does the network
round-trip and the primary version-extract on the same blocking
pool slot, then returns with the resolved CoreVersionManifest
attached. Main loop pulls one Fetched event, immediately extends
`pending`, no second `next().await` to wait through the queue.

Sibling specs (rare; same name, different range) still go through
the rayon settle_future path so the primary path stays lean.

Carries primary_spec through FastEvent so the fused path can
populate both `(name, primary_spec)` and `(name, resolved_version)`
cache slots — preserves the 6455852 BFS fast-path win.

FetchOutcome enum replaces by-value FetchManifestResult to avoid a
full FullManifest clone (HashMap+Vec) per fetch event.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…json

The fast_preload hot path was paying TWO simd_json passes per
manifest:
  1. fetch_full_manifest's parse_json_off_runtime did a typed
     simd_json::serde::from_slice<FullManifest> (envelope + IgnoredAny
     visitor on `versions` keys, ~3-5ms on a 100KB body).
  2. Primary settle re-parsed the same raw bytes with
     simd_json::to_borrowed_value (~5-10ms) to extract one version's
     subtree.

Both passes went through simd_json's Tape constructor — duplicated
work. CI showed avg_parse 5-7ms × 2700 fetches = 14-19s of CPU sum
on 2-core GHA, where the spawn_blocking pool's overlapping schedule
masked some of the cost but not all.

Adds `service::manifest::fetch_full_manifest_with_settle`: same HTTP
+ retry + ETag machinery as `fetch_full_manifest`, but the parse
step does ONE `to_borrowed_value` and extracts:
  * envelope (`name`, `dist-tags`, `versions` keys) into FullManifest
    manually (no typed serde), and
  * the resolved version's subtree as a typed CoreVersionManifest
    (serde-deserializing that single subtree via the borrowed value).

fast_preload's fetch task switches to this entry point — primary
settle is now a free byproduct of the fetch parse, not a separate
`to_borrowed_value` pass. Sibling specs (same name, different
range) still go through the rayon settle_future path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After 671ac98's combined-parse fetch path eliminated the
double simd_json pass, the spawn_blocking pool's contention
ceiling rose enough that bumping concurrency past 96 no longer
queues parses behind 2-core CPU. manifest-bench's most recent
good-network sweep on GHA showed conc=128 hitting 1500ms vs
conc=96 at 1566ms — small but real headroom for fast_preload's
late-wave saturation now that initial waves fill faster.

Risk: on slower-network runs (npmjs per-IP throttle), conc=128
widens p99. Earlier conc-sweep data was mixed — accepting that
variance for the average-case improvement.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
542d7f1's conc=128 bench landed in a slow-network run (mb best
2010ms vs 1500ms in the prior good-network run; bun also bumped
to 2.14s vs 1.83s). Adjusted gap to mb best stayed flat (~700ms
either way), so conc=128 didn't beat 96 across runs.

Picking 96 as the conservative default: at-or-near best on every
GHA run we've measured, never the worst, and leaves headroom for
npmjs's per-IP throttling to absorb without compounding p99.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…preload)

Adds resolver::mb_resolve module + service::build_deps_mb entry point
as a parallel-track alternative to fast_preload, structured to
match manifest-bench's main-loop shape as closely as correctness
allows. Hypothesis under test: fast_preload's eff_parallel caps at
~50/96 because the FastEvent enum match + cache writes + sibling
deferred bookkeeping in the main loop competes with tokio runtime
workers for the 2 CPU cores on GHA, stalling socket I/O drive.

mb_fetch pushes ALL per-fetch work into the spawned future itself
(including cache writes), so the main loop is reduced to:

  while let Some(deps) = futs.next().await {
      pending.extend(deps);
      refill_to_cap(...);
  }

Sibling specs (multiple ranges on same package) are NOT deferred at
queue level — racing fetches for the same name both proceed. The
race converges naturally: first fetch to land populates
full_manifests, subsequent racers find the cache hit on entry and
short-circuit to a sibling-style settle. Wastes ~5-50 network
requests in real workloads but eliminates the HashMap probe + drain
overhead from the hot loop.
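
A fuller, self-contained sketch of the loop shape above (types simplified; `fetch_one` stands in for GET + parse + cache writes):

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use std::collections::HashSet;

async fn fetch_one(_name: String) -> Vec<String> {
    // stand-in for GET <registry>/<name> + parse + cache write,
    // returning the package's transitive deps
    Vec::new()
}

async fn mb_style_loop(seed: Vec<String>, cap: usize) {
    let mut seen: HashSet<String> = seed.iter().cloned().collect();
    let mut pending: Vec<String> = seed;
    let mut futs = FuturesUnordered::new();
    loop {
        // refill to the concurrency cap before awaiting a completion
        while futs.len() < cap {
            match pending.pop() {
                Some(name) => futs.push(fetch_one(name)),
                None => break,
            }
        }
        match futs.next().await {
            // one completed fetch hands back its transitive deps
            Some(deps) => pending.extend(deps.into_iter().filter(|d| seen.insert(d.clone()))),
            // nothing in flight and nothing pending: walk complete
            None => break,
        }
    }
}
```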

Wired in via UTOO_RESOLVE=mb env var:
- Context::build_deps (utoo deps) routes through build_deps_mb
- pipeline::resolve_with_pipeline (utoo install) also routes
  through it; pipeline workers still start but don't pipeline
  during fetch (mb_fetch emits no PackageResolved events) — install
  becomes phase-sequential, useful for resolve-phase A/B.

bench script enables UTOO_RESOLVE=mb so CI measures the new path
against existing baselines (utoo-next/utoo-npm/bun ignore the env
var). Comment the export line to A/B back against fast_preload.

Old fast_preload + UnifiedRegistry paths untouched.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
v1/v2 ran parse work in spawn_blocking inside each fetch future,
which competed with tokio runtime workers for the 2 GHA cores. CI
showed eff_parallel capped at 47/96 vs manifest-bench standalone's
75/96 on the same box. Hypothesis: parse CPU starves socket drive.

v3 separates the two phases:

* PHASE 1 — `mb_style_pure_fetch` is a structural copy of
  `manifest-bench`'s main loop: future body does ONLY GET + body
  recv, refill 1-for-1 on completion. Zero per-future CPU work, so
  tokio runtime workers retain full CPU for socket drive.

* PHASE 2 — bulk rayon par_iter parse: for each body, parse
  `FullManifest` envelope via simd_json::to_borrowed_value, resolve
  every queued spec for this name against the just-parsed manifest,
  populate cache slots, collect transitive deps. Runs off the
  tokio runtime entirely (spawn_blocking → rayon par_iter); see the
  sketch below.

Phases alternate until pending exhausted. Typical project: 3-5
iterations as the dep tree fans out wave by wave.
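
A hedged sketch of the phase-2 bulk parse (simplified to return version keys; the real pass also settles queued specs, populates cache slots, and collects transitive deps):

```rust
use rayon::prelude::*;
use simd_json::BorrowedValue;

// Call from inside tokio::task::spawn_blocking so the par_iter never
// runs on a tokio runtime worker.
fn phase2_parse(bodies: Vec<Vec<u8>>) -> Vec<Vec<String>> {
    bodies
        .into_par_iter()
        .map(|mut body| {
            // one simd_json tape pass per body, entirely off the runtime
            match simd_json::to_borrowed_value(&mut body) {
                Ok(BorrowedValue::Object(root)) => match root.get("versions") {
                    Some(BorrowedValue::Object(versions)) => {
                        versions.keys().map(|k| k.to_string()).collect()
                    }
                    _ => Vec::new(),
                },
                _ => Vec::new(),
            }
        })
        .collect()
}
```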

The point of the split is the `phase1_http_wall` trace — measured
in isolation from any parse work, it should match manifest-bench's
standalone wall (~1.5-2.0s for 2733 names @ conc=96). If it does,
the remaining gap to mb is concentrated in phase 2 work, which is
inherent to discovering transitive deps from a non-flat name list.

Tracing per iteration:
  p1-breakdown mb_fetch iter=N phase1_http_wall=Xms n=Y bytes=Z
  p1-breakdown mb_fetch iter=N phase2_parse_wall=Xms settles=Y new_transitives=Z
  p1-breakdown mb_fetch total_wall=Xms iters=Y

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
v3 dropped the (name, spec) HashSet from v1/v2 thinking name-level
dedup via done_names was sufficient. It wasn't: sibling-settle's
extract_transitive can re-introduce specs we've already settled
(peer/optional dep cycles trivially trigger this), so the outer
while-loop never terminated.

CI 25589397823 hung on `Run phase-isolated benchmark · npmjs` for
~25 min before being cancelled — the bench's first utoo p1_resolve
hyperfine run got stuck in an infinite settle loop.

Fix: maintain `seen_specs: HashSet<(String, String)>` across all
iterations; filter both initial seed and every wave of new
transitives through it before extending pending_specs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
New crate `crates/preload-bench/` is a fully-standalone bench that:
* Uses the SAME HTTP setup as `manifest-bench` (own reqwest::Client
  built per rep with aws-lc-rs TLS, pool_max_idle_per_host(256), no
  proxy, default DNS, no retry, h1_only) — sketched after this list.
* Discovers names by walking transitive deps from a package.json
  root — instead of consuming a flat name list like manifest-bench.
* Per-future does GET + body recv + spawn_blocking parse → returns
  transitive deps → main loop refills on completion.
* No dependency on ruborist or any utoo internals (own simd_json,
  own dedup, own everything).
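
A hedged sketch of that client construction, assuming a rustls 0.23-era API with the aws-lc-rs provider, rustls-native-certs 0.7, and anyhow for error plumbing (the exact builder calls in preload-bench may differ):

```rust
use std::sync::Arc;

fn build_client() -> anyhow::Result<reqwest::Client> {
    let mut roots = rustls::RootCertStore::empty();
    for cert in rustls_native_certs::load_native_certs()? {
        roots.add(cert)?;
    }
    let tls = rustls::ClientConfig::builder_with_provider(Arc::new(
        rustls::crypto::aws_lc_rs::default_provider(),
    ))
    .with_safe_default_protocol_versions()?
    .with_root_certificates(roots)
    .with_no_client_auth();
    Ok(reqwest::Client::builder()
        .use_preconfigured_tls(tls) // same TLS stack as manifest-bench
        .pool_max_idle_per_host(256) // keep connections hot within a rep
        .no_proxy() // no proxy, default DNS
        .http1_only() // h1_only
        .build()?)
}
```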

The point: prove (or disprove) that a fully ruborist-independent
streaming preload can hit standalone manifest-bench's wall on the
same workload. ruborist's path runs at ~2.18s for ant-design's
~2700 names; manifest-bench standalone runs the same workload at
~1.6s. The gap could be in any number of things — DNS layer, retry,
pool config, parse-CPU contention, registry single-flight gates.
preload-bench eliminates all of those simultaneously so we can read
the wall directly.

Wired into bench-phases-linux: builds + uploads preload-bench
binary alongside manifest-bench, then runs a conc=64/96/128 sweep
against the same project after the standalone manifest-bench sweep.

bench script reverts UTOO_RESOLVE=mb so utoo runs default
fast_preload — gives a third datapoint (utoo wall on integrated
path) alongside manifest-bench (HTTP-only ceiling) and preload-bench
(streaming-with-walk ceiling).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…y path

Step 1 of staged service-layer ablation. Rewrites mb_resolve as a
fully self-contained streaming preload mirroring preload-bench's
loop shape verbatim, but living inside ruborist so it can populate
MemoryCache for the BFS phase.

Bypasses every other ruborist service layer:
  * service::http::get_client — own reqwest::Client built per call,
    no global LazyLock, no shared_resolver dns layer, no
    connect_timeout, pool_max_idle_per_host(256).
  * service::manifest::fetch_full_manifest_with_settle — own GET +
    body.bytes() + spawn_blocking(simd_json::to_borrowed_value),
    no RetryIf, no FETCH_TIMINGS.
  * service::registry::UnifiedRegistry — no OnceMap, no
    ManifestStore, no EventReceiver.

Only service::* touched is MemoryCache writes (DashMap inserts) so
BFS has data to read from.

PM is unaware: dispatch happens entirely inside
service::api::build_deps when skip_preload=true and no warm cache.
Removes the previous UTOO_RESOLVE=mb env-var gating from
pm::helper::ruborist_context::Context::build_deps and
pipeline::resolve_with_pipeline. Removes the now-unused
service::api::build_deps_mb sibling entry point.

Expected: utoo p1_resolve drops from ~2.67s toward preload-bench's
~2.57s (or better since ruborist fetches fewer names than
preload-bench). The remaining gap to mb's ~1.99s would isolate
incremental layer effects we add back next:
  - tokio runtime config / cooperative scheduling
  - reqwest::Client provider differences (TLS, DNS)
  - cache layer (DashMap vs DiskManifestStore reads on the cold path)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
elrrrrrrr and others added 3 commits May 9, 2026 14:42
…mb_fetch

Step 2 of staged service-layer ablation. Targets the two gaps
left after step 1:

1. mb_fetch (in ruborist): 2300ms / 2735 = 0.84 ms/name
   manifest-bench (standalone): 2010ms / 2735 = 0.72 ms/name
   ~290ms gap on same workload, same conc.

2. BFS phase: 305ms wall against a fully-warm MemoryCache.
   Origin unclear — could be graph mutations, repeated cache
   lookups via the inflight gate, or event dispatch.

Changes:

* TLS provider — adds rustls (aws-lc-rs) + rustls-native-certs to
  non-wasm-non-macos targets. mb_resolve's `build_mb_client` now
  uses `use_preconfigured_tls(aws_lc_rs)` matching
  preload-bench / manifest-bench exactly. The reqwest crate's
  `rustls-tls-native-roots` feature on Linux still bundles ring
  for service::http's global client; the two providers coexist.

* mb_fetch instrumentation — per-future `wall_us` (network +
  parse + cache writes) and `net_us` (network only) reported in
  the trace line as `eff_par_full`, `eff_par_net`, `avg_wall`,
  `avg_net`. Same shape as manifest-bench's `avg_conc` so we can
  compare directly.

* BFS instrumentation — splits run_bfs_phase wall into:
    - `collect_us`: collect_unresolved_edges sum
    - `resolve_us`: process_dependency .await sum
    - `event_us`: post-resolve event dispatch (Resolved /
      PackagePlaced / Reused / Skipped) sum
  Plus `levels` and `edges` counters. Trace line lets us
  attribute the 305ms.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Step 3 of staged service-layer ablation. Targets the 305 ms BFS
phase observed against a fully-warm MemoryCache — 100 % attributed
to process_dependency.await sum (graph mutations) per d9fb207's
new bfs instrumentation.

Adds:
* `process_dependency_with_resolved` in builder.rs — sync variant
  of process_dependency for the registry-resolved case. Skips
  spec-routing (only Registry handled), skips resolve_registry_dep
  (resolved is the parameter), skips override re-resolve. Reuses
  existing helpers (find_compatible_node, create_package_node,
  add_edges_from, mark_dependency_resolved, update_node_type_from_edge).
* `mb_fetch_with_graph` in mb_resolve.rs — folded streaming preload
  + graph build. Each fetch result triggers inline
  process_dependency_with_resolved for every parent edge waiting
  on (name, spec). New nodes' edges feed back into pending /
  edge_targets, so the walk continues streaming-style.
  CPU work (graph mutations, ~305 ms total) overlaps with network
  IO (mb_fetch's wall ~2.4 s).

Wires `service::api::build_deps` to use mb_fetch_with_graph for
the lockfile-only path (skip_preload + cold cache). The
follow-up build_deps_with_config still runs to handle any
non-registry edges left unresolved (workspace / git / http /
file); on registry-only workloads it's near no-op.

Install path unchanged — pipeline_deps_options keeps preload +
PackageResolved early-start signal for tgz download.

Expected: utoo p1 wall drops from ~2.76 s toward mb_fetch wall +
serialize ≈ 2.4-2.5 s on good network. Tracing line:
  p1-breakdown mb_fetch_with_graph wall=Xms ok=N fetch=N
  settle=N sum_wall=Xms sum_net=Xms sum_graph=Xms avg_net=Xus
  eff_par_full=N.N eff_par_net=N.N unresolved_targets=N

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
c02bb15 had unresolved_targets=583 in trace — `enqueue_node_edges`
was unconditionally pushing (parent, edge_id) into edge_targets
without checking if the (name, spec) was already cached. When a
later transitive's edge referenced an already-fetched (name, spec),
no fetch result would land to drain that bucket — the parent edges
sat unresolved, potentially missing packages from the lockfile.

Fix: enqueue_node_edges now checks cache.get_version_manifest
first. Cache hit → process_dependency_with_resolved inline (with a
work_stack to recurse into newly-Created nodes' edges). Cache
miss → original behavior (stash in edge_targets, push to pending).

Side effect: more inline graph mutation work in the seed phase
(workspace + root edges that hit warm cache from previous specs in
the same root). Should reduce the number of fetch-result events
that need to do graph mutations downstream, since orphan edges no
longer accumulate.

Targets the correctness bug from c02bb15 trace; perf impact TBD.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
elrrrrrrr and others added 3 commits May 9, 2026 16:09
The 700ms gap between utoo p1 (folded mb_fetch_with_graph) and
manifest-bench standalone needs network-layer evidence. Same
workload, same conc, same network → why does utoo wall trail by
700ms when per-fetch latency is roughly matched (utoo avg_net = 53µs
vs mb p50 ≈ 40µs)?

Hypotheses to test via pcap diff:
* Fewer concurrent TCP streams in flight at any moment (utoo's
  main loop CPU steals tokio dispatch capacity → in-flight count
  drops below conc cap)
* More TLS handshakes (utoo's connection pool isn't reusing as
  effectively as mb's per-rep fresh client)
* Larger inter-packet gaps per stream (utoo's runtime pauses mid
  download)
* Different concurrent-stream-time profile (wave shape)

Adds two captures at end of pm-bench-pcap.sh:
  manifest-bench-c96 — flat lockfile-derived names @ conc=96
  preload-bench-c96  — transitive walk @ conc=96 (matches utoo's
                       walk shape, but no graph build)

Each captured with the same tcpdump + iostat as the existing
utoo / utoo-next / bun captures. analyze_pcap globs *.pcap so the
new files get the same TCP signal extraction (zwin / retx /
dup_ack / per-stream gap p50/p99/max / distinct streams).

Workflow: downloads manifest-bench-linux-x64 +
preload-bench-linux-x64 artifacts (built by build-linux's
benchmark-label conditional steps) into the pm-bench-pcap-linux
job env so pm-bench-pcap.sh can find them.

Trigger: workflow_dispatch with target=pm-bench-pcap.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previous pm-bench-pcap artifact was 2GB (raw .pcap files for every
PM × phase × bench), making the round-trip download impractical
just to read JSON metrics. Adds a separate `pm-bench-pcap-summaries`
artifact containing only the *.json / *.log / *.iostat.txt / dns.txt
files — KB scale, downloads in seconds.

Raw pcap artifact is preserved for cases where we want to re-run
tshark with different filters.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The pm-bench-pcap artifact is ~2 GB (pcap binaries dominate). gh
run download keeps timing out before completion. Two fixes:

1. New `pm-bench-pcap-summaries` artifact uploads only the JSON
   summaries + .log + iostat.txt + dns.txt (small, fast download).
   The full pcap artifact stays for deep inspection when needed.

2. End of pm-bench-pcap.sh prints a tab-separated comparison
   table (name, wall_s, packets, streams, zwin, retx, dup_ack,
   gap_p99_us, gap_max_us) to stdout, so the data is visible in
   the CI run log without downloading anything.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@github-actions (Bot) commented May 9, 2026

📊 pm-bench-phases · e9426db · linux (ubuntu-latest)

Workflow run — ant-design

PMs: utoo (this branch) · utoo-npm (latest published) · bun (latest)

npmjs.org

p0_full_cold

| PM | wall | ±σ | user | sys | RSS | pgMinor |
| --- | --- | --- | --- | --- | --- | --- |
| bun | 9.80s | 0.86s | 10.40s | 10.70s | 719M | 338.8K |
| utoo-next | 8.54s | 0.45s | 10.71s | 12.60s | 998M | 126.3K |
| utoo-npm | 8.61s | 0.24s | 11.08s | 12.76s | 1.34G | 179.0K |
| utoo | 8.64s | 0.17s | 10.79s | 12.95s | 1.36G | 187.1K |

| PM | vCtx | iCtx | netRX | netTX | cache | node_mod | lock |
| --- | --- | --- | --- | --- | --- | --- | --- |
| bun | 17.9K | 18.8K | 1.20G | 7M | 1.89G | 1.77G | 1M |
| utoo-next | 136.5K | 106.6K | 1.17G | 5M | 1.73G | 1.73G | 2M |
| utoo-npm | 131.1K | 91.6K | 1.17G | 5M | 1.73G | 1.72G | 2M |
| utoo | 139.6K | 133.8K | 1.17G | 6M | 1.73G | 1.72G | 2M |

p1_resolve

| PM | wall | ±σ | user | sys | RSS | pgMinor |
| --- | --- | --- | --- | --- | --- | --- |
| bun | 2.24s | 0.09s | 3.94s | 1.10s | 481M | 181.1K |
| utoo-next | 3.44s | 0.19s | 5.58s | 2.14s | 609M | 83.3K |
| utoo-npm | 3.31s | 0.02s | 5.54s | 2.08s | 609M | 82.8K |
| utoo | 2.94s | 0.04s | 5.46s | 1.08s | 1010M | 140.6K |

| PM | vCtx | iCtx | netRX | netTX | cache | node_mod | lock |
| --- | --- | --- | --- | --- | --- | --- | --- |
| bun | 11.4K | 3.8K | 203M | 3M | 107M | - | 1M |
| utoo-next | 77.2K | 124.3K | 201M | 3M | 7M | 3M | 2M |
| utoo-npm | 76.6K | 122.2K | 201M | 3M | 7M | 3M | 3M |
| utoo | 47.9K | 6.9K | 202M | 3M | - | 3M | 3M |

p3_cold_install

| PM | wall | ±σ | user | sys | RSS | pgMinor |
| --- | --- | --- | --- | --- | --- | --- |
| bun | 6.70s | 0.07s | 6.33s | 10.15s | 630M | 209.9K |
| utoo-next | 7.11s | 1.82s | 5.04s | 10.93s | 502M | 61.0K |
| utoo-npm | 7.51s | 1.20s | 5.48s | 11.49s | 875M | 115.8K |
| utoo | 6.02s | 0.11s | 5.18s | 10.75s | 617M | 78.0K |

| PM | vCtx | iCtx | netRX | netTX | cache | node_mod | lock |
| --- | --- | --- | --- | --- | --- | --- | --- |
| bun | 6.3K | 7.5K | 1.00G | 4M | 1.78G | 1.78G | 1M |
| utoo-next | 107.0K | 51.5K | 1000M | 3M | 1.72G | 1.72G | 3M |
| utoo-npm | 113.1K | 71.6K | 999M | 3M | 1.72G | 1.72G | 3M |
| utoo | 94.1K | 75.0K | 1000M | 2M | 1.72G | 1.72G | 3M |

p4_warm_link

| PM | wall | ±σ | user | sys | RSS | pgMinor |
| --- | --- | --- | --- | --- | --- | --- |
| bun | 3.51s | 0.04s | 0.19s | 2.42s | 139M | 32.3K |
| utoo-next | 2.59s | 0.27s | 0.50s | 3.80s | 79M | 18.2K |
| utoo-npm | 2.44s | 0.15s | 0.52s | 3.87s | 84M | 19.6K |
| utoo | 2.35s | 0.05s | 0.48s | 3.81s | 81M | 18.0K |

| PM | vCtx | iCtx | netRX | netTX | cache | node_mod | lock |
| --- | --- | --- | --- | --- | --- | --- | --- |
| bun | 310 | 21 | 43K | 30K | 1.88G | 1.76G | 1M |
| utoo-next | 41.6K | 16.9K | 306K | 10K | 1.72G | 1.72G | 2M |
| utoo-npm | 46.7K | 21.0K | 322K | 18K | 1.72G | 1.72G | 2M |
| utoo | 43.0K | 19.5K | 306K | 11K | 1.72G | 1.72G | 2M |

npmmirror.com: no output captured.
