cache: reduce contention in load_or_fetch (#20)
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: df8d319441
    let should_remove = in_flight
        .get(cache_key)
        .map(|current| Arc::ptr_eq(current, slot) && Arc::strong_count(slot) == 2)
        .unwrap_or(false);
Drop _slot_guard before pruning IN_FLIGHT_LOADS
At both cleanup call sites in load_or_insert(), _slot_guard and slot are still alive, so Arc::strong_count(slot) is at least 3 here (the map entry, the local slot, and the OwnedMutexGuard). That makes should_remove permanently false, so every distinct cache_key leaves an Arc<Mutex<()>> behind in IN_FLIGHT_LOADS even after the load completes. In a long-lived service that sees many one-off bounding boxes, this side map will grow without bound.
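A minimal sketch of the fix the review suggests, using std sync primitives rather than tokio's (the `InFlight` alias and `cleanup` function are illustrative names, not the PR's actual code). The point is ordering: every reference to the slot other than the map entry and the local `slot` must be gone before the strong-count check, otherwise `should_remove` can never be true. Since std's `MutexGuard` borrows rather than owning an `Arc`, an extra clone below stands in for the still-alive `OwnedMutexGuard` from the report:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Illustrative stand-in for IN_FLIGHT_LOADS: one Arc<Mutex<()>> per cache key.
type InFlight = Mutex<HashMap<String, Arc<Mutex<()>>>>;

// Prune the per-key slot after a load completes. Call this only once the
// guard on `slot` has been dropped: the remaining strong references are then
// the map entry and the local `slot`, so a count of 2 means no other caller
// is waiting on this key and the entry can be removed.
fn cleanup(in_flight: &InFlight, cache_key: &str, slot: &Arc<Mutex<()>>) {
    let mut map = in_flight.lock().unwrap();
    let should_remove = map
        .get(cache_key)
        .map(|current| Arc::ptr_eq(current, slot) && Arc::strong_count(slot) == 2)
        .unwrap_or(false);
    if should_remove {
        map.remove(cache_key);
    }
}

fn main() {
    let in_flight: InFlight = Mutex::new(HashMap::new());
    let slot = Arc::new(Mutex::new(()));
    in_flight
        .lock()
        .unwrap()
        .insert("bbox".to_string(), Arc::clone(&slot));

    // An extra clone models a still-alive guard holding the Arc: the count
    // is 3, so the entry is (correctly) left in place.
    let extra = Arc::clone(&slot);
    cleanup(&in_flight, "bbox", &slot);
    assert!(in_flight.lock().unwrap().contains_key("bbox"));

    // Once that reference is dropped, the count is 2 and the entry is pruned.
    drop(extra);
    cleanup(&in_flight, "bbox", &slot);
    assert!(!in_flight.lock().unwrap().contains_key("bbox"));
}
```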
Summary
load_or_fetch was still holding the global cache write lock across disk checks, cache-file reads, Overpass fetches, and cache-file writes. That serialized unrelated regions and also left same-key duplicate work underspecified.
This change refactors the load path to use a double-checked insert with per-key in-flight deduplication.
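The double-checked, per-key pattern described above can be sketched with std primitives (the `Cache` struct, its field names, and the blocking `fetch` closure are assumptions for illustration; the real code presumably uses async locks and an Overpass client). A cache miss takes a per-key mutex from a side map, re-checks the cache, and only then fetches, so the global cache lock is never held across the fetch and unrelated keys proceed in parallel:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Illustrative cache: `entries` is the real cache; `in_flight` holds one
// Arc<Mutex<()>> per key that is currently being loaded.
struct Cache {
    entries: Mutex<HashMap<String, Arc<String>>>,
    in_flight: Mutex<HashMap<String, Arc<Mutex<()>>>>,
}

impl Cache {
    fn load_or_fetch(&self, key: &str, fetch: impl FnOnce() -> String) -> Arc<String> {
        // Fast path: hit under a short-lived lock on the cache map alone.
        if let Some(v) = self.entries.lock().unwrap().get(key) {
            return Arc::clone(v);
        }
        // Miss: take (or create) the per-key slot, so concurrent callers for
        // the same key queue here instead of fetching twice.
        let slot = Arc::clone(
            self.in_flight
                .lock()
                .unwrap()
                .entry(key.to_string())
                .or_insert_with(|| Arc::new(Mutex::new(()))),
        );
        let guard = slot.lock().unwrap();
        // Double check: a caller that held the slot before us may have
        // populated the cache while we waited.
        let value = if let Some(v) = self.entries.lock().unwrap().get(key) {
            Arc::clone(v)
        } else {
            // Slow path: fetch WITHOUT holding the global cache lock.
            let v = Arc::new(fetch());
            self.entries
                .lock()
                .unwrap()
                .insert(key.to_string(), Arc::clone(&v));
            v
        };
        // Drop the guard BEFORE pruning, then remove the slot only if no other
        // caller still holds a clone (count 2 = map entry + local `slot`).
        drop(guard);
        let mut map = self.in_flight.lock().unwrap();
        let done = map
            .get(key)
            .map(|current| Arc::ptr_eq(current, &slot) && Arc::strong_count(&slot) == 2)
            .unwrap_or(false);
        if done {
            map.remove(key);
        }
        value
    }
}

fn main() {
    let cache = Cache {
        entries: Mutex::new(HashMap::new()),
        in_flight: Mutex::new(HashMap::new()),
    };
    let v1 = cache.load_or_fetch("tile", || "fetched".to_string());
    // Second call is a cache hit; the fetch closure never runs.
    let v2 = cache.load_or_fetch("tile", || unreachable!());
    assert_eq!(*v1, "fetched");
    assert!(Arc::ptr_eq(&v1, &v2));
    // The in-flight side map was pruned once the load completed.
    assert!(cache.in_flight.lock().unwrap().is_empty());
}
```

Under this design the `in_flight` map only ever holds keys with a load in progress, which is what the review's pruning check is meant to guarantee.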
Validation
Fixes #7