
Updates #1

Open
autonull wants to merge 1188 commits into deepstupid:multilevel_research from ben-manes:master

Conversation

@autonull

No description provided.

@ben-manes force-pushed the master branch 2 times, most recently from 940dca6 to 2f271e9 on July 23, 2022 22:15
@ben-manes force-pushed the master branch 2 times, most recently from ac8e049 to 08b24cd on August 27, 2022 03:02
@ben-manes force-pushed the master branch 17 times, most recently from a337c19 to 48d5246 on September 10, 2022 01:17
@ben-manes force-pushed the master branch 2 times, most recently from 53f222c to 00b1503 on September 26, 2022 02:50
@ben-manes force-pushed the master branch 2 times, most recently from 73f40d3 to 6abd616 on October 2, 2022 09:40
@ben-manes force-pushed the master branch 2 times, most recently from a7d31f5 to d74756d on October 14, 2022 06:02
ben-manes and others added 30 commits March 15, 2026 20:16
In remap(), ctx.oldWeight was not set before the eviction check for
expired/collected entries, causing recordEviction to log weight 0
instead of the actual entry weight. Moved ctx.oldWeight assignment
before the cause check, matching doComputeIfAbsent.

Added discardRefresh calls in three remap eviction paths that were
missing them: the !computeIfAbsent early return, the newValue==null
path when cause was already set, and the catch-commit-rethrow handler.
Without these, a pending refresh could complete after eviction and
incorrectly overwrite a subsequently inserted entry.
In determineAdjustment(), hitsInSample() + missesInSample() was computed
as int + int, which silently overflows when the true sum exceeds
Integer.MAX_VALUE. This occurs for caches with maximumSize >= ~107M where
sampleSize saturates at Integer.MAX_VALUE. The overflowed (negative) sum
permanently fails the requestCount < sampleSize check, preventing the
hill climber from adjusting the window/main partition and feeding a
negative hit rate into the step size calculation.
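The widening fix can be sketched as follows; `hitsInSample`/`missesInSample` are stand-ins for the real counters and the values are illustrative, not Caffeine's internals:

```java
// Sketch of the int-addition overflow described above. Two large int counters
// whose true sum exceeds Integer.MAX_VALUE wrap negative when added as ints.
class SampleSum {
    static final int hitsInSample = 1_500_000_000;
    static final int missesInSample = 1_500_000_000;

    // Buggy: the addition happens in int arithmetic and wraps negative
    // before the result is widened to long.
    static long buggySum() {
        return hitsInSample + missesInSample;
    }

    // Fixed: widen one operand first so the addition happens in long space.
    static long fixedSum() {
        return (long) hitsInSample + missesInSample;
    }
}
```

The wrapped negative sum is what permanently fails the `requestCount < sampleSize` check.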
Allows users to supply a per-copier serialization filter as a more
surgical alternative to the JVM-wide jdk.serialFilter property.
Consistent with expire() and peekAhead() which already use the
timerWheel length mask rather than the span mask for tick-to-bucket
conversion.
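As a sketch, tick-to-bucket conversion on a power-of-two wheel masks by the array length; the wheel size here is an illustrative assumption:

```java
// Illustrative tick-to-bucket conversion for a power-of-two timer wheel.
class WheelIndex {
    static final int WHEEL_SIZE = 64; // assumed power-of-two bucket count

    static int bucketOf(long ticks) {
        // Mask by (length - 1): equivalent to ticks % WHEEL_SIZE for
        // non-negative ticks, matching what expire() and peekAhead() do.
        return (int) (ticks & (WHEEL_SIZE - 1));
    }
}
```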
…ulk load

Test scenarios that were previously uncovered:
- Expiry.expireAfterCreate throwing on expired entry during compute (catch-
  commit-rethrow pattern with variable expiration and weighted eviction stats)
- Mapping function throwing on expired entry during compute (eviction committed
  with correct old weight in stats, EXPIRED removal cause)
- Compute on expired entry where remapping returns same object instance
  (setValue skip optimization, EXPIRED notification, entry survives)
- AsyncExpiry delegating to user's expireAfterCreate on future completion when
  the user callback throws (entry removed, load failure logged)
- Bulk loadAll with mid-iteration weigher failure (partial put, stats)
- replace() on expired entry scheduling cleanup for deferred eviction
* Fix errorprone jar input: use RELATIVE path sensitivity for cache relocatability

* Fix cache relocatability: use RELATIVE path sensitivity and @Internal for absolute paths

* Fix remaining inputs.files() calls missing RELATIVE path sensitivity

* Fix javadoc cache relocatability: defer snippet-path option, add RELATIVE sensitivity

* Fix jcache:javadoc cache relocatability: add RELATIVE path sensitivity to unzipped javadoc input
Co-authored-by: Ao Li <aoli-al@users.noreply.github.com>
replace(K, V) and replace(K, V, V) called notifyRemoval directly
instead of notifyOnReplace, firing spurious REPLACED notifications
when old and new async futures resolved to the same value instance.
All other mutation paths (put, computeIfPresent) correctly used
notifyOnReplace which suppresses the notification in this case.
…nitialCapacity

These fields used UNSET_INT (-1) as a sentinel for "not configured," but
the parser accepted -1 as valid input. Since toBuilder() skipped fields
equal to UNSET_INT, a spec string like "maximumSize=-1" silently produced
an unbounded cache with no error.

Changed the fields from primitive int/long to @Nullable Integer/Long,
eliminating the sentinel collision. Null means "not configured" and any
non-null value (including negatives) is the user's input. Added early
validation to reject negative values at parse time.

Added structured fuzz test that builds spec strings from random inputs,
parses, builds a cache, and verifies each configured feature is present.
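A minimal sketch of the sentinel collision and the nullable fix, with illustrative names rather than Caffeine's actual parser internals:

```java
// Before: a primitive field with -1 as the "not configured" sentinel, so the
// spec string "maximumSize=-1" parses to exactly the sentinel value and is
// silently skipped by toBuilder(). After: a nullable box, where only null
// means "not configured" and every parsed value is the user's input.
class SpecField {
    static final long UNSET_INT = -1;

    // Buggy shape: -1 from the user collides with the UNSET_INT sentinel.
    static long parsePrimitive(String value) {
        return Long.parseLong(value);
    }

    // Fixed shape: null alone means unset; negatives are rejected eagerly
    // here (a follow-up commit defers this check to the builder instead).
    static Long parseBoxed(String value) {
        long size = Long.parseLong(value);
        if (size < 0) {
            throw new IllegalArgumentException("maximumSize must be non-negative: " + size);
        }
        return size;
    }
}
```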
The expiration nanos fields defaulted to 0 and used > 0 checks, making
"not configured" (default 0) indistinguishable from "configured as 0"
(valid evict-immediately semantics). After serialization round-trip, a
cache with zero-duration expiration silently became a cache with no
expiration.

Changed to use UNSET_INT (-1) sentinel, consistent with how maximumSize
and maximumWeight are already handled in the same class.
The writeTime LSB is used as a refresh-in-progress flag. ageOf in
BoundedExpireAfterWrite and BoundedRefreshAfterWrite computed
now - node.getWriteTime() without masking, causing the returned
age to be off by 1 nanosecond during refresh. nodeToCacheEntry
already masked correctly with (now & ~1L) - (node.getWriteTime() & ~1L).
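The corrected age computation can be sketched as follows; the timestamps are illustrative and the field access is simplified:

```java
// The writeTime LSB is a refresh-in-progress flag, so age math must clear it
// on both operands before subtracting, as nodeToCacheEntry already did.
class WriteAge {
    // Buggy shape: off by 1 ns whenever the flag bit is set.
    static long unmaskedAge(long now, long writeTime) {
        return now - writeTime;
    }

    // Fixed shape: mask the flag bit from both timestamps first.
    static long maskedAge(long now, long writeTime) {
        return (now & ~1L) - (writeTime & ~1L);
    }
}
```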
The early negative-value checks in CaffeineSpec duplicated the builder's
own validation. Removed to be consistent with the deferred validation
pattern used throughout CaffeineSpec. The @Nullable sentinel fix from
the prior commit already prevents the -1 collision; negative values
now flow through to the builder which rejects them with its own error.
Both methods called scheduleDrainBuffers() inside the iteration loop
for every expired/collected entry encountered. While cheap (opaque
read + tryLock bail out), it's cleaner to track and call once after
the loop.
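The hoisting can be sketched like this; the method names and expiry predicate are stand-ins, not the cache's real internals:

```java
// Sketch of tracking a flag during the scan and scheduling the drain once
// after the loop, instead of calling scheduleDrainBuffers() per entry.
class DrainOnce {
    static int drainCalls = 0;

    static void scan(java.util.List<Boolean> expiredFlags) {
        boolean scheduleDrain = false;
        for (boolean expired : expiredFlags) {
            if (expired) {
                scheduleDrain = true; // record only; no per-entry scheduling
            }
        }
        if (scheduleDrain) {
            drainCalls++; // scheduleDrainBuffers() stand-in: at most one call
        }
    }
}
```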
The onlyIfAbsent fast paths in put() called expiry() (cache-level)
instead of using the expiry parameter for tryExpireAfterRead. Not
currently observable since FixedExpireAfterWrite.expireAfterRead
returns currentDuration unchanged, but inconsistent with the slow
path which correctly uses the parameter.
The lambda declared (k, executor) but applied the captured outer key
instead of its own parameter k. Functionally identical, but k should be
used directly.
Fix advance() to correctly expire timers when nanoTime() crosses from
negative to zero by using separate variables for delta computation
(compensated) and bucket scanning (raw), and changing the compensation
condition from > 0 to >= 0. Fix expire() to clamp delta in long space
before narrowing to int, preventing silent no-op on large deltas. Fix
getExpirationDelay() to clamp async computing entries' negative age to
zero, preventing ~220-year scheduler delays (aedile#49).
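The two clamps can be sketched as follows; the variable names are illustrative:

```java
// Sketch of the clamps described above.
class TimerClamps {
    // expire(): clamp the delta in long space before narrowing to int, so a
    // very large delta cannot wrap negative and silently skip the bucket scan.
    static int clampedSteps(long delta) {
        return (int) Math.min(delta, Integer.MAX_VALUE);
    }

    // getExpirationDelay(): an async computing entry can carry a writeTime
    // ahead of now; clamp the negative age to zero instead of producing a
    // huge (~220-year) scheduler delay.
    static long clampedAge(long now, long writeTime) {
        return Math.max(0L, now - writeTime);
    }
}
```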
Policy methods (weightOf, ageOf, getExpiresAfter, setExpiresAfter)
now check node.getValue() == null before returning metadata, consistent
with containsKey's handling of GC-collected weak/soft value entries.
The async branch wrapped the future-resolution work in
whenCompleteAsync(..., executor()) and thenAcceptAsync(..., executor()),
so the outer comparison stage was submitted to the executor directly.
If the executor rejected the continuation (shutdown, bounded queue
saturated), the dependent stage completed exceptionally but nobody
observed the rejection — the REPLACED notification was silently lost.

Run the nv/ov comparisons synchronously on the future-completer thread
and let notifyRemoval perform the single executor submission. It
already falls back to running the listener inline if the executor
rejects, matching every other notifyRemoval call site. The comparison
itself is a pointer check, so the thread it runs on is irrelevant.
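The failure mode can be demonstrated in miniature with a rejecting executor; this is a sketch of the general CompletableFuture behavior, not Caffeine's actual call sites:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.atomic.AtomicBoolean;

// whenCompleteAsync submits the continuation to the executor: if the executor
// rejects, the dependent stage completes exceptionally and the callback never
// runs. whenComplete runs the callback on the completer thread instead.
class SyncContinuation {
    static boolean observedAsync(CompletableFuture<String> f) {
        AtomicBoolean observed = new AtomicBoolean();
        f.whenCompleteAsync((v, e) -> observed.set(true),
            task -> { throw new RejectedExecutionException("saturated"); });
        f.complete("value");
        return observed.get(); // rejection silently loses the callback
    }

    static boolean observedSync(CompletableFuture<String> f) {
        AtomicBoolean observed = new AtomicBoolean();
        f.whenComplete((v, e) -> observed.set(true)); // completer thread
        f.complete("value");
        return observed.get();
    }
}
```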
The sync views (CacheView, LoadingCacheView) lacked writeReplace and
readObject protection, unlike the 8 concrete cache types. A crafted
byte stream could bypass writeReplace and directly deserialize a view
with a null or malicious asyncCache field.

Add SyncViewProxy that wraps the async cache and recreates the sync
view via asyncCache.synchronous() on deserialization. Add readObject
to AbstractCacheView that rejects direct deserialization attempts.
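A minimal sketch of the serialization-proxy pattern described above; the class names echo the commit message, but the bodies are illustrative (the real proxy recreates the view via asyncCache.synchronous(), and a String stands in for the wrapped cache):

```java
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.Serializable;

// The view always serializes through its proxy and rejects byte streams that
// try to bypass writeReplace and deserialize the view directly.
class SyncView implements Serializable {
    private static final long serialVersionUID = 1L;
    transient String asyncCache; // stand-in for the wrapped async cache

    SyncView(String asyncCache) { this.asyncCache = asyncCache; }

    // Serialize the proxy, never the view itself.
    private Object writeReplace() { return new SyncViewProxy(asyncCache); }

    // Reject a crafted stream that bypasses writeReplace.
    private void readObject(ObjectInputStream in) throws InvalidObjectException {
        throw new InvalidObjectException("Proxy required");
    }
}

class SyncViewProxy implements Serializable {
    private static final long serialVersionUID = 1L;
    final String asyncCache;

    SyncViewProxy(String asyncCache) { this.asyncCache = asyncCache; }

    // Recreate the view from the proxy on deserialization.
    private Object readResolve() { return new SyncView(asyncCache); }
}
```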
Port the upstream JCTools fix (d64b9b44) for MpscGrowableArrayQueue.
If buffer allocation fails after the producer index is set to odd
(signaling resize-in-progress), restore it to the original even value
so other producers can resume instead of spinning indefinitely.
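The recovery path can be sketched as follows; the index values and the allocation hook are illustrative, not the real queue's layout:

```java
// Sketch of the JCTools fix: the producer index is bumped to an odd value to
// signal resize-in-progress, so on allocation failure it must be restored to
// the original even value or other producers spin forever.
class ResizeRecovery {
    long producerIndex = 4; // even: no resize in progress

    void resize(boolean failAllocation) {
        long pIndex = producerIndex;
        producerIndex = pIndex + 1; // odd: other producers spin-wait
        try {
            allocate(failAllocation);
        } catch (OutOfMemoryError e) {
            producerIndex = pIndex; // restore the even value so producers resume
            throw e;
        }
        producerIndex = pIndex + 2; // resize done: next even value
    }

    void allocate(boolean fail) {
        if (fail) throw new OutOfMemoryError("simulated allocation failure");
    }
}
```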
- Clamp ceilingPowerOfTwo(int) to 2^30 for inputs exceeding the
  max signed power of two, avoiding a negative return value
- Add inline fallback to AsyncRemovalListener when the executor
  rejects, matching the sync notifyRemoval pattern
- Make MpscGrowableArrayQueue.allocate non-static so tests can
  override it to simulate OOME during resize
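The clamped ceilingPowerOfTwo can be sketched like this; the bit trick is the common leading-zeros formulation, shown here as an illustration rather than the library's exact source:

```java
class PowerOfTwo {
    // 2^30 is the largest power of two representable as a positive int, so
    // larger inputs clamp there instead of shifting into the sign bit and
    // returning a negative value.
    static int ceilingPowerOfTwo(int x) {
        return 1 << -Integer.numberOfLeadingZeros(Math.min(x, 1 << 30) - 1);
    }
}
```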