
feat: Drop persistent-store cache after FDv2 in-memory store init #384

Open

keelerm84 wants to merge 1 commit into main from mk/sdk-2222/drop-persistent-cache

Conversation


keelerm84 (Member) commented May 7, 2026

Summary

With FDv2, the in-memory store (InMemoryFeatureStoreV2) becomes the source of truth for flag evaluations once it receives its first full payload. The persistent store's CachingStoreWrapper continues to hold a duplicate copy of every flag and segment in its ExpiringCache, even though that cache is never read again post-init -- roughly doubling the in-memory flag footprint when a persistent store is configured.

This change adds a disable_cache hook that propagates from Store#set_basis through FeatureStoreClientWrapperV2 (and the RedisFeatureStore facade) down to CachingStoreWrapper, where it releases the ExpiringCache reference. The cache is still populated and useful during the bootstrap window before set_basis fires, so :expiration / :capacity options remain functional during that window; YARD docs on the Redis, DynamoDB, and Consul integrations note the new FDv2 behavior.
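For illustration, the release itself can be as small as dropping the instance-variable reference. A minimal sketch, with class internals condensed (the real CachingStoreWrapper carries more state than shown here):

```ruby
# Illustrative sketch only -- not the PR's actual code.
class CachingStoreWrapper
  # Called (via Store#set_basis -> FeatureStoreClientWrapperV2) once the
  # in-memory store has received its first full payload. Idempotent:
  # calling it again when @cache is already nil changes nothing.
  def disable_cache
    # Dropping the reference lets the GC reclaim the duplicate copies of
    # every flag and segment; read paths treat a nil @cache as "bypass".
    @cache = nil
  end
end
```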

While here, this also closes a latent TOCTOU window in CachingStoreWrapper's read paths by capturing @cache to a local variable before the existing nil-guards, so a concurrent disable_cache cannot crash an in-flight reader with a NoMethodError on nil.
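That is the usual capture-then-check idiom. A hedged sketch, assuming a hash-like cache; the helper names cache_key and fetch_from_core are placeholders, not the PR's actual code:

```ruby
def get(kind, key)
  cache = @cache                 # capture once; @cache may be nil'd concurrently
  unless cache.nil?
    cached = cache[cache_key(kind, key)]
    return cached unless cached.nil?
  end
  # Cache disabled (or a miss): go straight to the underlying store.
  fetch_from_core(kind, key)
end
```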

Mirrors the Python implementation from python-server-sdk PR #426.


Note

Medium Risk
Changes cache lifecycle behavior for persistent stores by disabling their local cache after the first full FDv2 payload; mistakes could impact persistent-store performance or initialization sequencing. Includes concurrency-sensitive cache handling and new warning-path logging that should be validated under load.

Overview
With FDv2, the SDK now drops the persistent-store in-memory cache once the in-memory store becomes authoritative (after a full TRANSFER_FULL payload), reducing duplicate flag/segment memory usage.

This introduces a disable_cache hook that propagates through Store#set_basis (best-effort, with a warning on failure), FeatureStoreClientWrapperV2, and the Redis feature-store facade down to Integrations::Util::CachingStoreWrapper, which releases its cache; the read paths are updated to capture @cache into a local variable so they cannot race with a concurrent disable.
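The best-effort forwarding amounts to a respond_to? guard plus a rescue-and-warn. A sketch under assumed names (@persistent_store, @logger, and the method name are illustrative):

```ruby
def drop_persistent_store_cache
  store = @persistent_store
  return if store.nil? || !store.respond_to?(:disable_cache)

  begin
    store.disable_cache
  rescue => e
    # Failing to drop the cache only costs memory, so warn and keep going
    # rather than letting it break basis initialization.
    @logger.warn("failed to disable persistent store cache: #{e}")
  end
end
```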

Updates YARD docs for Redis/DynamoDB/Consul cache options to note the FDv2 bootstrap-only behavior, and adds specs covering forwarding/no-op behavior, idempotency, race-safe bypass after disable, and tolerance of missing/raising disable_cache implementations.
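Those tolerance cases would look roughly like the following in RSpec; the forward_disable_cache helper and the doubles are hypothetical, not the PR's actual specs:

```ruby
describe "disable_cache forwarding" do
  it "is a no-op when the wrapped store lacks disable_cache" do
    bare_store = double("store")
    expect { forward_disable_cache(bare_store) }.not_to raise_error
  end

  it "warns and continues when disable_cache raises" do
    failing_store = double("store")
    allow(failing_store).to receive(:disable_cache).and_raise("boom")
    expect { forward_disable_cache(failing_store) }.not_to raise_error
  end
end
```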

Reviewed by Cursor Bugbot for commit 1db3fab.

@keelerm84 keelerm84 marked this pull request as ready for review May 7, 2026 17:41
@keelerm84 keelerm84 requested a review from a team as a code owner May 7, 2026 17:41
