Distributed singleflight for Rails cache misses to prevent stampedes on cold keys.
Cache Coalescer ensures that only one request computes a missing value while the rest wait for it. The first caller acquires a lock, computes the value, and writes it to the cache; other callers poll briefly and reuse the result. If the lock is held too long, an optional stale value can be served instead.
This is ideal for expensive cache-miss work such as API calls, report generation, or heavyweight database queries.
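The coalescing pattern described above can be sketched in plain Ruby. This is a minimal, single-process illustration of the idea, not the gem's actual implementation (class and method names here are made up for the sketch): the first caller for a key becomes the leader and runs the block; concurrent callers sleep until its result is ready.

```ruby
# Minimal single-process sketch of request coalescing (singleflight).
# The first caller for a key runs the block; everyone else waits for
# the leader's result instead of recomputing it.
class SingleFlight
  def initialize
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @state = {} # key => :pending while computing, [:done, value] after
  end

  def fetch(key)
    @mutex.synchronize do
      loop do
        case @state[key]
        in nil
          @state[key] = :pending # no one is computing: we are the leader
          break
        in :pending
          @cond.wait(@mutex)     # a leader is computing; sleep until broadcast
        in [:done, value]
          return value           # reuse the leader's result
        end
      end
    end

    value = yield # leader computes outside the lock
    @mutex.synchronize do
      @state[key] = [:done, value] # stands in for the cache write
      @cond.broadcast              # wake every waiter
    end
    value
  end
end
```

In the real gem the "done" state lives in the cache store and the leader election is a distributed lock, so the pattern also holds across processes.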
- Prevent thundering herds on cold cache keys
- Reduce p99 latency spikes during traffic bursts
- Protect downstream services from request stampedes
- Coalesce expensive fan-out workloads into a single computation
- Ruby 3.0+
- ActiveSupport 6.1+
- Works with any ActiveSupport cache store
- Best with Redis-backed stores for distributed locking
Also check out these related gems:
- Cache SWR: https://github.com/Elysium-Arc/cache-swr
- Faraday Hedge: https://github.com/Elysium-Arc/faraday-hedge
- Rack Idempotency Kit: https://github.com/Elysium-Arc/rack-idempotency-kit
- Env Contract: https://github.com/Elysium-Arc/env-contract
```ruby
# Gemfile
gem "cache-coalescer"
```

```ruby
value = Cache::Coalescer.fetch(
  "expensive-key",
  ttl: 60,
  lock_ttl: 5,
  wait_timeout: 2,
  store: Rails.cache
) do
  ExpensiveQuery.call
end
```

The Rails integration adds `Rails.cache.fetch_coalesced`:

```ruby
Rails.cache.fetch_coalesced("expensive-key", ttl: 60) { ExpensiveQuery.call }
```

Options:

- `ttl` (Integer): cache TTL in seconds
- `lock_ttl` (Integer): lock expiry in seconds
- `wait_timeout` (Float): how long waiters poll for a result
- `wait_sleep` (Float): polling interval in seconds
- `stale_ttl` (Integer): optional stale window; if set, stale values are returned on timeout
- `store`: ActiveSupport cache store (defaults to `Rails.cache` when available)
- `lock_client`: Redis client or `Cache::Coalescer::Lock::InMemoryLock`
If the cache store exposes `redis`, a Redis lock is used automatically. Otherwise, the gem falls back to an in-memory lock, which is safe only for single-process usage.
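A distributed lock of this kind is conventionally built on Redis `SET key token NX PX ttl`, with a token check on release so one process cannot delete a lock another process holds. The sketch below illustrates that pattern with a tiny in-memory stand-in for Redis (`FakeRedis`, `CoalescerLock`, and `set_nx` are illustrative names, not the gem's API; a real client would call `redis.set(key, token, nx: true, px: ttl_ms)`):

```ruby
require "securerandom"

# Stand-in for a Redis client so the sketch runs anywhere.
class FakeRedis
  def initialize
    @data = {}
  end

  # SET ... NX semantics: only set the key if it is absent.
  def set_nx(key, value)
    return false if @data.key?(key)
    @data[key] = value
    true
  end

  def get(key)
    @data[key]
  end

  def del(key)
    @data.delete(key)
  end
end

# Token-based lock: each acquisition gets a random token so that
# release only deletes the lock the caller actually holds.
class CoalescerLock
  def initialize(client)
    @client = client
  end

  # Returns a token on success, nil if someone else holds the lock.
  def acquire(key)
    token = SecureRandom.hex(8)
    @client.set_nx("lock:#{key}", token) ? token : nil
  end

  def release(key, token)
    # Guard against deleting a lock that expired and was re-acquired
    # elsewhere: its token would no longer match ours.
    @client.del("lock:#{key}") if @client.get("lock:#{key}") == token
  end
end
```

The token check matters because `lock_ttl` can expire mid-computation: without it, a slow leader could delete a lock that a second leader had already taken over.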
```shell
bundle exec rake release
```