By the end of this lesson, you will understand:
- Why caching matters for API performance
- How to implement time-based expiration (TTL)
- How to invalidate cache when data changes
- Common caching patterns and pitfalls
Imagine 100 users request the items list at the same time:
```
Request 1:   SELECT * FROM items  (100ms)
Request 2:   SELECT * FROM items  (100ms)
Request 3:   SELECT * FROM items  (100ms)
...
Request 100: SELECT * FROM items  (100ms)
```
Problem:
- Database does the same work 100 times
- Response time is slow (100ms each)
- Database CPU is wasted
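The waste is easy to quantify. A back-of-the-envelope sketch (the ~1 ms cost of an in-memory cache hit is an illustrative assumption, not a measurement):

```typescript
// Rough cost comparison for the 100-request scenario above.
const requests = 100;
const queryMs = 100; // one SELECT * FROM items
const hitMs = 1;     // assumed cost of serving from an in-memory Map

const withoutCache = requests * queryMs;            // every request queries the DB
const withCache = queryMs + (requests - 1) * hitMs; // 1 miss, then 99 hits

console.log(`without cache: ${withoutCache} ms of total query latency`); // 10000 ms
console.log(`with cache:    ${withCache} ms`);                           // 199 ms
```

Even with generous assumptions, the database does 1% of the work.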
With a cache in front of the database:

```
First request:
  → Cache miss
  → Query database
  → Store in cache (30-second TTL)
  → Return response

Next 99 requests:
  → Cache hit!
  → Return cached data immediately
```
File: src/cache/index.ts

```typescript
interface CacheEntry {
  value: unknown;
  expiresAt: number; // epoch ms after which the entry is stale
  createdAt: number;
  hits: number;      // per-entry hit counter
}

const cache = new Map<string, CacheEntry>();

export function getCache<T>(key: string): T | null {
  const entry = cache.get(key);
  if (!entry) return null;

  // Lazy expiration: evict on read once the TTL has passed
  if (Date.now() > entry.expiresAt) {
    cache.delete(key);
    return null;
  }

  return entry.value as T;
}

export function setCache<T>(key: string, value: T, ttlMs: number): void {
  cache.set(key, {
    value,
    expiresAt: Date.now() + ttlMs,
    createdAt: Date.now(),
    hits: 0
  });
}
```

Why expire entries at all?

- Freshness - Data changes, and the cache should reflect that
- Memory - An unbounded cache is a memory leak
- Staleness - Old data eventually expires on its own
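A quick standalone check of the expiry logic (this re-declares a minimal version of the cache so the snippet runs on its own):

```typescript
// Minimal re-declaration of the cache above, just to demonstrate lazy expiry.
interface Entry { value: unknown; expiresAt: number }
const demo = new Map<string, Entry>();

function set<T>(key: string, value: T, ttlMs: number): void {
  demo.set(key, { value, expiresAt: Date.now() + ttlMs });
}

function get<T>(key: string): T | null {
  const entry = demo.get(key);
  if (!entry) return null;
  if (Date.now() > entry.expiresAt) {
    demo.delete(key); // lazy eviction, same as getCache above
    return null;
  }
  return entry.value as T;
}

set('fresh', 'hello', 60_000); // expires in 1 minute
set('stale', 'bye', -1);       // already expired

console.log(get<string>('fresh')); // 'hello'
console.log(get<string>('stale')); // null — expired entries are removed on read
console.log(demo.has('stale'));    // false
```

Note that nothing evicts an expired entry until someone reads it; a production cache would also sweep periodically.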
| Data Type | TTL | Reason |
|---|---|---|
| Items list | 30 seconds | Changes on reserve/confirm |
| Single item | 30 seconds | Changes on reserve/confirm |
| User profile | 5 minutes | Infrequent changes |
| Configuration | 1 hour | Rarely changes |
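The table translates directly into a small constants object (the constant names here are illustrative, not from the project):

```typescript
// TTLs from the table above, in milliseconds. Names are illustrative.
const TTL = {
  ITEMS_LIST: 30_000,         // 30 seconds — changes on reserve/confirm
  SINGLE_ITEM: 30_000,        // 30 seconds — changes on reserve/confirm
  USER_PROFILE: 5 * 60_000,   // 5 minutes — infrequent changes
  CONFIGURATION: 60 * 60_000  // 1 hour — rarely changes
} as const;

console.log(TTL.USER_PROFILE); // 300000
```

Centralizing TTLs like this keeps them easy to tune later.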
File: src/routes/index.ts

```typescript
app.get('/items', async (req, res) => {
  const cached = getCache('items');
  if (cached) {
    logger.debug('Cache hit for items');
    return ok(res, cached);
  }

  const items = listItems();
  setCache('items', items, 30_000); // 30 seconds
  return ok(res, items);
});
```

When data changes, remove it from cache:
```typescript
// After reserving an item
function reserveItem(...) {
  // ... reserve logic ...
  db.prepare('UPDATE items SET availableQty = ...').run();

  // Invalidate cache
  invalidate('items');
  invalidate('item:item_1');
}
```

File: src/cache/index.ts
```typescript
export function invalidate(key: string): void {
  cache.delete(key);
  logger.debug('Cache invalidated', { key });
}

// Remove every key matching a wildcard, e.g. invalidatePattern('item:*')
export function invalidatePattern(pattern: string): void {
  const regex = new RegExp('^' + pattern.replace(/\*/g, '.*') + '$');
  for (const key of cache.keys()) {
    if (regex.test(key)) {
      cache.delete(key);
    }
  }
}
```

Three common caching patterns:

Cache-aside - check the cache first, fall back to the source on a miss:

```typescript
const cached = getCache('items');
if (cached) return cached;

const data = fetchFromDatabase();
setCache('items', data, TTL);
return data;
```

Write-through - update the cache whenever you write to the database:

```typescript
function updateItem(id, data) {
  db.update(id, data);          // Write to DB
  setCache('item:' + id, data); // Update cache
}
```

Stale-while-revalidate - return the cached value immediately and refresh it in the background:

```typescript
function getItem(id) {
  const cached = getCache('item:' + id);
  if (cached) {
    // Update cache asynchronously (a fuller version would only
    // refresh when the entry is close to expiring)
    fetchFromDatabase(id).then(data => {
      setCache('item:' + id, data);
    });
    return cached;
  }
  return fetchFromDatabase(id);
}
```

Be careful with:

- User-specific data (without scoping)

  ```typescript
  // Bad: shared across all users
  setCache('user', currentUser, 60_000);

  // Good: user-scoped key
  setCache(`user:${userId}`, currentUser, 60_000);
  ```

- Frequently changing data

  ```typescript
  // Bad: stock changes often, a long TTL serves stale counts
  setCache('stock:item_1', item.stock, 60_000);

  // Good: use a short TTL
  setCache('stock:item_1', item.stock, 5_000); // 5 seconds
  ```

- Large responses

  ```typescript
  // Bad: cache huge lists
  setCache('all-reservations', hugeArray, 30_000);

  // Good: paginate
  setCache('reservations:page:1', page1, 30_000);
  ```
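The cache-aside pattern is often wrapped in a small helper so routes can't forget the write-back step. A sketch (`getOrSet` is a hypothetical name, not part of this project):

```typescript
// Hypothetical cache-aside helper: one call handles lookup, miss, and write-back.
// Self-contained minimal cache for demonstration.
interface StoreEntry { value: unknown; expiresAt: number }
const store = new Map<string, StoreEntry>();

function getOrSet<T>(key: string, ttlMs: number, compute: () => T): T {
  const entry = store.get(key);
  if (entry && Date.now() <= entry.expiresAt) {
    return entry.value as T; // hit: skip compute entirely
  }
  const value = compute();   // miss: run the expensive operation once
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Usage: the second call is served from cache, so compute runs once.
let dbCalls = 0;
const fetchItems = () => { dbCalls++; return ['item_1']; };

const first = getOrSet('items', 30_000, fetchItems);  // miss → "queries the DB"
const second = getOrSet('items', 30_000, fetchItems); // hit → cached
console.log(dbCalls);          // 1
console.log(first === second); // true — same cached reference
```

Centralizing the pattern this way also gives you one place to add stats counters later.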
Track the hit rate to know whether caching is actually helping:

```typescript
// Assumes module-level counters bumped inside getCache, e.g.:
//   const stats = { hits: 0, misses: 0 };
export function getCacheStats() {
  const total = stats.hits + stats.misses;
  return {
    size: cache.size,
    hits: stats.hits,
    misses: stats.misses,
    hitRate: total === 0 ? 0 : stats.hits / total // avoid 0/0 on a cold cache
  };
}
```

Reading the hit rate:

- 90%+ = Excellent
- 70-90% = Good
- 50-70% = Fair
- < 50% = Consider not caching
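A minimal way to maintain those counters is to bump them inside the read path. A standalone sketch (the project's actual getCache may wire this differently):

```typescript
// Counters bumped on every read, plus a guarded hit-rate calculation.
const stats = { hits: 0, misses: 0 };
const data = new Map<string, unknown>();

function trackedGet(key: string): unknown {
  if (data.has(key)) {
    stats.hits++;
    return data.get(key);
  }
  stats.misses++;
  return null;
}

function hitRate(): number {
  const total = stats.hits + stats.misses;
  return total === 0 ? 0 : stats.hits / total; // avoid 0/0 on a cold cache
}

data.set('items', ['item_1']);
trackedGet('items');   // hit
trackedGet('items');   // hit
trackedGet('missing'); // miss
console.log(hitRate()); // 2/3 ≈ 0.67
```

If the rate stays low, the cache is churning: entries expire or get invalidated before anyone reads them.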
```bash
# First request - cache miss (slower)
time curl http://localhost:3000/api/v1/items

# Second request - cache hit (much faster!)
time curl http://localhost:3000/api/v1/items

# After reserve - cache invalidated
curl -X POST http://localhost:3000/api/v1/reserve \
  -H 'Content-Type: application/json' \
  -d '{"userId":"user_1","itemId":"item_1","qty":1}'

# Next get fetches fresh data
time curl http://localhost:3000/api/v1/items
```

| File | Purpose |
|---|---|
| src/cache/index.ts | Cache implementation |
| src/routes/index.ts | Cache usage in endpoints |
| src/services/reservations.ts | Cache invalidation |
Good candidates for caching:

```typescript
// An expensive database query
const users = getCache('all-users') ??
  db.query('SELECT * FROM users').all();

// A slow external API call
const products = getCache(`products:${page}`) ??
  fetchProductsFromAPI(page);

// A computed value
const stats = getCache('stats:user_123') ??
  calculateUserStats('user_123');
```

Common mistakes:

Forgetting to invalidate:

```typescript
// Bad: cache never invalidated
setCache('items', items, 30_000);
// ...later: items change, but the cache still serves old data
db.update('items', ...);

// Good: always invalidate after the write
db.update('items', ...);
invalidate('items');
```

Over-caching:

```typescript
// Bad: wastes memory
setCache('request-1', response, 30_000);
setCache('request-2', response, 30_000);
// ... 1000 more

// Good: only cache expensive operations
const result = expensiveQuery();
if (result.duration > 100) {
  setCache(key, result, TTL);
}
```

This project uses an in-memory Map; production systems typically use a dedicated cache like Redis:

```typescript
// In-memory cache (this project)
const cache = new Map();

// Production cache (Redis)
const redis = new Redis();
await redis.set(key, value, 'EX', 30); // 30-second TTL
```

A dedicated cache server gives you:

- Shared across multiple server instances
- Persistent (survives restarts)
- Distributed locking
- Advanced features (pub/sub, streams)
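One way to keep that future Redis switch cheap is to code against a tiny interface from day one. The in-memory version satisfies it, and a Redis-backed class would implement the same methods (a sketch, not project code; the interface name is an assumption):

```typescript
// A minimal async cache interface. Swapping Map for Redis later only means
// writing another class with the same three methods — routes don't change.
interface CacheStore {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, ttlMs: number): Promise<void>;
  del(key: string): Promise<void>;
}

class MemoryCache implements CacheStore {
  private entries = new Map<string, { value: unknown; expiresAt: number }>();

  async get<T>(key: string): Promise<T | null> {
    const e = this.entries.get(key);
    if (!e || Date.now() > e.expiresAt) return null;
    return e.value as T;
  }
  async set<T>(key: string, value: T, ttlMs: number): Promise<void> {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  async del(key: string): Promise<void> {
    this.entries.delete(key);
  }
}

// A RedisCache would wrap redis.get / redis.set(..., 'EX', ttl) instead.
const cacheStore: CacheStore = new MemoryCache();
cacheStore.set('items', ['item_1'], 30_000)
  .then(() => cacheStore.get<string[]>('items'))
  .then(v => console.log(v));
```

The methods are async even for the Map version precisely so the Redis implementation can slot in without touching callers.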
Key takeaways:

- Cache frequently accessed data - Reduce database load
- Set appropriate TTL - Balance freshness and performance
- Invalidate on change - Stale data is worse than no cache
- Monitor hit rate - Know if caching is helping
- Don't over-cache - Memory is finite
Task: Add caching to a new endpoint

- Create a `/stats` endpoint that returns reservation statistics
- Cache the result for 1 minute
- Invalidate the cache when a new reservation is created
- Test with `curl -w "\nTime: %{time_total}s\n" http://localhost:3000/api/v1/stats`
Continue to Lesson 6: Logging & Observability to learn how to make your API debuggable in production.
💡 Tip: Cache hit rate > 80% is good. Below 50%, consider whether caching is worth it!