Cache TTL Best Practices: How Long Should You Cache Data?
Setting cache TTL (Time To Live) is one of the most important—and most overlooked—caching decisions. Too short and you lose performance benefits. Too long and users see stale data. Here's how to get it right.
The TTL Decision Framework
Every TTL decision balances three factors:
- Data freshness requirements: How stale is acceptable?
- Access frequency: How often is this data requested?
- Change frequency: How often does this data change?
The ideal TTL is long enough to maximize cache hits, but short enough that stale data doesn't cause problems.
TTL Recommendations by Data Type
| Data Type | Recommended TTL | Reasoning |
|---|---|---|
| User sessions | 15-30 minutes | Security + activity timeout |
| API rate limits | 1-60 seconds | Must be accurate |
| Product catalog | 5-15 minutes | Changes infrequently |
| User profiles | 5-10 minutes | Medium change frequency |
| Search results | 1-5 minutes | Personalization needs |
| Static config | 1-24 hours | Rarely changes |
| Feature flags | 30-60 seconds | Need fast propagation |
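These ranges are starting points, not rules. One way to keep them consistent across a codebase is a single TTL policy module that everything else reads from. Here is a minimal sketch in the same style as the examples below; the key names and values (taken from the middle of each range) are illustrative assumptions, not a prescribed API:

// ttl-policy.js - one place to define TTLs (in seconds) instead of scattering magic numbers
const TTL_POLICY = {
  userSession: 1800,    // 30 min: security + activity timeout
  rateLimit: 30,        // must stay accurate
  productCatalog: 600,  // 10 min: changes infrequently
  userProfile: 450,     // ~7.5 min: medium change frequency
  searchResults: 180,   // 3 min: personalization needs
  staticConfig: 43200,  // 12 hours: rarely changes
  featureFlag: 45,      // fast propagation
};

function ttlFor(dataType) {
  const ttl = TTL_POLICY[dataType];
  if (ttl === undefined) throw new Error(`No TTL policy defined for "${dataType}"`);
  return ttl;
}

// Usage: await cache.set(`profile:${userId}`, profile, { ttl: ttlFor('userProfile') });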
Pattern 1: Short TTL + Active Invalidation
For data that changes unpredictably, use short TTLs as a safety net, but actively invalidate on changes:
// Set cache with short TTL
await cache.set(`user:${userId}`, userData, { ttl: 300 }); // 5 min
// But invalidate immediately on updates
async function updateUser(userId, updates) {
await db.update('users', userId, updates);
await cache.delete(`user:${userId}`); // Active invalidation
}
Pattern 2: Long TTL + Cache Warming
For stable data, use longer TTLs but pre-warm the cache to avoid cold starts:
// Long TTL for stable data
await cache.set('site:config', config, { ttl: 86400 }); // 24 hours
// Warm cache on deployment
async function warmConfigCache() {
const config = await db.query('SELECT * FROM site_config');
await cache.set('site:config', config, { ttl: 86400 });
}
// Also refresh periodically in background
setInterval(warmConfigCache, 3600000); // Every hour
Pattern 3: Sliding Expiration
For session-like data, reset TTL on each access:
async function getSession(sessionId) {
const session = await cache.get(`session:${sessionId}`);
if (session) {
// Extend TTL on access
await cache.expire(`session:${sessionId}`, 1800); // Reset to 30 min
}
return session;
}
This keeps active sessions alive while letting inactive ones expire.
Pattern 4: Stale-While-Revalidate
Serve stale data immediately while refreshing in the background:
async function getWithSWR(key, fetchFn, { ttl, staleTTL }) {
  const cached = await cache.get(key);
  const metadata = await cache.get(`${key}:meta`);
  if (cached && metadata) {
    const age = Date.now() - metadata.cachedAt;
    if (age > ttl * 1000) {
      // Data is stale - refresh in background
      refreshInBackground(key, fetchFn, ttl, staleTTL);
    }
    // Return cached (possibly stale) data immediately
    return cached;
  }
  // No cache - fetch and store, keeping the entry around for the full stale window
  const data = await fetchFn();
  await cache.set(key, data, { ttl: staleTTL });
  await cache.set(`${key}:meta`, { cachedAt: Date.now() }, { ttl: staleTTL });
  return data;
}
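The refreshInBackground helper above is not defined in this snippet. Here is a minimal sketch of one way to write it, assuming the same cache API and using an in-memory Set to prevent overlapping refreshes of the same key; both the name and the dedup approach are illustrative:

// Illustrative implementation of the refreshInBackground helper used above
const inFlightRefreshes = new Set();

function refreshInBackground(key, fetchFn, ttl, staleTTL) {
  if (inFlightRefreshes.has(key)) return; // a refresh is already running for this key
  inFlightRefreshes.add(key);

  fetchFn()
    .then(async (data) => {
      // Store the fresh value and reset the freshness metadata
      await cache.set(key, data, { ttl: staleTTL });
      await cache.set(`${key}:meta`, { cachedAt: Date.now() }, { ttl: staleTTL });
    })
    .catch((err) => {
      // If the refresh fails, keep serving the stale value until the next attempt
      console.error(`Background refresh failed for ${key}`, err);
    })
    .finally(() => inFlightRefreshes.delete(key));
}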
Common TTL Mistakes
- Same TTL for everything: Different data has different freshness needs
- Forgetting thundering herd: When TTL expires, many requests hit the database simultaneously (see the jitter sketch after this list)
- No TTL at all: Memory fills up with stale data
- Extremely long TTLs without invalidation: Users see outdated data for hours
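A cheap mitigation for the thundering herd is adding random jitter to TTLs so keys cached at the same moment don't all expire at the same moment. A minimal sketch, using the same cache.set call shape as the examples above:

// Spread expirations out by adding up to ±10% random jitter to the base TTL
function withJitter(baseTTLSeconds, jitterRatio = 0.1) {
  const maxOffset = baseTTLSeconds * jitterRatio;
  const offset = (Math.random() * 2 - 1) * maxOffset; // random value in [-maxOffset, +maxOffset]
  return Math.round(baseTTLSeconds + offset);
}

// A 300-second TTL becomes anything from ~270 to ~330 seconds:
// await cache.set(`product:${productId}`, product, { ttl: withJitter(300) });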
Dynamic TTL Based on Access Patterns
The smartest approach: adjust TTL based on how the data is actually used:
// accessIntervals: seconds between consecutive accesses to this key
function calculateDynamicTTL(accessIntervals) {
  if (accessIntervals.length === 0) return 60; // no history yet - start conservative
  const avgTimeBetweenAccess =
    accessIntervals.reduce((sum, interval) => sum + interval, 0) / accessIntervals.length;
  // TTL should be 2-3x the access interval:
  // popular data gets a longer TTL, rarely accessed data a shorter one
  const dynamicTTL = Math.min(
    avgTimeBetweenAccess * 2.5,
    86400 // Max 24 hours
  );
  return Math.max(dynamicTTL, 60); // Min 1 minute
}
This ensures frequently accessed data stays cached while rarely used data doesn't waste memory.
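How you gather the access history is up to your instrumentation. One illustrative approach (the accessLog map and helper below are assumptions, not part of any cache client) is to record a timestamp on every read and convert consecutive timestamps into intervals before setting the TTL:

// Record access timestamps per key (kept short to bound memory)
const accessLog = new Map();

async function getWithDynamicTTL(key, fetchFn) {
  const now = Date.now();
  const history = accessLog.get(key) || [];
  accessLog.set(key, [...history, now].slice(-20)); // keep the last 20 accesses

  let value = await cache.get(key);
  if (value == null) {
    value = await fetchFn();
    // Convert consecutive timestamps into intervals (in seconds)
    const intervals = history.slice(1).map((t, i) => (t - history[i]) / 1000);
    await cache.set(key, value, { ttl: calculateDynamicTTL(intervals) });
  }
  return value;
}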
Let AI optimize your cache TTLs
Cachee.ai automatically adjusts TTLs based on real access patterns—no manual tuning required.
Real-World Implementation Notes
Production cache deployments don't fail because the technology is wrong. They fail because of three operational problems that nobody warns you about until you're already in the incident.
The first problem is configuration drift. Cache TTLs, eviction policies, and memory limits start out tuned to your workload and slowly drift as your traffic patterns evolve. A configuration that was optimal six months ago is now leaving 30% of your hit rate on the table because your access patterns shifted and nobody re-tuned. The fix is treating cache configuration as code that lives in version control with the rest of your infrastructure, and reviewing it on the same cadence as database indexes — quarterly at minimum.
The second problem is silent invalidation bugs. Your cache returns a value, your application uses it, and only later does someone notice the value was stale. The user already saw the wrong number on their dashboard. The damage is done. The mitigation is instrumenting your cache layer to track stale-read rates and treating any spike above 0.5% as a P1 incident, not a "we'll look at it next sprint" backlog item.
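What "instrumenting your cache layer" looks like depends on your metrics stack. Here is a minimal sketch of one approach, sampling a small fraction of cache hits and comparing them to the source of truth; the sampling rate, counters, and function names are illustrative:

// Track how often cache hits disagree with the source of truth (sampled)
const staleStats = { sampled: 0, stale: 0 };
const SAMPLE_RATE = 0.01; // verify 1% of hits against the database

async function getWithStaleTracking(key, fetchFromDb, ttl) {
  const cached = await cache.get(key);
  if (cached == null) {
    const fresh = await fetchFromDb();
    await cache.set(key, fresh, { ttl });
    return fresh;
  }
  if (Math.random() < SAMPLE_RATE) {
    const fresh = await fetchFromDb();
    staleStats.sampled += 1;
    if (JSON.stringify(fresh) !== JSON.stringify(cached)) staleStats.stale += 1;
  }
  return cached;
}

// Alert when staleStats.stale / staleStats.sampled rises above 0.005 (0.5%)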
The third problem is eviction storms during deploys. When you deploy a new version of your application that changes which keys are hot, the existing cache entries become irrelevant overnight. The first few minutes after deploy see a flood of cache misses that hammer your backend. The mitigation is cache warming — running your application against a representative traffic sample before promoting it to serve production traffic. Most teams skip this step and pay for it every release.
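A simple version of that warming step, assuming you can export the most frequently read keys from recent traffic (the key list source and fetch helper here are illustrative):

// Warm the cache with the hottest keys from a recent traffic sample before
// the new version starts taking production traffic
async function warmFromTrafficSample(hotKeys, fetchFn, ttl) {
  for (const key of hotKeys) {
    const existing = await cache.get(key);
    if (existing == null) {
      const value = await fetchFn(key);
      await cache.set(key, value, { ttl });
    }
  }
}

// e.g. called from a pre-deploy hook:
// await warmFromTrafficSample(topKeysFromYesterday, loadFromDb, 600);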
None of these problems are technology problems. They're operational discipline problems that the right tools make visible but only humans can actually solve. The cache layer is part of your production system and deserves the same operational attention as any other production component.
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against pre-warmed in-memory cache. This is the floor — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over same-AZ network when L1 misses cascade through.
The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
The Three-Tier Cache Architecture That Actually Works
Most caching discussions treat the cache as a single layer. Production reality is that high-performance caches are tiered, with each tier optimized for a different latency and capacity tradeoff. Understanding the tier boundaries is what separates teams that get caching right from teams that fight it for years.
L0 — In-process hot tier. This is the cache that lives inside your application process address space. Read latency is bounded by L1/L2 CPU cache plus a hash function — typically 20-100 nanoseconds. Capacity is limited by your application's heap budget, usually 1-10 GB on production servers. Hit rate on hot keys approaches 100% because there's no network in the path. This is where your tightest hot loop reads should land.
L1 — Local sidecar tier. A cache process running on the same host (or in the same pod for Kubernetes deployments) accessed via Unix domain socket or loopback TCP. Read latency is 5-50 microseconds depending on protocol overhead. Capacity is bounded by host RAM, typically 10-100 GB. This tier absorbs cross-process cache traffic from multiple application instances on the same host without paying the network round-trip cost.
L2 — Distributed remote tier. Networked Redis, ElastiCache, or Memcached. Read latency is 100 microseconds to several milliseconds depending on network distance. Capacity is effectively unbounded by clustering. This is the source of truth for cached values across your entire fleet, and the L0/L1 tiers fall back to it on miss.
The compounding effect is what makes this architecture win. When the L0 hit rate is 90%, the L1 hit rate is 95% on the remaining 10%, and the L2 hit rate is 99% on the remainder, your effective cache hit rate is 99.995% (the only true misses are the 10% × 5% × 1% = 0.005% that fall through all three tiers), with the median read served entirely from L0 in tens of nanoseconds. That's a different universe of performance than treating the cache as a single networked tier.
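The read path through the three tiers is a chain of fallbacks with write-back on the way up. A minimal sketch, assuming an in-process Map for L0 and generic async clients for L1 and L2; the client names are placeholders, not a specific product's API, and L0 size limits and eviction are omitted for brevity:

// L0: in-process map; L1: local sidecar client; L2: distributed cache client.
// l1Cache and l2Cache are placeholder async clients exposing get/set.
const l0Cache = new Map();

async function tieredGet(key, l1Cache, l2Cache, fetchFromDb, ttl) {
  // L0: nanoseconds, no network
  if (l0Cache.has(key)) return l0Cache.get(key);

  // L1: same-host sidecar, microseconds
  let value = await l1Cache.get(key);
  if (value == null) {
    // L2: networked cache, sub-millisecond to milliseconds
    value = await l2Cache.get(key);
    if (value == null) {
      // True miss: hit the database and populate L2
      value = await fetchFromDb(key);
      await l2Cache.set(key, value, { ttl });
    }
    await l1Cache.set(key, value, { ttl });
  }

  // Write back into L0 so the next read is served in-process
  l0Cache.set(key, value);
  return value;
}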
What This Actually Costs
Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on AWS ElastiCache cache.r7g.xlarge primary plus a read replica: roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
Compounded over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. The bigger savings usually come from eliminating cross-AZ data transfer charges, which Redis-as-a-service architectures incur on every read that crosses an availability zone.
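The arithmetic behind those figures, with the article's quoted numbers plugged in as assumptions so you can substitute your own (prices vary by region, instance type, and reservation):

// Illustrative cost math using the figures quoted above (USD per month)
const currentNodes = 480;        // cache.r7g.xlarge primary + read replica
const migratedL2Fallback = 150;  // midpoint of the quoted $120-180 range

const monthlySavings = currentNodes - migratedL2Fallback; // ≈ $330/month
const annualSavings = monthlySavings * 12;                // ≈ $3,960/year per workload
// Cross-AZ transfer charges (the $50-150/month above) drop out on top of this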