Distributed Caching Patterns for Kubernetes
December 22, 2025 • 8 min read • Cloud Native
Kubernetes brings unique challenges to caching. Pods come and go. IPs change. Storage is ephemeral by default. Here's how to build reliable distributed caching in K8s environments.
Pattern 1: External Managed Cache
The simplest approach is to use your cloud provider's managed cache service:
# AWS ElastiCache, GCP Memorystore, Azure Cache
apiVersion: v1
kind: Secret
metadata:
  name: redis-credentials
type: Opaque
data:
  host: cmVkaXMuYWJjMTIzLmNhY2hlLmFtYXpvbmF3cy5jb20=
  port: NjM3OQ==
  password: c2VjcmV0cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  template:
    spec:
      containers:
      - name: api
        env:
        - name: REDIS_HOST
          valueFrom:
            secretKeyRef:
              name: redis-credentials
              key: host
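On the application side, the container just reads the injected environment variables and connects like any other Redis client. A minimal sketch, assuming ioredis as the client library and that REDIS_PORT and REDIS_PASSWORD are injected from the same Secret the way REDIS_HOST is above:

// Minimal sketch: connect to the managed cache using values injected from the Secret.
// Assumes ioredis; REDIS_PORT and REDIS_PASSWORD are assumed to be injected like REDIS_HOST.
const Redis = require("ioredis");

const cache = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT || 6379),
  password: process.env.REDIS_PASSWORD,
  // add tls: {} here if in-transit encryption is enabled on the managed endpoint
});

// Simple read-through helper: serve from cache, otherwise load and backfill with a TTL.
async function getOrLoad(key, loader, ttlSeconds = 300) {
  const cached = await cache.get(key);
  if (cached !== null) return JSON.parse(cached);
  const fresh = await loader();
  await cache.set(key, JSON.stringify(fresh), "EX", ttlSeconds);
  return fresh;
}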
Pros: Zero maintenance, automatic failover, scaling. Cons: Cloud lock-in, network latency, cost.
Pattern 2: StatefulSet Redis Cluster
Run Redis inside Kubernetes with persistent storage:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: data
          mountPath: /data
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
Why StatefulSet? Stable network identities (redis-0, redis-1, redis-2) and persistent volumes that survive pod restarts.
Pattern 3: Sidecar Cache
Run a local cache alongside each application pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-with-cache
spec:
  template:
    spec:
      containers:
      - name: api
        image: my-api:latest
        env:
        - name: CACHE_HOST
          value: "localhost" # Sidecar is on localhost
        - name: CACHE_PORT
          value: "6379"
      - name: cache-sidecar
        image: redis:7-alpine
        ports:
        - containerPort: 6379
        resources:
          requests:
            memory: "64Mi"
          limits:
            memory: "128Mi"
Pros: Ultra-low latency (no network hop), simple. Cons: Cache not shared between pods, more memory per pod.
Pattern 4: Two-Tier Caching
Combine local sidecar (L1) with shared Redis (L2):
// Application code
class TwoTierCache {
  constructor(localCache, sharedCache) {
    this.l1 = localCache;  // Sidecar Redis
    this.l2 = sharedCache; // Cluster Redis
  }

  async get(key) {
    // Check L1 first (fastest)
    let value = await this.l1.get(key);
    if (value) return value;

    // Check L2
    value = await this.l2.get(key);
    if (value) {
      // Populate L1 for next time
      await this.l1.set(key, value, { ex: 60 });
      return value;
    }

    return null;
  }

  async set(key, value, ttl) {
    // Write to both
    await Promise.all([
      this.l1.set(key, value, { ex: Math.min(ttl, 60) }),
      this.l2.set(key, value, { ex: ttl })
    ]);
  }
}
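Wiring the class up is just two clients pointed at different endpoints. A sketch assuming ioredis, the sidecar from Pattern 3 as L1, and the in-cluster Redis Service from Pattern 2 as L2; the thin adapters exist only because ioredis takes the TTL as positional arguments rather than an { ex } options object:

// Sketch: wire the two tiers together. Hostnames come from Patterns 2 and 3;
// the adapters map ioredis's positional EX argument to the { ex } option the class expects.
const Redis = require("ioredis");

function adapt(client) {
  return {
    get: (key) => client.get(key),
    set: (key, value, opts) => client.set(key, value, "EX", opts.ex),
  };
}

async function main() {
  const l1 = adapt(new Redis({ host: "127.0.0.1", port: 6379 }));                       // cache-sidecar in the same pod
  const l2 = adapt(new Redis({ host: "redis.default.svc.cluster.local", port: 6379 })); // shared StatefulSet Redis

  const cache = new TwoTierCache(l1, l2);

  await cache.set("user:42:profile", JSON.stringify({ name: "Ada" }), 300);
  console.log(await cache.get("user:42:profile")); // subsequent reads are served from L1
}

main().catch(console.error);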
Pattern 5: Redis Operator
Use Kubernetes operators for production-grade Redis:
# Illustrative operator custom resource (e.g., Spotahome or Redis Enterprise);
# the exact apiVersion, kind, and field names vary by operator, so check its CRD reference.
apiVersion: databases.spotahome.com/v1
kind: RedisCluster
metadata:
  name: production-cache
spec:
  numberOfMasters: 3
  replicasPerMaster: 1
  image: redis:7-alpine
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 1Gi
  storage:
    persistentVolumeClaim:
      metadata:
        name: redis-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
Operators handle: automatic failover, scaling, backups, and upgrades.
Service Discovery
Kubernetes services provide stable endpoints:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None # Headless for StatefulSet
  selector:
    app: redis
  ports:
  - port: 6379
---
# Connect using DNS
# redis-0.redis.default.svc.cluster.local
# redis-1.redis.default.svc.cluster.local
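From the application's side those DNS names are just stable hostnames, one per replica. A sketch, assuming ioredis and the 3-replica StatefulSet above in the default namespace, of addressing pods directly with trivial client-side sharding (illustrative only; a real deployment would use Redis Cluster or an operator-managed topology):

// Sketch: each StatefulSet pod gets a stable DNS name via the headless Service,
// so clients can address replicas directly. Assumes ioredis and 3 replicas in "default".
const Redis = require("ioredis");

const replicas = [0, 1, 2].map(
  (i) => new Redis({ host: `redis-${i}.redis.default.svc.cluster.local`, port: 6379 })
);

// Naive hash-based shard picker -- for illustration only.
function shardFor(key) {
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return replicas[hash % replicas.length];
}

async function demo() {
  await shardFor("user:42").set("user:42", "cached-value", "EX", 300);
  console.log(await shardFor("user:42").get("user:42"));
}

demo().catch(console.error);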
Resource Planning
Size your cache pods appropriately (a worked sizing example follows the list):
- Memory: Account for data + overhead (Redis uses ~1.2x data size)
- CPU: Redis is single-threaded; one core handles 100K+ ops/sec
- Storage: If persisting, plan for 2x data size for rewrites
- Network: Consider network policies for security
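Putting those rules of thumb into numbers for a hypothetical 4 GiB dataset (the dataset size and the 1.5x limit headroom are assumptions for the example; the 1.2x and 2x multipliers are the estimates from the list):

// Back-of-the-envelope sizing from the rules of thumb above. Estimates, not guarantees.
const dataGiB = 4;                             // assumed working set for this example

const memoryRequestGiB = dataGiB * 1.2;        // data + Redis overhead       -> 4.8 GiB
const memoryLimitGiB = memoryRequestGiB * 1.5; // assumed headroom for spikes -> 7.2 GiB
const storageGiB = dataGiB * 2;                // room for RDB/AOF rewrites   -> 8 GiB

console.log({ memoryRequestGiB, memoryLimitGiB, storageGiB });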
Kubernetes-native caching
Cachee.ai deploys as a Helm chart with auto-scaling, monitoring, and multi-cluster sync built in.
Start Free Trial
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against pre-warmed in-memory cache. This is the floor — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over same-AZ network when L1 misses cascade through.
The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
Where Redis Fits and Where It Doesn't
This is the honest comparison. Redis is the right tool for plenty of workloads — pretending otherwise wastes your time.
- Redis wins: Rich data structures (sorted sets, streams, geospatial), Lua scripting for atomic multi-key operations, mature pub/sub, decade-plus of client library maturity, ZADD/ZRANGE/XADD primitives that no key-value store can match.
- Cachee wins: Pure key-value reads on the hot path, in-process L0 with no network round-trip, lower per-entry memory overhead, lock-free shard concurrency that scales linearly with worker count, and cost: no per-instance cache tier when the working set fits in your application's memory budget.
Most production deployments run both. Redis stays for the workloads it was designed for. Cachee sits in front of Redis or ElastiCache as an L1 hot tier that absorbs 95%+ of read traffic before it ever hits the network. The two compose cleanly because Cachee speaks the RESP protocol — your existing Redis clients work with zero code changes.
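Because the claim is protocol compatibility rather than a new client API, the composition in practice amounts to repointing an existing client at the L1 endpoint, usually via configuration. A hypothetical sketch with ioredis; the host and port values here are assumptions for illustration, not vendor defaults:

// Hypothetical sketch of "zero code changes": the same RESP client, pointed at an
// in-pod L1 endpoint instead of the remote Redis/ElastiCache host. Values are assumed.
const Redis = require("ioredis");

const cache = new Redis({
  host: process.env.CACHE_HOST || "127.0.0.1", // L1 tier running in or alongside the pod
  port: Number(process.env.CACHE_PORT || 6379),
});

// Application code stays the same -- plain GET/SET commands over RESP.
async function readProfile(userId) {
  return cache.get(`user:${userId}:profile`);
}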
Average Latency Hides The Real Story
Average latency is the most misleading number in cache benchmarking. The percentile distribution is what actually breaks production systems. Tail latency — the slowest 0.1% of requests — is where users notice the lag and where SLAs get violated.
| Percentile | Network Redis (same-AZ) | In-process L0 |
|------------|-------------------------|---------------|
| p50 | ~85 microseconds | 28.9 nanoseconds |
| p95 | ~140 microseconds | ~45 nanoseconds |
| p99 | ~280 microseconds | ~80 nanoseconds |
| p99.9 | ~1.2 milliseconds | ~150 nanoseconds |
The p99.9 spike on networked Redis isn't a bug — it's the cost of running a single-threaded event loop that occasionally blocks on background tasks like RDB snapshots, AOF rewrites, and expired-key sweeps. Cachee's L0 stays inside a few hundred nanoseconds because the hot-path read is a lock-free shard lookup with no background work scheduled on the same thread.
If your application is sensitive to tail latency — payments, real-time bidding, fraud detection, trading — the p99.9 number is the one to optimize against. Average latency improvements that don't move the tail are vanity metrics.
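If you want to check these numbers against your own stack, record per-request latencies and compute the percentiles directly instead of averaging. A small sketch, where cache is any client exposing an async get():

// Sketch: measure your own cache-read latency distribution rather than trusting averages.
// Works against any client exposing an async get(); results depend entirely on your setup.
async function measureLatency(cache, key, samples = 100000) {
  const latenciesNs = new Array(samples);
  for (let i = 0; i < samples; i++) {
    const start = process.hrtime.bigint();
    await cache.get(key);
    latenciesNs[i] = Number(process.hrtime.bigint() - start);
  }
  latenciesNs.sort((a, b) => a - b);
  const pct = (p) => latenciesNs[Math.min(samples - 1, Math.floor((p / 100) * samples))];
  return { p50: pct(50), p95: pct(95), p99: pct(99), p999: pct(99.9) };
}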
What This Actually Costs
Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
Compounded over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. The bigger savings usually come from eliminating cross-AZ data transfer charges, which Redis-as-a-service architectures incur on every read that crosses an availability zone.
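The arithmetic behind those numbers, using the scenario's own inputs (every figure here is that example's assumption, not a quote for your workload):

// Back-of-the-envelope for the scenario above; all inputs are that example's assumptions.
const elastiCachePerMonth = 480;       // cache.r7g.xlarge primary + read replica
const l2FallbackPerMonth = [120, 180]; // smaller dedicated tier after migrating the hot path

const annualSavings = l2FallbackPerMonth.map((l2) => (elastiCachePerMonth - l2) * 12);
console.log(annualSavings);            // [4320, 3600] -- in line with the $3,600-$4,500/year above

// Cross-AZ transfer ($50-150/month in the scenario) drops out on top of this,
// since in-process reads never leave the node.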
Three Pitfalls That Burn Teams
Three things consistently bite teams during the first month of running an in-process cache alongside or instead of a network cache. We've seen each of these in production. Here's how to avoid them.
- Hot working set sizing. The L0 hot tier is fast because it lives in your application process. If your hot working set is 50 GB and your heap budget is 8 GB, you can't put all of it in L0. Measure your actual hot key distribution before deciding what fits in-process versus what needs an L1 sidecar or an L2 fallback. The Cachee admission filter will protect you from polluting the cache, but it can't conjure RAM that doesn't exist.
- TTL semantics drift. Redis processes TTL expirations lazily on access plus a background sweeper. Cachee processes them in the same lock-free read path via monotonic timestamp comparison. Behavior is identical for the vast majority of workloads, but if you depend on Redis-specific behaviors like OBJECT IDLETIME tracking or precise keyspace expiration notifications, validate the semantics for your specific use case before flipping production traffic over.
- Eviction policy assumptions. Most Redis cache deployments run allkeys-lru (Redis itself defaults to noeviction). Cachee uses CacheeLFU, which makes different admission decisions on workloads with skewed access frequency distributions. Most teams see hit rate improvements after migration, but if you've spent years tuning your application around LRU behavior (choosing TTLs based on how LRU evicts cold data), expect a brief transition period where you re-tune TTLs and access patterns to match the new admission policy.