
Cache Consistency in Microservices: Eventual vs Strong

December 21, 2025 • 7 min read • Distributed Systems

Cache consistency is one of the hardest problems in microservices architectures. Update data in one service, and five other services are suddenly serving stale caches. Choose strong consistency, and you lose much of the performance benefit of caching. This guide helps you navigate the consistency spectrum and choose the right approach for each use case.

The Cache Consistency Problem

Consider an e-commerce system with separate microservices:

// Product Service updates price
await db.products.update(
  { id: 123 },
  { price: 49.99 }
);

// Problem: These caches are now stale:
// - Product Service cache
// - Cart Service cache (has old price)
// - Recommendation Service cache (has old price)
// - Search Service cache (has old price)

// How do they know to invalidate?

This is the classic distributed cache invalidation problem. During a network partition the CAP theorem forces a choice between consistency and availability, and even in normal operation caching trades freshness for latency. Most caching systems favor availability and speed, which is where the consistency challenges come from.

Consistency Models Explained

Strong Consistency

Every read returns the most recent write. All services see the same data at the same time.

Eventual Consistency

Reads may return stale data temporarily, but all replicas converge to the same state eventually.

Bounded Staleness

Stale data is allowed, but only within defined limits (time or version bounds).
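
None of the patterns below implements bounded staleness directly, so here is a minimal sketch of the idea, assuming entries are cached together with a cachedAt timestamp; the cache and productServiceAPI clients are the same hypothetical helpers used throughout this post.

// Sketch: bounded-staleness read. Cached data is served only while it is
// younger than MAX_STALENESS_MS; otherwise we go back to the source.
const MAX_STALENESS_MS = 30000;  // accept data at most 30 seconds old

async function getProductBounded(id) {
  const entry = await cache.get(`product:${id}`);

  if (entry && Date.now() - entry.cachedAt < MAX_STALENESS_MS) {
    return entry.data;  // within the staleness bound
  }

  // Too stale or missing: fetch fresh data and re-cache it with a timestamp
  const data = await productServiceAPI.getProduct(id);
  await cache.set(`product:${id}`, { data, cachedAt: Date.now() }, { ttl: 300 });
  return data;
}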

Pattern 1: Time-Based Invalidation (Eventual)

The simplest approach: cache with short TTLs and accept brief inconsistency.

// Product Service
async function updateProduct(id, data) {
  await db.products.update(id, data);

  // Invalidate own cache
  await cache.delete(`product:${id}`);

  // Other services will get fresh data after TTL expires
  // Max staleness = TTL (e.g., 60 seconds)
}

// Cart Service (different microservice)
async function getProduct(id) {
  let product = await cache.get(`product:${id}`);

  if (!product) {
    product = await productServiceAPI.getProduct(id);
    // Cache for 60 seconds
    await cache.set(`product:${id}`, product, { ttl: 60 });
  }

  return product;  // May be up to 60s stale
}

When to Use TTL-Based Invalidation

TTL-based invalidation fits non-critical data that can tolerate staleness up to the TTL, such as product descriptions or recommendations, and teams that want the simplest operational model: there is no event bus or invalidation protocol to run, and maximum staleness is exactly the TTL you pick.

Pattern 2: Event-Driven Invalidation (Eventual)

Publish cache invalidation events when data changes. Other services subscribe and invalidate their caches.

// Product Service
async function updateProduct(id, data) {
  await db.products.update(id, data);

  // Invalidate local cache
  await cache.delete(`product:${id}`);

  // Publish invalidation event
  await eventBus.publish('product.updated', {
    productId: id,
    timestamp: Date.now(),
    fields: ['price', 'stock']
  });
}

// Cart Service (subscriber)
eventBus.subscribe('product.updated', async (event) => {
  // Invalidate cached product data
  await cache.delete(`product:${event.productId}`);
  // Wildcard keys need pattern-based invalidation rather than a plain delete
  await cache.invalidatePattern(`cart:*:product:${event.productId}`);

  console.log(`Invalidated cache for product ${event.productId}`);
});

// Recommendation Service (subscriber)
eventBus.subscribe('product.updated', async (event) => {
  // Invalidate recommendation caches that include this product
  await cache.invalidatePattern(`recommendations:*:${event.productId}`);
});

Event-Driven Invalidation Benefits

Staleness shrinks from the full TTL window to the time it takes an event to propagate, typically milliseconds rather than a minute, and services stay loosely coupled: the Product Service publishes one event without knowing which services cache its data.

Challenges

The event bus becomes a critical dependency: if an invalidation event is lost, delayed, or delivered out of order, a subscriber keeps serving stale data with nothing to correct it. A common mitigation is to pair events with a modest fallback TTL so a lost event causes bounded rather than unbounded staleness, as in the sketch below.
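
A minimal sketch of that combination, reusing the hypothetical cache, eventBus, and productServiceAPI clients from the examples above and assuming the event bus redelivers when a handler throws (behavior that varies by broker):

// Sketch: event-driven invalidation with a TTL safety net.
// A lost event now costs at most the fallback TTL of staleness.

// Reads always set a fallback TTL, even though events usually invalidate first
async function getProductWithFallback(id) {
  let product = await cache.get(`product:${id}`);
  if (!product) {
    product = await productServiceAPI.getProduct(id);
    await cache.set(`product:${id}`, product, { ttl: 300 });  // safety net
  }
  return product;
}

// Deleting a key is idempotent, so duplicate deliveries are harmless;
// rethrowing lets the bus redeliver on transient failures (broker-dependent)
eventBus.subscribe('product.updated', async (event) => {
  try {
    await cache.delete(`product:${event.productId}`);
  } catch (err) {
    console.error(`Invalidation failed for product ${event.productId}`, err);
    throw err;
  }
});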

Pattern 3: Write-Through Cache (Strong)

Updates go through a centralized cache layer that maintains consistency.

// Centralized Cache Service
class CacheService {
  async get(key) {
    const cached = await redis.get(key);
    if (cached) return JSON.parse(cached);

    // Cache miss: fetch from the database and populate only the cache
    // (calling this.set here would needlessly re-write the database)
    const data = await database.get(key);
    await redis.setex(key, 3600, JSON.stringify(data));
    return data;
  }

  async set(key, value, ttl = 3600) {
    // Write to database first
    await database.set(key, value);

    // Then update cache
    await redis.setex(key, ttl, JSON.stringify(value));

    // All readers get consistent data
  }

  async delete(key) {
    await database.delete(key);
    await redis.del(key);
  }
}

// All services use centralized cache
const cache = new CacheService();

// Product Service
await cache.set('product:123', { price: 49.99 });

// Cart Service reads immediately
const product = await cache.get('product:123');
// Guaranteed to see updated price

Trade-Offs

Every write pays for both the database and the cache update, and the centralized cache service becomes a shared dependency and potential bottleneck for every team. You gain read-your-writes behavior across services, but you give up much of the latency benefit and independence that per-service caches provide.

Pattern 4: Version-Based Consistency

Include a version number in each cache key so readers can never pick up data from an outdated version.

// Product Service maintains version
async function updateProduct(id, data) {
  const version = await db.products.incrementVersion(id);

  await db.products.update(id, data);

  // Cache with version in key
  await cache.set(`product:${id}:v${version}`, data, { ttl: 3600 });

  // Publish new version
  await eventBus.publish('product.updated', {
    productId: id,
    version: version
  });
}

// Cart Service tracks the latest known version per product
const productVersions = new Map();

eventBus.subscribe('product.updated', (event) => {
  productVersions.set(event.productId, event.version);
});

async function getProduct(id) {
  // Always fetch with the latest known version for this product
  const version = productVersions.get(id) || 1;
  const key = `product:${id}:v${version}`;
  let product = await cache.get(key);

  if (!product) {
    product = await productServiceAPI.getProduct(id);
    await cache.set(key, product, { ttl: 3600 });
  }

  return product;
}

Pattern 5: Read Repair

Detect stale data during reads and update automatically.

async function getProduct(id) {
  const cached = await cache.get(`product:${id}`);

  if (cached) {
    // Background validation: is the cached copy stale?
    validateCache(id, cached.updatedAt)
      .then(async (isStale) => {
        if (isStale) {
          // Repair the cache in the background
          const fresh = await productServiceAPI.getProduct(id);
          await cache.set(`product:${id}`, fresh, { ttl: 300 });
        }
      })
      .catch((err) => console.error(`Read repair failed for product ${id}`, err));


    return cached;  // Return cached immediately
  }

  // Cache miss
  const product = await productServiceAPI.getProduct(id);
  await cache.set(`product:${id}`, product, { ttl: 300 });
  return product;
}

async function validateCache(id, cachedTimestamp) {
  // Check if source data is newer
  const lastModified = await productServiceAPI.getLastModified(id);
  return lastModified > cachedTimestamp;
}

Pattern 6: Hybrid Consistency Levels

Use different consistency models for different data types within the same system.

const CONSISTENCY_POLICIES = {
  'product.price': 'eventual',        // Can be briefly stale
  'product.description': 'eventual',   // Can be briefly stale
  'inventory.count': 'strong',        // Must be accurate
  'user.balance': 'strong',           // Financial data
  'user.profile': 'eventual',         // Can be stale
};

async function getCacheConsistency(dataType) {
  return CONSISTENCY_POLICIES[dataType] || 'eventual';
}

async function getData(type, id) {
  const consistency = await getCacheConsistency(type);

  if (consistency === 'strong') {
    // Always read from source with cache-aside
    return await getWithStrongConsistency(type, id);
  } else {
    // Use cached data with TTL
    return await getWithEventualConsistency(type, id);
  }
}
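
The getWithStrongConsistency and getWithEventualConsistency helpers are left undefined above; here is a minimal sketch of what they might look like, reusing the hypothetical database and cache clients from the earlier examples (the key format is illustrative):

// Hypothetical implementations of the two consistency paths above.

async function getWithStrongConsistency(type, id) {
  // Read from the source of truth, then refresh the cache so that
  // subsequent eventual reads pick up the same value
  const data = await database.get(`${type}:${id}`);
  await cache.set(`${type}:${id}`, data, { ttl: 60 });
  return data;
}

async function getWithEventualConsistency(type, id) {
  // Standard cache-aside read; the TTL bounds how stale the data can get
  let data = await cache.get(`${type}:${id}`);
  if (!data) {
    data = await database.get(`${type}:${id}`);
    await cache.set(`${type}:${id}`, data, { ttl: 300 });
  }
  return data;
}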

Monitoring Consistency

Track consistency metrics across services:

// Consistency lag metric
async function measureConsistencyLag() {
  const sourceData = await database.get('product:123');
  const cachedData = await cache.get('product:123');

  if (cachedData) {
    const lag = sourceData.updatedAt - cachedData.updatedAt;
    metrics.recordConsistencyLag('product', lag);

    if (lag > 5000) {  // >5 seconds stale
      logger.warn(`High consistency lag: ${lag}ms for product:123`);
    }
  }
}

// Stale read detection
async function detectStaleReads() {
  // Track version mismatches
  metrics.increment('cache.stale_reads', {
    service: 'cart',
    resource: 'product'
  });
}
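
The stub above only increments a counter. One way to actually drive it is to sample a small fraction of reads and compare the cached copy against the source of truth; SAMPLE_RATE and the API client here are illustrative assumptions consistent with the earlier examples.

// Hypothetical sampled stale-read check: on ~1% of reads, compare the
// cached copy against the source and count mismatches as stale reads.
const SAMPLE_RATE = 0.01;

async function getProductWithStaleTracking(id) {
  const cached = await cache.get(`product:${id}`);
  if (!cached) return productServiceAPI.getProduct(id);

  if (Math.random() < SAMPLE_RATE) {
    productServiceAPI.getProduct(id)
      .then((fresh) => {
        if (fresh.updatedAt > cached.updatedAt) {
          metrics.increment('cache.stale_reads', {
            service: 'cart',
            resource: 'product'
          });
        }
      })
      .catch(() => { /* sampling only; ignore errors */ });
  }

  return cached;
}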

Decision Framework

Use Strong Consistency When:

The data is financial or inventory-critical, where acting on a stale value causes real harm: account balances, stock counts, anything a user can double-spend or oversell.

Use Eventual Consistency When:

Brief staleness is harmless or invisible to users: product descriptions, profiles, recommendations, search results, and other read-heavy data where you want maximum cache hit rates.

Use Hybrid Approach When:

The same system holds both kinds of data, which is almost always the case. Classify each data type, as in the policy map from Pattern 6, and pay the cost of strong consistency only where it is needed.

Conclusion

Cache consistency in microservices is about choosing the right trade-off for each use case. TTL-based invalidation works for most non-critical data. Event-driven invalidation reduces staleness while keeping services loosely coupled. Write-through caches provide strong consistency at the cost of performance. Version-based keys keep readers from acting on outdated data.

The best architectures use different consistency models for different data types: strong consistency for critical data, eventual consistency for everything else. Monitor consistency lag continuously and adjust TTLs and invalidation strategies based on observed behavior.

Automatic Consistency Management

Cachee.ai intelligently manages cache consistency across microservices with ML-powered invalidation timing.
