When any Cachee instance writes or deletes a key, every other instance invalidates its local copy instantly. Zero code. Zero pub/sub wiring. Zero stale data across services.
Microservices cache aggressively because they have to. Every service keeps a local copy of the data it needs to avoid round-tripping to a shared database on every request. But the moment one service updates a record, every other service holding a cached copy of that record is silently wrong.
The pattern is always the same. Service A caches a user profile. Service B processes a name change and writes the update to the database. Service A has no idea. It keeps serving the old name for minutes, sometimes hours, until the TTL expires. Users see inconsistent data depending on which service handles their next request.
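The failure mode above can be sketched in a few lines. Everything here is an illustrative stand-in (`db`, `LocalCache` are invented names, not any real API): two services read through to a shared store, and one keeps serving a stale value after the other writes.

```typescript
// Hypothetical sketch of the stale-cache pattern. `db` stands in for the
// shared database; LocalCache is a naive read-through cache with no invalidation.
const db = new Map<string, string>([["user:42:name", "Alice"]]);

class LocalCache {
  private store = new Map<string, string>();
  get(key: string): string | undefined {
    // Read-through on miss; once cached, the value is never refreshed.
    if (!this.store.has(key)) this.store.set(key, db.get(key)!);
    return this.store.get(key);
  }
}

const serviceA = new LocalCache();
serviceA.get("user:42:name");      // Service A caches "Alice"

db.set("user:42:name", "Alicia");  // Service B writes the name change

// Service A has no idea and keeps serving the old name:
console.log(serviceA.get("user:42:name")); // "Alice"
```

Nothing in this sketch ever tells `serviceA` that the underlying row changed, which is exactly the gap coherence closes.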
Every team solves this differently. One team wires up Redis pub/sub. Another team builds a webhook listener. A third team polls the database every 30 seconds. None of these approaches talk to each other. The result is a patchwork of invalidation logic scattered across services, impossible to audit, and guaranteed to have gaps.
Every Cachee instance in a namespace is automatically connected to a coherence channel. When a write or delete happens on any instance, a lightweight invalidation event propagates to all other instances. The affected key is evicted from every L1 cache before the next read can return stale data.
The coherence channel is not a separate service to deploy. It is built into every Cachee instance. When you connect multiple instances to the same namespace, coherence is active by default. There is no flag to enable, no topic to configure, no subscription to manage. The protocol handles ordering, deduplication, and failure recovery internally. Learn more about the underlying caching architecture.
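As a mental model of the behavior described above (not the real Cachee internals), you can think of every instance in a namespace holding a peer list, with each write fanning an eviction out to every other peer. Class and method names here are invented for illustration:

```typescript
// Illustrative model only: instances that share a namespace evict a key
// everywhere whenever any one instance writes it.
class Namespace {
  private instances: CoherentCache[] = [];
  join(c: CoherentCache) { this.instances.push(c); }
  broadcast(key: string, from: CoherentCache) {
    // Invalidation event: every peer except the writer evicts the key.
    for (const c of this.instances) if (c !== from) c.evict(key);
  }
}

class CoherentCache {
  private l1 = new Map<string, string>();
  constructor(private ns: Namespace) { ns.join(this); } // coherence is on by default
  set(key: string, value: string) {
    this.l1.set(key, value);
    this.ns.broadcast(key, this);
  }
  get(key: string) { return this.l1.get(key); }
  evict(key: string) { this.l1.delete(key); }
}

const ns = new Namespace();
const a = new CoherentCache(ns);
const b = new CoherentCache(ns);

a.set("user:42", "Alice");
b.set("user:42", "Alicia");    // write on b evicts the key from a's L1
console.log(a.get("user:42")); // undefined: a must re-fetch, never serves stale
```

The point of the model: the writer never needs to know who is caching the key; joining the namespace is the only wiring.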
Every team that runs microservices with local caching eventually builds some form of invalidation. Here is what that typically looks like compared to Cachee coherence.
| Aspect | Manual Pub/Sub Invalidation | Cachee Coherence |
|---|---|---|
| Setup Time | Weeks per service pair | Zero (built-in) |
| Infrastructure | Redis pub/sub, Kafka, or custom webhooks | None additional |
| Code Changes | Publisher + subscriber per service | Zero application code |
| Deduplication | Manual (easy to miss) | Automatic, protocol-level |
| Failure Handling | Custom retry logic per service | Built-in with TTL fallback |
| Cross-Team Coordination | Every team agrees on topics, schemas, retry policy | Share a namespace, done |
| Propagation Latency | 10-100ms (depends on broker) | <1ms |
| Consistency Gap | Varies (message queue delay + consumer lag) | Near-zero (sub-ms eviction) |
Traditional invalidation patterns require a shared message broker that every service connects to. This introduces a single point of failure, adds network latency to every invalidation, and requires every team to agree on topic naming, message formats, and retry semantics. Cachee eliminates this entirely. The coherence channel is peer-to-peer within the Cachee mesh. No central broker, no shared Redis instance, no Kafka cluster to maintain.
Some teams skip message brokers and use HTTP webhooks or database polling instead. Webhooks are brittle: they require endpoint registration, authentication, retry logic, and dead-letter handling for each service pair. Polling is wasteful: it burns CPU and database connections checking for changes that may not exist. Cachee coherence is push-based and event-driven, with zero wasted work and zero infrastructure to configure.
See how Cachee compares to other caching approaches in our detailed comparison.
Cache coherence is not a theoretical improvement. These are the concrete scenarios where stale data causes user-visible bugs, and coherence eliminates them.
There is no configuration step. If two Cachee instances share a namespace, they are coherent. Write from one, the other knows.
When any instance calls `cache.del('key')`, the coherence channel broadcasts the deletion. Every instance evicts the key from L1. No ghost data, no phantom reads after deletion.

The coherence event is emitted asynchronously after the write completes. Your write latency is unchanged. The invalidation propagates in the background and completes in under a millisecond, but it does not block the writing service.
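The asynchronous ordering matters: the write lands first, and the eviction broadcast runs off the hot path. A minimal sketch of those assumed semantics, using a microtask to stand in for the background propagation (all names here are illustrative):

```typescript
// Sketch: the write completes synchronously; peer eviction is deferred.
type Peer = { evict(key: string): void };

function writeAndBroadcast(l1: Map<string, string>, peers: Peer[], key: string, value: string) {
  l1.set(key, value);           // write completes immediately, latency unchanged
  queueMicrotask(() => {        // invalidation propagates in the background
    for (const p of peers) p.evict(key);
  });
}

const peerL1 = new Map([["user:42", "Alice"]]);
const peer: Peer = { evict: (k) => peerL1.delete(k) };
const l1 = new Map<string, string>();

writeAndBroadcast(l1, [peer], "user:42", "Alicia");
console.log(peerL1.has("user:42")); // true: the write path did not wait for eviction
queueMicrotask(() => console.log(peerL1.has("user:42"))); // false: peer evicted shortly after
```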
If an instance is temporarily unreachable, it cannot receive coherence events. In this case, the instance falls back to TTL-based expiration as a safety net. Once connectivity is restored, a key-state reconciliation runs automatically. No data is permanently stale.
Yes. If a key is local-only and should not trigger cross-service invalidation, you can set it with the `local: true` option. Local keys live only in the writing instance's L1 cache and are invisible to the coherence channel.
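The `local: true` shape comes from the text above; the two-instance wiring below is an assumption added for demonstration. A local write skips the broadcast, so peers keep whatever they have cached:

```typescript
// Illustrative sketch of local-only keys (not the real Cachee implementation).
type SetOptions = { local?: boolean };

class Cache {
  l1 = new Map<string, string>();
  peers: Cache[] = [];
  set(key: string, value: string, opts: SetOptions = {}) {
    this.l1.set(key, value);
    if (!opts.local) {
      for (const p of this.peers) p.l1.delete(key); // coherence broadcast
    }
    // local: true -> the key never reaches the coherence channel
  }
  get(key: string) { return this.l1.get(key); }
}

const a = new Cache();
const b = new Cache();
a.peers.push(b);
b.peers.push(a);

b.set("profile", "v1");
a.set("profile", "draft", { local: true }); // no broadcast: b keeps "v1"
console.log(b.get("profile"));              // "v1"

a.set("profile", "v2");                     // normal write: b's copy is evicted
console.log(b.get("profile"));              // undefined
```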
The coherence channel uses a fan-out model optimized for large clusters. Invalidation events are lightweight (key hash + sequence number, no payload). Even at 500+ instances, propagation stays under 1ms. The protocol is bandwidth-efficient because it sends eviction signals, not the data itself.
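The "key hash + sequence number, no payload" framing above suggests why events stay cheap and why deduplication can be protocol-level. This sketch assumes plausible field names and a sender-plus-sequence dedup rule; none of it is the actual wire format:

```typescript
// Assumed event shape: only a short key digest and a sequence number
// cross the wire, never the cached value itself.
import { createHash } from "node:crypto";

interface InvalidationEvent {
  keyHash: string; // which key to evict, as a short digest
  seq: number;     // per-sender sequence number for ordering and dedup
}

let nextSeq = 0;
function makeEvent(key: string): InvalidationEvent {
  const keyHash = createHash("sha256").update(key).digest("hex").slice(0, 16);
  return { keyHash, seq: ++nextSeq };
}

// Receivers can drop redelivered events by remembering (sender, seq) pairs:
const seen = new Set<string>();
function shouldApply(sender: string, e: InvalidationEvent): boolean {
  const id = `${sender}:${e.seq}`;
  if (seen.has(id)) return false; // duplicate delivery: drop
  seen.add(id);
  return true;
}

const evt = makeEvent("user:42:profile");
console.log(shouldApply("instance-a", evt)); // true: first delivery applies
console.log(shouldApply("instance-a", evt)); // false: duplicate is dropped
```

Because the event carries no value, its size is constant no matter how large the cached object is, which is what keeps fan-out cheap at hundreds of instances.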
Yes. Cross-region coherence uses the same protocol with region-aware routing. Propagation latency between regions depends on network distance but typically stays under 50ms for cross-continental deployments. Within a region, sub-ms propagation is maintained.
Every microservice you add is another invalidation gap waiting to happen. Cachee coherence closes all of them at once. Start with the free tier and see cross-service sync working in under five minutes.