New Primitive

Your Cache Finally Understands
Dependencies.

Declare that key A depends on keys B, C, D. When any source changes, every derived key invalidates automatically. No application code. No stale composites. No more hand-coded invalidation chains.

Zero stale composites
Transitive invalidation
Zero code changes
DAG-based
The Problem

The Stale Composite Problem

Every production cache contains derived keys that aggregate, transform, or compose data from multiple source keys. And every one of them is a staleness time bomb.

🚫
Derived Keys Are Never Cached
Teams skip caching dashboards, reports, and aggregated views because invalidation is too hard. The most expensive computations in your system — the ones that would benefit most from caching — never get cached at all. You pay for the recomputation on every single request.
Most expensive queries = zero cache hits
💥
Application-Code Invalidation Breaks at Scale
Every writing service must know every derived key it affects. One team forgets to add an invalidation call, and stale data ships to production. The invalidation graph lives in developers' heads, scattered across dozens of microservices, impossible to audit or verify.
One missed call = stale data in production
⏱️
TTL Is a Lie
Short TTL means a low hit rate — you're recomputing expensive aggregations every few seconds for no reason. Long TTL means a guaranteed stale window where users see outdated composites. There is no correct TTL for a derived key without dependency awareness. The problem is structural.
No correct answer without dependency graphs
How It Works

Declare Dependencies. The Cache Handles the Rest.

You tell the cache which keys a derived value depends on. Cachee builds a directed acyclic graph. When any source key is invalidated, every downstream dependent is evicted automatically. Transitively. Atomically.

# Set a derived key with explicit dependencies
SET user:123:dashboard <value> DEPENDS_ON user:123:profile user:123:orders billing:plan:enterprise

# When ANY source key changes, the dashboard is automatically evicted:
# SET user:123:profile "new data"    --> user:123:dashboard evicted
# DEL user:123:orders                --> user:123:dashboard evicted
# SET billing:plan:enterprise "new"  --> user:123:dashboard evicted

# Transitive dependencies work too
SET user:123:weekly-report <value> DEPENDS_ON user:123:dashboard analytics:week:12

# Now invalidating user:123:profile cascades through:
# user:123:profile --> user:123:dashboard --> user:123:weekly-report
# The entire chain evicts atomically. Zero application code.
Dependency Graph — Cascade Invalidation

user:123:profile    user:123:orders    billing:plan:enterprise    (source keys)
        ↓                  ↓                      ↓
                 user:123:dashboard                               (derived key, auto-evicted)
                          ↓
                          ↓          analytics:week:12            (source key)
                          ↓                  ↓
                 user:123:weekly-report                           (derived key, cascade-evicted)

Cascade Behavior: if A depends on B, and B depends on C, then invalidating C evicts B and A.
The cache builds a directed acyclic graph. Invalidation propagates transitively through every dependent.
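The cascade semantics above can be modeled with a short, illustrative sketch — a toy in-memory model, not Cachee's actual implementation: a value store plus a reverse index from each source key to its dependents, walked transitively on invalidation.

```python
class DependencyCache:
    """Toy model of DEPENDS_ON semantics: a value store plus a
    reverse index (source key -> dependent keys) walked on invalidation."""

    def __init__(self):
        self.values = {}      # key -> cached value
        self.dependents = {}  # source key -> set of keys that depend on it

    def set(self, key, value, depends_on=()):
        self.values[key] = value
        for source in depends_on:
            self.dependents.setdefault(source, set()).add(key)

    def get(self, key):
        return self.values.get(key)

    def invalidate(self, key):
        """Evict `key` and, transitively, every key that depends on it."""
        stack = [key]
        while stack:
            current = stack.pop()
            self.values.pop(current, None)
            stack.extend(self.dependents.pop(current, ()))


cache = DependencyCache()
cache.set("user:123:profile", "p")
cache.set("user:123:orders", "o")
cache.set("user:123:dashboard", "d",
          depends_on=["user:123:profile", "user:123:orders"])
cache.set("user:123:weekly-report", "w",
          depends_on=["user:123:dashboard", "analytics:week:12"])

cache.invalidate("user:123:profile")
# The dashboard and the weekly report are gone; orders is untouched.
assert cache.get("user:123:dashboard") is None
assert cache.get("user:123:weekly-report") is None
assert cache.get("user:123:orders") == "o"
```

Note that the sketch only ever walks forward from the invalidated key, which is why unrelated source keys (like user:123:orders here) are never disturbed.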

Declarative, Not Imperative

You declare dependencies at write time using the DEPENDS_ON modifier. The cache stores the dependency edges in an internal adjacency structure. When any source key is invalidated — by CDC, by a direct SET or DEL, by the coherence protocol, or by TTL expiry — the graph is walked and every downstream key is evicted.

This means your application code never needs to know about dependency chains. The cache itself is the system of record for which keys depend on which other keys. Teams can change, add, or remove dependencies without coordinating across services.

Zero Read-Path Overhead

The dependency graph is only consulted during invalidation events, never during reads. A GET for a derived key has identical latency to a GET for any other key. The graph structure is maintained as a lightweight adjacency list alongside the cache entries, adding negligible memory overhead.

The only cost is at SET time (registering dependency edges) and at invalidation time (walking the graph to find dependents). Both operations complete in sub-microsecond time for typical dependency chains of 1–10 levels deep.
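One way to see this cost split concretely: in a toy model (again, an illustrative sketch rather than Cachee's implementation), every touch of the dependency index can be counted, and reads never increment the counter.

```python
class InstrumentedCache:
    """Sketch of the cost model: the dependency index is touched only
    on SET (edge registration) and on invalidation (graph walk) -- never on GET."""

    def __init__(self):
        self.values = {}
        self.dependents = {}
        self.graph_ops = 0  # counts every touch of the dependency index

    def set(self, key, value, depends_on=()):
        self.values[key] = value
        for source in depends_on:
            self.dependents.setdefault(source, set()).add(key)
            self.graph_ops += 1  # cost paid once, at write time

    def get(self, key):
        # Plain hash lookup: the dependency graph is never consulted.
        return self.values.get(key)

    def invalidate(self, key):
        stack = [key]
        while stack:
            current = stack.pop()
            self.values.pop(current, None)
            self.graph_ops += 1  # cost paid once per evicted key
            stack.extend(self.dependents.pop(current, ()))


c = InstrumentedCache()
c.set("a", 1)
c.set("b", 2, depends_on=["a"])
ops_before = c.graph_ops
for _ in range(1000):
    c.get("b")  # reads leave the graph untouched
assert c.graph_ops == ops_before
```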

Composition

Composes With Everything

The dependency graph is not a standalone feature. It is a multiplier on every other Cachee primitive. Each composition creates behavior that no other caching system can replicate.

💾
CDC + Dependency Graph
A database row changes. CDC auto-invalidation evicts the base cache key. The dependency graph then cascades that eviction to every derived key that depends on it. A single row update in your users table can automatically invalidate dashboards, reports, feeds, and aggregated views — all without a line of application code.
DB row change → full cascade invalidation
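The CDC composition can be sketched end to end; the event shape and the row-to-key mapping below are illustrative assumptions, not Cachee's CDC API.

```python
# Self-contained sketch: plain dicts stand in for the cache. The change-event
# shape and the row->key mapping are assumptions made for illustration.
values = {
    "user:123:profile": "p",
    "user:123:dashboard": "d",
    "user:123:weekly-report": "w",
}
dependents = {
    "user:123:profile": {"user:123:dashboard"},
    "user:123:dashboard": {"user:123:weekly-report"},
}

def cascade_invalidate(key):
    """Evict a key and walk the graph to evict every transitive dependent."""
    stack = [key]
    while stack:
        current = stack.pop()
        values.pop(current, None)
        stack.extend(dependents.pop(current, ()))

def on_cdc_event(event):
    # CDC watches the change stream; a users-row update maps to the base
    # cache key, and the dependency graph handles everything downstream.
    cascade_invalidate(f"user:{event['pk']}:profile")

on_cdc_event({"table": "users", "pk": 123})
assert "user:123:dashboard" not in values       # cascaded
assert "user:123:weekly-report" not in values   # cascaded transitively
```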
🔄
Coherence + Dependency Graph
Coherence propagates invalidation across instances. The dependency graph propagates invalidation across keys. Together, they give you cross-instance AND cross-key invalidation. Service A writes a source key, and every derived key on every instance in the namespace is evicted in sub-millisecond time.
Cross-instance + cross-key propagation
Triggers + Dependency Graph
Cachee's ON_INVALIDATE trigger fires for every key that is evicted. When the dependency graph cascades an invalidation, the trigger fires for every key in the cascade chain. You can use this to log the full invalidation path, emit metrics, or trigger downstream workflows — all from a single source key change.
ON_INVALIDATE fires for every key in cascade
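A minimal sketch of this composition, with an assumed callback-registration style rather than Cachee's actual trigger syntax: the callback fires once per evicted key, so the full invalidation path can be logged.

```python
# Sketch: ON_INVALIDATE composed with the cascade. The callback fires once
# for every key evicted by the walk, capturing the full invalidation path.
evicted_log = []

values = {"profile": "p", "dashboard": "d", "report": "r"}
dependents = {"profile": {"dashboard"}, "dashboard": {"report"}}

def cascade_invalidate(key, on_invalidate):
    stack = [key]
    while stack:
        current = stack.pop()
        if values.pop(current, None) is not None:
            on_invalidate(current)  # fires for EVERY key in the chain
        stack.extend(dependents.pop(current, ()))

cascade_invalidate("profile", evicted_log.append)
assert evicted_log == ["profile", "dashboard", "report"]
```

The logged order mirrors the cascade itself, which is what makes per-key triggers useful for auditing or metrics on invalidation paths.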

Learn more about CDC auto-invalidation, cross-service coherence, and how these primitives compose into a system that eliminates stale data at every level.

Comparison

Why Nobody Else Has This

Dependency-aware invalidation does not exist in any other caching system. Here is why.

Platform                 | Dependency Model                                      | Cachee Dependency Graph
Redis                    | None needed — single source of truth, not an L1 cache | Full DAG with transitive cascade
Hazelcast                | Near-cache has no dependency model whatsoever         | DEPENDS_ON at SET time, automatic enforcement
Caffeine / Guava         | Local-only, no distributed invalidation at all        | Distributed + dependency-aware in one primitive
Memcached                | Key-value only, no relationships between keys         | Explicit key-to-key dependency edges
Every L1 / Sidecar Cache | Has the stale composite problem, none have solved it  | Stale composites eliminated by design

Why Redis Doesn't Need It

Redis is a shared remote cache. Every service reads from the same instance, so there is no stale-copy problem across services — just latency. Redis doesn't cache derived data locally; it serves every request from the centralized store. The dependency graph problem only arises when you have local caches that hold copies of derived data, which is exactly the L1/sidecar pattern that Cachee serves.

Why L1 Caches Haven't Solved It

Building a dependency graph into a distributed L1 cache requires solving three hard problems simultaneously: maintaining DAG consistency across instances, propagating invalidation events transitively through the graph, and doing it all at cache-speed latency. Every existing L1 cache stops at key-level invalidation. None of them model relationships between keys. This is why the L1 cache category hasn't cracked enterprise production at scale.

L1 caching without dependency graphs is a demo.
L1 caching with dependency graphs is production infrastructure.
FAQ

Frequently Asked Questions

What is a causal dependency graph in caching?

A causal dependency graph is a directed acyclic graph (DAG) that tracks relationships between cache keys. When you declare that a derived key depends on one or more source keys, the cache builds a graph of those relationships. When any source key is invalidated, every derived key that depends on it is automatically invalidated as well, transitively through the entire graph. This eliminates stale composite data without any application-level invalidation code.

How does DEPENDS_ON work?

DEPENDS_ON is a modifier on the SET command that declares which source keys a derived key depends on. For example: SET user:123:dashboard <value> DEPENDS_ON user:123:profile user:123:orders. When any of those source keys is invalidated or updated, the dashboard key is automatically evicted. The dependency is stored in the cache's internal DAG and requires zero application code to enforce.

Is invalidation transitive?

Yes. Invalidation propagates transitively through the entire dependency graph. If key A depends on key B, and key B depends on key C, then invalidating C will also invalidate B and A. The graph is walked depth-first, and every downstream dependent is evicted in a single atomic operation. There is no limit to the depth of the dependency chain.

How does it work with CDC?

CDC auto-invalidation and the dependency graph compose naturally. CDC watches your database's change stream and invalidates the base cache key when a row changes. The dependency graph then cascades that invalidation to every derived key. For example, CDC invalidates user:123:profile when the users table row changes, and the dependency graph automatically invalidates user:123:dashboard, user:123:feed, and any other key that declared a dependency.

Does it add latency to reads?

No. The dependency graph is only consulted during invalidation, not during reads. A cache GET for a derived key has the same latency as any other GET. The graph is maintained as a lightweight adjacency structure alongside the cache entries. The only additional cost is during SET (to register dependency edges) and during invalidation (to walk the graph and evict dependents), both of which complete in sub-microsecond time for typical dependency chains.

Stop Hand-Coding Invalidation Chains.
Declare Dependencies. Ship.

One DEPENDS_ON modifier replaces hundreds of lines of invalidation logic. Declare your dependency graph, and never ship stale composites again.

Start Free Trial · Schedule Demo