CDC Auto-Invalidation

Your Cache Is Always Fresh.
Because Your Database Told It To.

Connect Cachee to your database's change stream. When a row changes, the cache key invalidates automatically. No code. No TTLs. No stale data.

Zero Stale Data
Zero Code Required
Zero TTL Guessing
Zero Webhook Pipelines
The Problem

The Problem Every Team Solves Manually

Cache invalidation is famously one of the hardest problems in computer science. And yet every engineering team solves it from scratch, every time, with brittle custom code.

A user updates their profile. The database row changes. Now you need every cache key that references that user to reflect the new data. Immediately. Across every service.

So your team builds an invalidation pipeline. Maybe it starts with a webhook. The application writes to the database, then fires a POST to an invalidation endpoint. But what if the webhook fails? What if the application crashes between the write and the webhook call? You add retry logic, a dead-letter queue, monitoring alerts.

Or you go the pub/sub route. Every write publishes to a Kafka topic or Redis stream. Consumers subscribe, parse the event, figure out which cache keys to invalidate. Now you have a distributed system to maintain. Schema changes break your consumers. Partitioning decisions affect ordering guarantees. Each team wires it up differently.

Some teams skip the complexity and just set short TTLs. Five seconds. Thirty seconds. A minute. The cache works, but you serve stale data for the entire TTL window. For pricing data, account balances, or inventory counts, "eventually consistent within 30 seconds" is not acceptable.

The worst part: every approach requires application-level code changes. Your developers become the invalidation layer. Every new feature that writes to the database needs corresponding invalidation logic. Miss a key? Stale data. Fire an event twice? Thundering herd. Change a table schema? Update every consumer.

This is not an engineering problem that should exist. Your database already knows exactly what changed. The only question is whether anything is listening.

How It Works

How CDC Auto-Invalidation Works

Your database already records every change in its write-ahead log. Cachee reads that log, maps each change to the affected cache keys, and evicts them instantly. No application code involved.

CDC Auto-Invalidation Pipeline

Step 1: DB Row Changes (INSERT / UPDATE / DELETE)
Step 2: WAL Event (captured from the replication slot)
Step 3: Key Mapping (table:id → cache key)
Step 4: L1 Evicted (< 1ms end-to-end)

Invalidation Latency: < 1ms from database commit to cache eviction
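The four steps above can be modeled in a few lines of Python. This is an illustrative sketch only; all names here (`l1_cache`, `mappings`, `handle_wal_event`) are hypothetical and not Cachee's actual API.

```python
# Minimal model of the pipeline: a change event arrives, the mapped
# cache key is constructed from the row, and it is evicted from L1.
# (Illustrative sketch; hypothetical names, not Cachee's internals.)

l1_cache = {"users:12345": {"name": "Old Name"}}

# Step 3: declarative mapping from table name to a key pattern
mappings = {"users": "{table}:{id}"}

def handle_wal_event(table, row):
    """Steps 2-4: a decoded WAL event arrives, the mapped cache key
    is built from the row data, and the key is evicted from L1."""
    pattern = mappings.get(table)
    if pattern is None:
        return None
    key = pattern.format(table=table, **row)
    l1_cache.pop(key, None)  # evict; the next read misses and refetches
    return key

# Step 1: a row in `users` changes (UPDATE id=12345)
evicted = handle_wal_event("users", {"id": 12345, "name": "New Name"})
```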

Reads the WAL, Not the App

Cachee's CDC engine connects to your database as a logical replication subscriber. For PostgreSQL, it creates a replication slot and reads the WAL (Write-Ahead Log) directly. For MySQL, it attaches to the binlog stream. The database itself is the source of truth for what changed.

This means Cachee sees every change, regardless of which application, migration script, or admin console made it. A write cannot happen without the cache knowing. The WAL is the database's own internal record: changes cannot be skipped or lost, and events never arrive out of order.

Declarative Key Mapping

You tell Cachee which tables map to which cache key patterns using a simple declarative syntax. When the CDC engine sees a change to a mapped table, it constructs the affected cache key from the row data and evicts it from L1 immediately.

Key mapping supports wildcards, composite keys, and column-level filtering. If you only want to invalidate when specific columns change (e.g., price but not last_viewed_at), you can specify exactly that. This eliminates unnecessary cache churn from non-material column updates.
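Both behaviors can be sketched in a few lines of Python. This is a hypothetical model of the logic, not Cachee's implementation; the function and variable names are invented for illustration.

```python
import fnmatch

# Sketch of column-level filtering and wildcard cascade eviction.
# (Hypothetical logic; not Cachee's actual implementation.)

def should_invalidate(before, after, watched_columns):
    """Invalidate only when a watched ("material") column changed."""
    return any(before.get(c) != after.get(c) for c in watched_columns)

def cascade_evict(cache, pattern):
    """Evict every key matching a wildcard pattern like 'user:1:*'."""
    doomed = [k for k in cache if fnmatch.fnmatch(k, pattern)]
    for k in doomed:
        del cache[k]
    return doomed

# price changed, so this row change is material; a change to
# last_viewed_at alone would return False and cause no eviction
before = {"price": 999, "last_viewed_at": "2024-01-01"}
after = {"price": 899, "last_viewed_at": "2024-01-02"}
material = should_invalidate(before, after, ["price", "inventory", "name"])

cache = {"user:1:profile": {}, "user:1:feed": {}, "user:2:profile": {}}
evicted = cascade_evict(cache, "user:1:*")
```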

# One command maps a table to a cache key pattern
CDC MAP users "{table}:{id}"

# When a row in `users` with id=12345 changes:
# Cache key "users:12345" is evicted from L1 automatically

# Column-level filtering (only invalidate on material changes)
CDC MAP products "{table}:{id}" --columns price,inventory,name

# Composite key patterns
CDC MAP order_items "order:{order_id}:items"

# Wildcard patterns for related cache keys
CDC MAP users "user:{id}:*" --cascade

Once mapped, every change to the table triggers automatic invalidation. No application code, no webhook endpoints, no pub/sub consumers. The database tells the cache what changed. Learn more about how this integrates with predictive caching to pre-warm the replacement value before the next read.

Supported Databases

Works With Your Database

Each connector is purpose-built for the database's native change stream protocol. No generic CDC middleware, no Debezium dependency, no Kafka cluster required.

🐘
PostgreSQL
WAL (Logical Replication)
Connects via a logical replication slot using the pgoutput plugin. Reads the WAL in real time with transaction-level consistency. Supports PostgreSQL 12+ and all managed variants including RDS, Aurora, Cloud SQL, and Supabase. Negligible performance impact on your primary instance.
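For reference, PostgreSQL imposes the same prerequisites on any logical replication subscriber. These are generic Postgres settings (the slot name `cachee_slot` is a placeholder), not Cachee-specific configuration:

```shell
# postgresql.conf -- enable logical decoding (requires a restart)
wal_level = logical
max_replication_slots = 10   # leave headroom for one subscriber slot

# Create a replication slot backed by the built-in pgoutput plugin
psql -c "SELECT pg_create_logical_replication_slot('cachee_slot', 'pgoutput');"
```

Note that an unconsumed replication slot retains WAL segments on disk, so monitor slot lag if the subscriber disconnects for long periods.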
🐬
MySQL / MariaDB
Binlog (Row-Based Replication)
Attaches as a binlog replica using the MySQL replication protocol. Processes row-level events with full before/after image support. Works with MySQL 5.7+, MariaDB 10.3+, and managed services including RDS MySQL, Aurora MySQL, PlanetScale, and Vitess. Requires binlog_format=ROW.
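The binlog prerequisites are the standard MySQL settings required by any row-based replica; nothing below is Cachee-specific:

```shell
# my.cnf -- settings required by any row-based binlog consumer
[mysqld]
binlog_format = ROW        # row-level events, not SQL statement text
binlog_row_image = FULL    # full before/after images for each row
```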
DynamoDB
DynamoDB Streams
Consumes from DynamoDB Streams with configurable stream view type. Processes NEW_AND_OLD_IMAGES events to determine which fields changed and whether invalidation is needed. Supports global tables, on-demand mode, and DAX integration for a fully serverless CDC pipeline.

MongoDB change streams, CockroachDB changefeeds, and SQL Server CDC are on the roadmap. Contact us if you need a specific database supported.

Comparison

What CDC Auto-Invalidation Replaces

Every alternative requires application-level code. CDC does not. Here is what you can delete from your codebase after enabling CDC auto-invalidation.

Approach | Manual Invalidation | CDC Auto-Invalidation (Cachee)
Setup Effort | Weeks of engineering per service | One CDC MAP command per table
Code Changes | Every write path needs invalidation logic | Zero application code
Stale Data Window | TTL-dependent (seconds to minutes) | < 1ms from commit
Missed Invalidations | Common (webhook failures, race conditions) | Impossible (WAL is authoritative)
Infrastructure | Kafka / SQS / Redis Pub/Sub / webhooks | Direct WAL connection, no middleware
Maintenance Burden | Schema changes break consumers | Self-healing, schema-aware
Coverage | Only changes made through your app | Every change, regardless of source
Ordering Guarantees | Depends on queue configuration | Transaction-order guaranteed by WAL

One Command to Wire It

Instead of building and maintaining an entire invalidation pipeline, you declare the mapping between your database tables and cache key patterns. Cachee handles the rest. When a row in the mapped table changes, the corresponding cache key is evicted from L1 before the next read can reach it.

# Connect Cachee to your PostgreSQL instance
CDC CONNECT "postgres://user:pass@db.example.com:5432/myapp"

# Map tables to cache key patterns
CDC MAP users "{table}:{id}"
CDC MAP products "{table}:{id}" --columns price,name,inventory
CDC MAP orders "order:{id}"

# Verify your mappings
CDC STATUS
Connected: PostgreSQL 16.2 (WAL position: 0/1A3B5C8)
Mappings: 3 tables, 3 key patterns
Events: 1,247 processed, 0 errors, <1ms avg latency

# That's it. No application code. No webhooks. No pub/sub.

Compare this to the alternative: a Kafka cluster, Debezium connectors, consumer applications, dead-letter queues, monitoring dashboards, and invalidation logic scattered across every microservice. CDC auto-invalidation replaces all of it with three lines in your Cachee config. See our full comparison page for more details.

Use Cases

Where CDC Invalidation Delivers the Biggest Impact

CDC auto-invalidation is most valuable where stale data has real business consequences. These are the use cases where the difference between "eventually consistent" and "immediately consistent" matters.

01
E-Commerce Pricing
A product price changes. With TTL-based caching, customers see the old price for up to 30 seconds. Orders placed at the wrong price cost you margin or require cancellations. CDC invalidation ensures the cached price updates the instant the database row changes, before the next customer loads the product page.
02
Financial Account Balances
A deposit clears. A transfer completes. Account balance data must reflect the current state immediately. CDC auto-invalidation evicts the cached balance the moment the transaction commits, eliminating the window where users see incorrect balances. No race conditions, no double-spend risk from stale cache reads.
03
Inventory Management
Stock levels change with every purchase. When cached inventory counts are stale, customers add out-of-stock items to their cart and receive errors at checkout. CDC auto-invalidation keeps cached counts synchronized with the database in sub-millisecond time, preventing oversells and improving the checkout experience.
04
User Profile Updates
A user changes their display name, avatar, or permissions. With manual invalidation, you need to remember to invalidate everywhere that profile data is cached: the profile page, the activity feed, comment threads, admin dashboards. CDC maps the user table to all related cache key patterns and evicts them all in a single transaction-triggered event.
Integration

Works With Your Existing Stack

CDC auto-invalidation layers on top of Cachee's full caching platform. Pair it with predictive pre-warming so evicted keys are repopulated before the next request arrives.

🔄
CDC + Predictive Caching
When CDC evicts a key, predictive caching immediately pre-warms the replacement value from the origin. The next read hits L1 instead of falling through to the database. Invalidation and repopulation happen in the same sub-millisecond window.
Zero cold-start penalty after invalidation
🛡
No Infrastructure to Manage
Cachee connects directly to your database's change stream. No Kafka cluster, no Debezium, no Zookeeper, no consumer applications. The CDC engine runs inside the Cachee process. One fewer distributed system in your architecture.
Zero additional infrastructure
📊
Full Observability
Every CDC event is logged with the table, row ID, affected cache keys, and invalidation latency. The enterprise dashboard shows real-time CDC throughput, mapping coverage, and a live stream of invalidation events for debugging.
Real-time invalidation audit trail
Advanced

CDC + Cache Warming: Always Hot, Never Stale

Invalidation alone leaves a gap. The key is evicted, but the next read still hits the origin. Cachee closes the gap by coupling CDC invalidation with immediate re-population.

Invalidate + Re-Warm Pipeline

Trigger: WAL Event
Action 1: Evict Key
Action 2: Fetch New
Result: L1 Hot

Net Effect: Zero Cache Misses After Data Changes
Evict + re-warm completes before the next read arrives
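The evict-then-rewarm coupling can be modeled in a few lines. This is an illustrative sketch with hypothetical names (`on_wal_event`, `fetch_from_origin`), not Cachee's implementation:

```python
# Sketch of invalidate + re-warm: the same CDC trigger that evicts
# the stale entry immediately re-populates it from the origin.
# (Illustrative; hypothetical names.)

l1_cache = {"products:42": {"price": 999}}  # stale cached value

def fetch_from_origin(key):
    """Stand-in for a read-through to the database/origin."""
    return {"price": 899}  # the freshly committed value

def on_wal_event(key):
    """CDC trigger: evict, then re-warm, so the next read hits L1."""
    l1_cache.pop(key, None)                  # Action 1: evict key
    l1_cache[key] = fetch_from_origin(key)   # Action 2: fetch new
    return l1_cache[key]                     # Result: L1 hot

fresh = on_wal_event("products:42")
```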

Explore cache warming strategies for more on how Cachee pre-populates evicted keys from multiple origin sources.

Stop Building Invalidation Pipelines.
Start Listening to Your Database.

One CDC MAP command replaces weeks of invalidation engineering. Connect your database, declare your key mappings, and never serve stale data again.

Start Free Trial
View Benchmarks