Every caching team has typed KEYS user:123:* into a console, stared at the result, and wondered: did I get everything? Pattern invalidation is string matching applied to a semantic problem. It cannot understand that pricing:tier:enterprise is related to user:123:billing:next because the relationship is conceptual, not lexicographic. We built a primitive that closes this gap: semantic invalidation, powered by the same vector index that drives Cachee's VSEARCH command.
Pattern Matching Is a Sledgehammer
The standard approach to invalidating multiple cache keys is pattern matching. KEYS user:123:* finds every key that starts with user:123:. It is simple, it is familiar, and it is wrong in ways that do not show up until production.
The first failure mode is under-invalidation. A pricing change affects user:123:price, user:123:discount, and user:123:billing:next — all of which match the pattern. But it also affects pricing:tier:enterprise, which does not share the user:123: prefix. The pattern misses it. That key stays in the cache, serving stale pricing data, until its TTL expires or someone remembers to add it to the invalidation list.
The second failure mode is over-invalidation. The pattern user:123:* also matches user:123:avatar, user:123:preferences, and user:123:theme — none of which have anything to do with pricing. They get flushed anyway. The cache refills them on the next request, burning compute and adding latency for no reason. The hit rate drops. The database sees a spike. Nobody knows why.
The third failure mode is the worst: maintenance burden. Teams that recognize pattern matching is imprecise build invalidation maps. “When pricing changes, invalidate these 47 keys.” The map lives in application code, spread across services. Every schema change, every renamed key, every new feature breaks it. One missed entry means stale data in production. One extra entry means unnecessary cache misses. The map is never correct. It is only ever less wrong.
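Here is the antipattern in miniature, as a hypothetical sketch (the event name, key templates, and delete client are invented for illustration):

```python
# Hand-maintained map from change events to cache keys. It lives in
# application code, drifts from the real key space, and is never complete.
INVALIDATION_MAP = {
    "pricing_changed": [
        "user:{uid}:price",
        "user:{uid}:discount",
        "user:{uid}:billing:next",
        "pricing:tier:enterprise",  # easy to forget: shares no prefix
        # ...dozens more, spread across services, maintained by hand
    ],
}

def invalidate(cache, event: str, uid: int) -> None:
    # Misses any key added since the map was last edited; flushes any
    # key that no longer relates to the event.
    for template in INVALIDATION_MAP.get(event, []):
        cache.delete(template.format(uid=uid))
```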
Semantic Invalidation: Embed the Intent, Search for Matches
Semantic invalidation works by giving every cache key a lightweight vector embedding. The embedding is computed at SET time (or lazily on first access) and stored in Cachee's built-in HNSW vector index — the same index that powers the VSEARCH command for similarity search.
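Cachee's internals are not shown here, but the mechanism is easy to sketch with off-the-shelf parts: sentence-transformers for the embeddings and hnswlib for the HNSW index. A minimal, illustrative version of embed-at-SET-time might look like this (the model choice, index parameters, and key-to-text normalization are assumptions, not Cachee's actual configuration):

```python
import hnswlib
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings; an assumption
index = hnswlib.Index(space="cosine", dim=384)   # HNSW index over key embeddings
index.init_index(max_elements=1_000_000)
index.set_ef(100)                                # search breadth; must be >= k at query time

keys: list[str] = []  # maps internal integer ids back to key names

def set_with_embedding(store: dict, key: str, value: bytes) -> None:
    """SET plus embed: store the value, index the key's embedding."""
    store[key] = value
    vec = model.encode([key.replace(":", " ")])  # "user:123:price" -> "user 123 price"
    index.add_items(vec, [len(keys)])
    keys.append(key)
```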
When you need to invalidate, instead of constructing a pattern, you describe the intent:
```
INVALIDATE WHERE intent="user 123 pricing data" CONFIDENCE 0.9
```
The cache embeds the intent query, runs a vector similarity search against the key index, and invalidates every key whose embedding has greater than 90% similarity to the intent. The result is precise and auditable:
```
# Matched and invalidated (>90% similarity):
#   user:123:price           (0.97)
#   user:123:discount        (0.94)
#   user:123:billing:next    (0.92)
#   pricing:tier:enterprise  (0.91)
# NOT invalidated (<90% similarity):
#   user:123:avatar          (0.31)
#   user:123:preferences     (0.44)
```
The pricing:tier:enterprise key is found even though it shares no prefix with user:123:*. The user:123:avatar key is left alone even though it shares the exact prefix. The cache understands the meaning behind the keys, not just the characters in their names.
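Continuing the sketch above, INVALIDATE WHERE reduces to a k-nearest-neighbor query plus a cutoff. hnswlib returns cosine distances, so similarity is one minus distance; the k of 50 is an arbitrary illustration:

```python
def invalidate_where(store: dict, intent: str, confidence: float) -> list[tuple[str, float]]:
    """Approximate INVALIDATE WHERE intent=... CONFIDENCE c as a vector search."""
    query = model.encode([intent])
    labels, distances = index.knn_query(query, k=min(50, len(keys)))
    matched = []
    for label, dist in zip(labels[0], distances[0]):
        similarity = 1.0 - dist          # cosine distance -> similarity
        if similarity >= confidence:
            key = keys[label]
            store.pop(key, None)         # the actual invalidation
            matched.append((key, round(float(similarity), 2)))
    return matched                       # auditable: every key plus its score

# invalidate_where(store, "user 123 pricing data", confidence=0.9)
# -> [("user:123:price", 0.97), ("user:123:discount", 0.94), ...]
```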
The Confidence API: Operator-Controlled Precision
The CONFIDENCE parameter is the operator's precision dial. It controls the similarity threshold for inclusion in the invalidation set, and it gives you explicit control over the precision-recall tradeoff.
- CONFIDENCE 0.95 — Surgical. Only keys with very strong semantic similarity are invalidated. Use this in production for critical paths where over-invalidation has measurable cost.
- CONFIDENCE 0.9 — Standard production threshold. Catches all strongly related keys while filtering out tangential matches.
- CONFIDENCE 0.7 — Broad. Useful for development, debugging, or when you want to invalidate an entire category like “anything related to billing” without knowing every key name.
Every INVALIDATE WHERE command returns the full list of matched keys with their similarity scores. You can see exactly what was invalidated, inspect the scores, and adjust the threshold. There is no black-box AI making decisions behind the scenes. You set the threshold. You see the results. You tune it.
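In practice the tuning loop is: run the command, read the audit trail, adjust. A sketch with a hypothetical Python client (the connect/execute API and return shape are assumptions; the command syntax is the one shown above):

```python
import cachee  # hypothetical client library, for illustration only

cache = cachee.connect("localhost:6380")  # port is an assumption
matched = cache.execute(
    'INVALIDATE WHERE intent="user 123 pricing data" CONFIDENCE 0.9'
)

# Every invocation returns the full audit trail: key plus similarity score.
for key, score in matched:
    print(f"invalidated {key} ({score:.2f})")
# Too broad? Raise CONFIDENCE toward 0.95. Missing keys? Lower it and inspect.
```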
How It Compares to Every Other Method
Semantic invalidation does not replace other invalidation strategies. It fills a specific gap in the toolkit — the space between “I know the exact key” and “I need to invalidate a concept.”
| Method | Mechanism | Best For | Blind Spot |
|---|---|---|---|
| Exact Key (DEL) | Direct key name | Known single key | Only one key at a time |
| Pattern (KEYS/SCAN) | Glob/regex on key names | Keys sharing a prefix | Misses differently-named related keys; O(N) |
| CDC (change data capture) | Database change stream | Table-to-key mapping | No key-to-key relationships |
| Dependency Graph | DAG cascade on DEPENDS_ON | Explicit, declared dependencies | Must declare deps at write time |
| Semantic | Vector similarity on intent | Conceptual invalidation | Requires vector index; confidence tuning |
Each method has a use case. Exact key deletion when you know the name. Pattern matching when keys share a predictable prefix. CDC when the cache maps directly to database rows. Dependency graph when you have explicit, known relationships between keys. Semantic when you know what changed conceptually but do not know — or cannot enumerate — every affected key.
Composition: Semantic + Dependency Graph + Triggers
The real power of semantic invalidation emerges when it composes with Cachee's other primitives.
Semantic + Dependency Graph: A semantic invalidation can match a source key in the dependency graph. When INVALIDATE WHERE intent="billing changes" CONFIDENCE 0.9 finds and invalidates billing:plan:enterprise, the dependency graph cascades that invalidation to every derived key that declared a dependency on it — dashboards, reports, composite API responses. The semantic match triggers the DAG cascade. One intent query propagates through the entire dependency chain.
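A hypothetical session showing the composition, using the client sketched earlier. DEPENDS_ON comes from the comparison table above; the exact SET syntax and the output in comments are illustrative assumptions:

```python
# Two derived keys declare a dependency on the billing plan at write time.
cache.execute('SET dashboard:finance:q3 <payload> DEPENDS_ON billing:plan:enterprise')
cache.execute('SET report:revenue:monthly <payload> DEPENDS_ON billing:plan:enterprise')

# One intent query: the semantic match finds billing:plan:enterprise,
# then the DAG cascades the invalidation to both derived keys.
cache.execute('INVALIDATE WHERE intent="billing changes" CONFIDENCE 0.9')
# invalidated: billing:plan:enterprise   (semantic match)
# cascaded:    dashboard:finance:q3, report:revenue:monthly   (DEPENDS_ON)
```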
Semantic + Triggers: Cachee's ON_INVALIDATE trigger fires for every key that is semantically matched and invalidated. You can log the full list of matched keys with their similarity scores, emit metrics to your observability stack, or fire webhooks for downstream systems. The trigger system gives you complete visibility into what semantic invalidation does, on every invocation.
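What an ON_INVALIDATE consumer might look like, again with the hypothetical client; the callback registration API is an assumption:

```python
import logging

log = logging.getLogger("cache.invalidation")

def audit_invalidation(key: str, score: float, intent: str) -> None:
    # One log line per semantically matched key: what was invalidated and why.
    log.info("invalidated %s (similarity=%.2f, intent=%r)", key, score, intent)
    # From here, emit metrics or fire a webhook for downstream systems.

cache.on_invalidate(audit_invalidation)  # hypothetical registration API
```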
Semantic + CDC: CDC auto-invalidation handles the database-to-cache mapping. Semantic invalidation handles the cases CDC cannot reach — keys that are related by meaning but not by direct table mapping. Together, they provide total invalidation coverage without maintaining a single line of invalidation logic in application code.
Related Reading
- Semantic Invalidation Product Page
- Causal Dependency Graph
- CDC Auto-Invalidation
- Cache Coherence
- Cache Triggers
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against a pre-warmed in-memory cache. This is the floor — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over a same-AZ network when a lookup falls through L0 and L1.
The compounding effect matters more than any single number. A 28-nanosecond L0 hit means your application spends almost zero time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
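You will not reproduce a 28.9 ns hot path from Python, and interpreter overhead dominates at this scale, but a rough harness makes the comparison against your own remote cache concrete:

```python
import time

def measure_ns(fn, iterations: int = 1_000_000) -> float:
    """Average wall-clock nanoseconds per call over a tight loop."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        fn()
    return (time.perf_counter_ns() - start) / iterations

store = {"user:123:price": b"9900"}
print(f"in-process GET: {measure_ns(lambda: store['user:123:price']):.0f} ns")
# Compare against your remote cache's round trip (use far fewer iterations), e.g.:
#   measure_ns(lambda: redis_client.get("user:123:price"), 10_000)
```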
When Caching Actually Helps
Caching isn't free. It introduces a consistency problem you didn't have before. Before adding any cache layer, the question to answer is whether your workload actually benefits from caching at all.
Caching helps when three conditions hold simultaneously. First, your reads dramatically outnumber your writes — typically a 10:1 ratio or higher. Second, the same keys get read repeatedly within a window where a cached value remains valid. Third, the cost of computing or fetching the underlying value is meaningfully higher than the cost of a cache lookup. Database queries that hit secondary indexes, RPC calls to slow upstream services, expensive computed aggregations, and rendered template fragments all qualify.
Caching hurts when those conditions don't hold. Write-heavy workloads suffer because every write invalidates a cache entry, multiplying your work. Workloads with poor key locality suffer because the cache wastes memory storing entries that never get reused. Workloads where the underlying fetch is already fast — well-indexed primary key lookups against a properly tuned database, for example — gain almost nothing from caching and inherit the consistency complexity for no reason.
The honest first step before any cache deployment is measuring your actual read/write ratio, key access distribution, and underlying fetch latency. If your read/write ratio is below 5:1 or your underlying database is already returning results in single-digit milliseconds, the engineering time is better spent elsewhere.
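Those thresholds reduce to a few lines of arithmetic. A back-of-the-envelope check encoding the heuristics from this section; the 0.05 ms lookup cost is an assumed placeholder for your measured value:

```python
def caching_worth_it(reads: int, writes: int, fetch_ms: float,
                     lookup_ms: float = 0.05) -> bool:
    """Rough go/no-go: read-heavy, slow fetches, cheap lookups."""
    if reads / max(writes, 1) < 5:
        return False   # below 5:1 read/write: invalidation work dominates
    if fetch_ms < 10:
        return False   # single-digit-ms fetches: little latency to win back
    return fetch_ms > 10 * lookup_ms   # fetch must dwarf the lookup itself

# 100k reads vs 4k writes against a 35 ms aggregate query: worth caching.
print(caching_worth_it(reads=100_000, writes=4_000, fetch_ms=35.0))  # True
```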
Memory Efficiency Is the Hidden Cost Lever
Throughput numbers get the headlines but memory efficiency determines your monthly bill. A cache that stores the same hot data in less RAM lets you run a smaller instance class — and on AWS that's the difference between profitable and breakeven for a lot of services.
Redis stores each key as a Simple Dynamic String with 16 bytes of header overhead, plus dictEntry pointers in the main hashtable, plus embedded TTL metadata. For 1KB values, per-entry overhead lands around 110-120 bytes, and more once you account for hashtable load factor and allocator fragmentation. At a million keys, that's roughly 1.2 GB of resident memory for 1 GB of values.
Cachee's L1 layer uses sharded DashMap entries with compact packing — a 64-bit key hash, value bytes, an 8-byte expiry timestamp, and a small frequency counter for the CacheeLFU admission filter. Per-entry overhead lands at roughly 40 bytes of structural data on top of the value itself. For the same million-key workload, that's about 13% less resident memory. On AWS ElastiCache pricing, that gap is the difference between needing a cache.r7g.large versus a cache.r7g.xlarge for borderline workloads.
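The arithmetic behind those figures, with per-entry constants taken from the estimates above (treat them as this article's estimates, not measurements of your workload):

```python
KEYS = 1_000_000
VALUE_BYTES = 1024

# Effective per-entry overhead: ~110-120 B of headers and pointers for Redis,
# inflated here to ~200 B for hashtable load factor and fragmentation.
REDIS_OVERHEAD = 200
CACHEE_OVERHEAD = 40   # key hash + expiry timestamp + frequency counter

redis_gb = KEYS * (VALUE_BYTES + REDIS_OVERHEAD) / 1e9
cachee_gb = KEYS * (VALUE_BYTES + CACHEE_OVERHEAD) / 1e9
print(f"Redis:  {redis_gb:.2f} GB")                        # ~1.22 GB
print(f"Cachee: {cachee_gb:.2f} GB")                       # ~1.06 GB
print(f"Saving: {100 * (1 - cachee_gb / redis_gb):.0f}%")  # ~13%
```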
What This Actually Costs
Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on an AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
Added up over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. The bigger savings usually come from eliminating cross-AZ data transfer charges, which Redis-as-a-service architectures incur on every read that crosses an availability zone.
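The worked example in numbers; the dollar figures are the ones quoted above and will drift with AWS pricing:

```python
NODES_BEFORE = 480        # $/month: cache.r7g.xlarge primary + read replica
SPEND_AFTER = (120, 180)  # $/month: reduced L2 fallback tier
TRANSFER = (50, 150)      # $/month: cross-AZ charges, eliminated in the best case

saving = (NODES_BEFORE - SPEND_AFTER[1], NODES_BEFORE - SPEND_AFTER[0])
print(f"node saving: ${saving[0]}-{saving[1]}/mo, "
      f"${saving[0] * 12:,}-{saving[1] * 12:,}/yr")   # $300-360/mo, $3,600-4,320/yr
print(f"plus up to ${TRANSFER[1]}/mo if cross-AZ reads disappear")
```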
Stop Guessing Which Keys to Invalidate.
Semantic invalidation. Dependency graphs. CDC auto-invalidation. One platform, zero invalidation maps.