What happens when a user session expires? In every cache you have used before, the answer is: nothing. The key disappears. The TTL fires. Silence. No webhook, no log entry, no cleanup logic. The key is simply gone, and your application has no idea it happened until the next request discovers the absence. Cache Triggers change this fundamentally — turning your cache from a passive data store into an active event bus.
The Silent Disappearance Problem
Caches are built around a simple contract: you put data in, you get data out, and at some point the data goes away. The “goes away” part has always been treated as a non-event. Keys expire, keys get evicted under memory pressure, keys get explicitly deleted — and in every case, the departure is silent.
This creates real operational gaps. A session key expires and no logout event fires. An inventory count gets evicted and the pre-computed product page is never updated. A rate-limit counter is deleted and no compliance log records that the limit was reached. A cached ML feature set is overwritten and no downstream model retraining is triggered.
Teams work around this by building external monitoring, polling loops, and background jobs that check for the absence of keys. This is exactly as fragile as it sounds. You are trying to observe a non-event — the disappearance of something — from outside the system where it happened.
Triggers: Inline, Guaranteed, Programmable
Cachee lets you register Lua functions that fire on cache lifecycle events. Registration is one command that binds a key pattern, an event type, and a Lua handler:
TRIGGER REGISTER session:* ON_EXPIRE <<LUA
  -- handler body runs here; the event table (key, value, metadata, timestamp) is in scope
LUA
Five event types cover the complete lifecycle of a key:
- ON_WRITE — fires when a key is created or updated. Use it for audit trails, real-time analytics, or downstream cache warming.
- ON_EVICT — fires when a key is evicted under memory pressure. Use it to log which keys your eviction policy is dropping, or to persist critical values before they vanish.
- ON_EXPIRE — fires when a key’s TTL runs out. The most common use case: session cleanup, token revocation, scheduled task firing.
- ON_DELETE — fires when a key is explicitly removed. Useful for audit logging, cascading side effects, or notification systems.
- ON_READ — fires when a key is accessed. Use it for access pattern analytics, read-through warming of related keys, or compliance logging on sensitive data.
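As an example of the last of these, an ON_READ access-logging trigger might look like the sketch below. It reuses the cachee.log helper shown in the compliance example later in this post; the key pattern and the "access" stream name are illustrative, not documented defaults.
TRIGGER REGISTER pii:* ON_READ <<LUA
  -- sketch: record every read of a sensitive key for later audit
  -- "access" is an illustrative log stream name
  cachee.log("access", {
    key = event.key,
    timestamp = event.timestamp
  })
LUA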
Triggers execute in a sandboxed Lua 5.4 runtime embedded directly in the Cachee engine. There is no network hop. There is no message queue. The trigger fires inline with the cache operation, in the same process, with sub-microsecond overhead. The Lua sandbox prevents filesystem access, unbounded loops, and memory abuse — you get programmability without risk to the cache engine itself.
Real-World Use Cases
Session Expiry Webhooks
When a session key expires, fire a webhook to your auth service to clean up server-side state, revoke refresh tokens, and update the user’s “last active” timestamp. Without triggers, session cleanup relies on the user’s next request discovering the session is gone — which might never come.
TRIGGER REGISTER session:* ON_EXPIRE <<LUA
  -- illustrative handler: the webhook helper name and URL are placeholders, not documented API
  cachee.webhook("https://auth.internal/v1/sessions/expired", {
    session_key = event.key,
    expired_at = event.timestamp
  })
LUA
Inventory-Driven Page Warming
When an inventory count is updated in the cache, trigger a pre-warm of every product page and search result that references that SKU. The page cache is rebuilt before any user requests it, eliminating the cold-start penalty that normally follows inventory changes.
TRIGGER REGISTER inventory:* ON_WRITE <<LUA
  -- illustrative handler: the dependents/warm helper names are placeholders, not documented API
  -- re-warm every cached page that references this SKU
  for _, page_key in ipairs(cachee.dependents(event.key)) do
    cachee.warm(page_key)
  end
LUA
Rate Limit Compliance Logging
When a rate-limit key is written and the value exceeds the threshold, log the event to your compliance system. Every rate limit breach is captured at the moment it happens, not when a background job eventually notices.
TRIGGER REGISTER ratelimit:* ON_WRITE <<LUA
  if tonumber(event.value) > tonumber(event.metadata.limit) then
    cachee.log("compliance", {
      key = event.key,
      value = event.value,
      limit = event.metadata.limit,
      timestamp = event.timestamp
    })
  end
LUA
Eviction Monitoring
When your eviction policy drops a key, log which key it was and what its recomputation cost was. Over time, this gives you a precise picture of whether your cache is sized correctly and whether your cost weights need adjustment.
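A sketch of what that trigger could look like, registered with a catch-all pattern. The "eviction" log stream and the metadata.cost field are assumptions for illustration, not documented fields.
TRIGGER REGISTER * ON_EVICT <<LUA
  -- sketch: record evicted keys and their recomputation cost for capacity analysis
  cachee.log("eviction", {
    key = event.key,
    cost = event.metadata.cost,
    timestamp = event.timestamp
  })
LUA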
Why Redis Keyspace Notifications Are Not Enough
Redis offers keyspace notifications — a pub/sub mechanism that fires when keys change. On paper, it sounds similar. In practice, the differences are fundamental:
- Fire-and-forget: Redis keyspace notifications use pub/sub. If no subscriber is listening when the event fires, the event is lost. There is no replay, no queue, no guarantee. Triggers are inline and guaranteed — they execute as part of the cache operation itself.
- No programmability: Redis notifications tell you that something happened. They do not let you run logic in response at the cache layer. You receive an event, then you must make a separate network call to do something about it. Triggers execute Lua logic at the point of the event, with zero additional round trips.
- Network overhead: Receiving a keyspace notification requires a subscriber connection, a pub/sub channel, and a message decode — all over the network. Triggers run in-process.
- Drop under load: Redis pub/sub is best-effort. Under heavy write load, notifications are dropped. For compliance-critical events like rate limit breaches or session expirations, “best effort” is not acceptable. Triggers execute synchronously and cannot be silently dropped.
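For reference, this is roughly what the Redis side involves: a server config flag to enable expiry events, plus a separate subscriber connection that must be running at the exact moment the event fires.
# enable keyevent notifications for expirations (E = keyevent class, x = expired events)
redis-cli CONFIG SET notify-keyspace-events Ex
# a second, always-on connection has to be listening, or the event is simply lost
redis-cli PSUBSCRIBE "__keyevent@0__:expired"
Whatever the subscriber does with the key name then requires at least one more round trip back to Redis or to another service.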
Composition With Other Cachee Primitives
Triggers compose with every other Cachee feature. When CDC auto-invalidation deletes a key, the ON_DELETE trigger fires. When a dependency graph cascade invalidates a derived key, ON_DELETE triggers fire for every key in the cascade. When coherence propagates an invalidation to another instance, the trigger fires on the receiving instance too.
This means a single database row change can trigger a CDC invalidation, which cascades through the dependency graph, which propagates via coherence, and at each step, triggers execute custom logic — webhooks, logging, pre-warming, analytics. The entire chain is automatic, sub-millisecond, and requires zero orchestration code in your application.
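One pattern this enables: a catch-all ON_DELETE audit trigger, so that every invalidation in a cascade leaves a record whether it originated from CDC, the dependency graph, or a coherence peer. A minimal sketch, with an illustrative stream name:
TRIGGER REGISTER * ON_DELETE <<LUA
  -- sketch: every deletion, however it was initiated, lands in an audit log
  cachee.log("invalidation", {
    key = event.key,
    timestamp = event.timestamp
  })
LUA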
Your cache is not just a data store anymore. It is an event bus with guaranteed delivery, sub-microsecond execution, and programmable responses to every lifecycle event.
Related Reading
The Numbers That Matter
Cache performance discussions get philosophical fast. Here are the actual measured numbers from production deployments running on documented hardware, so you can compare against your own infrastructure instead of trusting marketing copy.
- L0 hot path GET: 28.9 nanoseconds on Apple M4 Max, single-threaded against pre-warmed in-memory cache. This is the floor — there's no faster way to read a key.
- L1 CacheeLFU GET: ~89 nanoseconds on AWS Graviton4 (c8g.metal-48xl). Sharded DashMap with admission filtering.
- Sustained throughput: 32 million ops/sec single-threaded on M4 Max, 7.41 million ops/sec at 16 workers on Graviton4 c8g.16xlarge.
- L2 fallback: Sub-millisecond hits against ElastiCache Redis 7.4 over same-AZ network when L1 misses cascade through.
The compounding effect matters more than any single number. A sub-30-nanosecond L0 hit means your application spends almost no time on cache lookups in the hot path, leaving the CPU free for the actual business logic that generates revenue.
When Caching Actually Helps
Caching isn't free. It introduces a consistency problem you didn't have before. Before adding any cache layer, the question to answer is whether your workload actually benefits from caching at all.
Caching helps when three conditions hold simultaneously. First, your reads dramatically outnumber your writes — typically a 10:1 ratio or higher. Second, the same keys get read repeatedly within a window where a cached value remains valid. Third, the cost of computing or fetching the underlying value is meaningfully higher than the cost of a cache lookup. Database queries that hit secondary indexes, RPC calls to slow upstream services, expensive computed aggregations, and rendered template fragments all qualify.
Caching hurts when those conditions don't hold. Write-heavy workloads suffer because every write invalidates a cache entry, multiplying your work. Workloads with poor key locality suffer because the cache wastes memory storing entries that never get reused. Workloads where the underlying fetch is already fast — well-indexed primary key lookups against a properly tuned database, for example — gain almost nothing from caching and inherit the consistency complexity for no reason.
The honest first step before any cache deployment is measuring your actual read/write ratio, key access distribution, and underlying fetch latency. If your read/write ratio is below 5:1 or your underlying database is already returning results in single-digit milliseconds, the engineering time is better spent elsewhere.
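A back-of-envelope model makes the threshold concrete (illustrative numbers, not taken from the benchmarks above). With hit rate h, cache lookup time t_cache, and underlying fetch time t_fetch:
expected read latency ≈ h * t_cache + (1 - h) * (t_cache + t_fetch)
At h = 0.9, t_cache = 100 ns, and t_fetch = 5 ms, that works out to roughly 0.5 ms per read, a 10x improvement over always fetching. At h = 0.3 against a 2 ms fetch, it is about 1.4 ms, and the cache is mostly adding invalidation work for a 1.4x gain.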
Memory Efficiency Is The Hidden Cost Lever
Throughput numbers get the headlines but memory efficiency determines your monthly bill. A cache that stores the same hot data in less RAM lets you run a smaller instance class — and on AWS that's the difference between profitable and breakeven for a lot of services.
Redis stores each key as a Simple Dynamic String with roughly 16 bytes of header and object overhead, plus dictEntry pointers in the main hashtable, plus embedded TTL metadata. For 1 KB values, the total per-entry footprint lands around 1,100-1,200 bytes once you account for hashtable load factor and allocator fragmentation. At a million keys, that's roughly 1.2 GB of resident memory just for the data.
Cachee's L1 layer uses sharded DashMap entries with compact packing — a 64-bit key hash, value bytes, an 8-byte expiry timestamp, and a small frequency counter for the CacheeLFU admission filter. Per-entry overhead lands at roughly 40 bytes of structural data on top of the value itself. For the same million-key workload, that's about 13% smaller resident memory. On AWS ElastiCache pricing, that gap is the difference between needing a cache.r7g.large versus a cache.r7g.xlarge for borderline workloads.
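The back-of-envelope arithmetic behind that comparison, using the figures above:
Redis:  1,000,000 keys x ~1,200 bytes/entry ≈ 1.2 GB resident
Cachee: 1,000,000 keys x (1,024 + ~40) bytes/entry ≈ 1.06 GB resident
That gap of roughly 140 MB per million keys, before counting allocator overhead on either side, is what lets borderline workloads drop an instance size.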
What This Actually Costs
Concrete pricing math beats hypotheticals. A typical SaaS workload with 1 billion cache operations per month, average 800-byte values, and a 5 GB hot working set currently runs on an AWS ElastiCache cache.r7g.xlarge primary plus a read replica — roughly $480 per month for the two nodes, plus cross-AZ data transfer charges that quietly add another $50-150 per month depending on access patterns.
Migrating the hot path to an in-process L0/L1 cache and keeping ElastiCache as a cold L2 fallback drops the dedicated cache spend to $120-180 per month. For workloads where the hot working set fits inside the application's existing memory budget, you can eliminate the dedicated cache tier entirely. The cache becomes a library you link into your binary instead of a separate service to operate.
Compounded over twelve months, that's $3,600 to $4,500 per year on a single small workload. Multiply across a fleet of services and the savings start showing up in finance team conversations. For read-heavy fleets, the larger line item is often the cross-AZ data transfer charge that Redis-as-a-service architectures incur on every read crossing an availability zone; moving the hot path in-process eliminates it entirely.
Make Your Cache React. Not Just Store.
Cache Triggers. Sub-microsecond Lua execution. Guaranteed delivery on every lifecycle event. Zero polling infrastructure.