Engineering

Cache Triggers: Event-Driven Compute at the Cache Layer

What happens when a user session expires? In every cache you have used before, the answer is: nothing. The key disappears. The TTL fires. Silence. No webhook, no log entry, no cleanup logic. The key is simply gone, and your application has no idea it happened until the next request discovers the absence. Cache Triggers change this fundamentally — turning your cache from a passive data store into an active event bus.

The Silent Disappearance Problem

Caches are built around a simple contract: you put data in, you get data out, and at some point the data goes away. The “goes away” part has always been treated as a non-event. Keys expire, keys get evicted under memory pressure, keys get explicitly deleted — and in every case, the departure is silent.

This creates real operational gaps. A session key expires and no logout event fires. An inventory count gets evicted and the pre-computed product page is never updated. A rate-limit counter is deleted and no compliance log records that the limit was reached. A cached ML feature set is overwritten and no downstream model retraining is triggered.

Teams work around this by building external monitoring, polling loops, and background jobs that check for the absence of keys. This is exactly as fragile as it sounds. You are trying to observe a non-event — the disappearance of something — from outside the system where it happened.

Triggers: Inline, Guaranteed, Programmable

Cachee lets you register Lua functions that fire on cache lifecycle events. Five event types cover the complete lifecycle of a key, among them:

  • ON_WRITE — a key is created or overwritten
  • ON_EXPIRE — a key's TTL elapses
  • ON_DELETE — a key is explicitly deleted or invalidated
  • ON_EVICT — the eviction policy drops a key under memory pressure

A trigger is registered against a key pattern and an event type, with the handler supplied as an inline Lua block:

TRIGGER REGISTER session:* ON_EXPIRE <<LUA
  -- handler body in sandboxed Lua (full examples below)
LUA

Triggers execute in a sandboxed Lua 5.4 runtime embedded directly in the Cachee engine. There is no network hop. There is no message queue. The trigger fires inline with the cache operation, in the same process, with sub-microsecond overhead. The Lua sandbox prevents filesystem access, unbounded loops, and memory abuse — you get programmability without risk to the cache engine itself.

Sub-microsecond execution: Triggers add less than 1 microsecond of latency to the cache operation. The Lua VM is pre-initialized and the function bytecode is compiled once at registration. Each invocation is a function call, not a cold start.
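As a concrete sketch, here is a minimal trigger that records every explicit delete. The event fields (event.key, event.timestamp) follow the rate-limit example later in this post; the "audit" log name is illustrative:

```
TRIGGER REGISTER orders:* ON_DELETE <<LUA
  -- Record the deleted key for auditing.
  -- The "audit" log name is an assumption for this sketch.
  cachee.log("audit", {
    key = event.key,
    timestamp = event.timestamp
  })
LUA
```

Because the handler bytecode is compiled once at registration, each invocation of a trigger like this is just a Lua function call inside the engine process.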

Real-World Use Cases

Session Expiry Webhooks

When a session key expires, fire a webhook to your auth service to clean up server-side state, revoke refresh tokens, and update the user’s “last active” timestamp. Without triggers, session cleanup relies on the user’s next request discovering the session is gone — which might never come.

TRIGGER REGISTER session:* ON_EXPIRE <<LUA
  -- Notify the auth service so it can revoke refresh tokens
  -- and update "last active". The cachee.webhook helper and
  -- the URL shown here are illustrative.
  cachee.webhook("https://auth.internal/hooks/session-expired", {
    key = event.key,
    timestamp = event.timestamp
  })
LUA

Inventory-Driven Page Warming

When an inventory count is updated in the cache, trigger a pre-warm of every product page and search result that references that SKU. The page cache is rebuilt before any user requests it, eliminating the cold-start penalty that normally follows inventory changes.

TRIGGER REGISTER inventory:* ON_WRITE <<LUA
  -- Rebuild cached pages that reference this SKU before the
  -- next request arrives. cachee.prewarm and the page key
  -- patterns here are illustrative.
  cachee.prewarm("page:product:" .. event.key)
  cachee.prewarm("page:search:*:" .. event.key)
LUA

Rate Limit Compliance Logging

When a rate-limit key is written and the value exceeds the threshold, log the event to your compliance system. Every rate limit breach is captured at the moment it happens, not when a background job eventually notices.

TRIGGER REGISTER ratelimit:* ON_WRITE <<LUA
  if tonumber(event.value) > tonumber(event.metadata.limit) then
    cachee.log("compliance", {
      key = event.key,
      value = event.value,
      limit = event.metadata.limit,
      timestamp = event.timestamp
    })
  end
LUA

Eviction Monitoring

When your eviction policy drops a key, log which key it was and what its recomputation cost was. Over time, this gives you a precise picture of whether your cache is sized correctly and whether your cost weights need adjustment.
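A sketch of such a trigger, assuming the eviction event is named ON_EVICT and that recomputation cost is carried in the key's metadata (both the event name and the recompute_cost field follow this post's naming pattern rather than a documented API):

```
TRIGGER REGISTER *:* ON_EVICT <<LUA
  -- Record which key was evicted and what it would cost to
  -- rebuild. The "evictions" log name and recompute_cost
  -- metadata field are assumptions for this sketch.
  cachee.log("evictions", {
    key = event.key,
    recompute_cost = event.metadata.recompute_cost,
    timestamp = event.timestamp
  })
LUA
```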

Why Redis Keyspace Notifications Are Not Enough

Redis offers keyspace notifications — a pub/sub mechanism that fires when keys change. On paper, it sounds similar. In practice, the differences are fundamental:

  • Fire-and-forget: Redis keyspace notifications use pub/sub. If no subscriber is listening when the event fires, the event is lost. There is no replay, no queue, no guarantee. Triggers are inline and guaranteed — they execute as part of the cache operation itself.
  • No programmability: Redis notifications tell you that something happened. They do not let you run logic in response at the cache layer. You receive an event, then you must make a separate network call to do something about it. Triggers execute Lua logic at the point of the event, with zero additional round trips.
  • Network overhead: Receiving a keyspace notification requires a subscriber connection, a pub/sub channel, and a message decode — all over the network. Triggers run in-process.
  • Drop under load: Redis pub/sub is best-effort. Under heavy write load, notifications are dropped. For compliance-critical events like rate limit breaches or session expirations, “best effort” is not acceptable. Triggers execute synchronously and cannot be silently dropped.
The difference: Redis keyspace notifications tell you something happened — if you are listening, and if the message is not dropped. Cachee triggers do something when it happens — guaranteed, inline, programmable.

Composition With Other Cachee Primitives

Triggers compose with every other Cachee feature. When CDC auto-invalidation deletes a key, the ON_DELETE trigger fires. When a dependency graph cascade invalidates a derived key, ON_DELETE triggers fire for every key in the cascade. When coherence propagates an invalidation to another instance, the trigger fires on the receiving instance too.

This means a single database row change can trigger a CDC invalidation, which cascades through the dependency graph, which propagates via coherence, and at each step, triggers execute custom logic — webhooks, logging, pre-warming, analytics. The entire chain is automatic, sub-millisecond, and requires zero orchestration code in your application.
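For example, a single ON_DELETE trigger registered on derived keys fires at every step of that chain, whether the delete came from CDC, a dependency-graph cascade, or a coherence-propagated invalidation (a sketch; the "invalidation-analytics" log name is illustrative):

```
TRIGGER REGISTER derived:* ON_DELETE <<LUA
  -- Fires for direct deletes, cascade invalidations, and
  -- coherence-propagated invalidations alike.
  cachee.log("invalidation-analytics", {
    key = event.key,
    timestamp = event.timestamp
  })
LUA
```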

Your cache is not just a data store anymore. It is an event bus with guaranteed delivery, sub-microsecond execution, and programmable responses to every lifecycle event.
