Register Lua functions that fire on cache events. Session expires — push a webhook. Inventory changes — pre-warm pages. Rate limit hit — log to compliance. All inline, all sub-microsecond, all without leaving the cache layer.
Every team that needs to react to cache state changes ends up building the same fragile stack. A cron job polls Redis every 5 seconds to check if sessions expired. A Lambda function subscribes to keyspace notifications that silently drop under load. A polling loop in your application code burns CPU cycles checking whether an inventory count changed. A SQS queue sits between your cache and your business logic, adding 50-200ms of latency and another service to monitor.
These solutions share the same fundamental flaw: they are external to the cache. They require network hops, serialization, separate deployment pipelines, and their own failure modes. Redis keyspace notifications are Pub/Sub messages — fire-and-forget, with no delivery guarantee. If your subscriber falls behind or disconnects, events disappear. Cron jobs have second-level granularity at best and minute-level in practice. Lambda cold starts add hundreds of milliseconds to what should be a microsecond operation.
Cache triggers eliminate all of this. The trigger function runs inside the cache process, fires inline with the cache operation, and executes in sub-microsecond time. No infrastructure to manage. No events to lose. No latency to add. See how this compares to traditional architectures in our infrastructure comparison.
Every meaningful state change in a cache maps to one of five events. Each event passes the key, value, and metadata to your Lua function. Register a trigger with a single TRIGGER SET command.
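A registration might look like the sketch below. `TRIGGER SET` and the `(key, value, meta)` arguments come from this page; the event name `EXPIRE`, the key pattern, and the inline-script syntax are illustrative assumptions, not confirmed syntax.

```lua
-- Hypothetical TRIGGER SET registration: fire on session expiry.
-- TRIGGER SET expired-sessions ON EXPIRE session:* DO
function(key, value, meta)
  -- cache.webhook is part of the documented trigger API;
  -- the URL and payload shape are invented for illustration.
  cache.webhook("https://ops.example.com/session-expired", { key = key })
end
```

The same pattern applies to the other four events: pick an event, a key pattern, and a Lua function, and register them in one command.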
Cache triggers are not an external service bolted onto the cache. They are the cache layer. Every trigger runs inside the same process that handles your GET and SET operations, with zero serialization and zero network hops.
Each trigger is a Lua function compiled at registration time and cached as bytecode. When a cache operation matches a registered trigger pattern, the bytecode executes in a pre-allocated Lua state — no interpreter startup, no JIT warmup. The Lua 5.4 runtime is embedded directly into the Cachee process with a shared-nothing execution model.
Triggers have access to a restricted API surface: cache.get, cache.set, cache.delete, cache.warm, cache.webhook, cache.emit, and cache.log. No filesystem. No raw sockets. No os.execute.
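As a sketch of what a trigger body can do within that API surface, here is the rate-limit compliance case from the opening. `cache.log` and `cache.emit` are named in the restricted API above; the `(key, value, meta)` signature and the `meta.timestamp` field are assumptions for illustration.

```lua
-- Hypothetical trigger body: log rate-limit hits for compliance.
-- Only the restricted cache.* API is callable inside the sandbox;
-- anything touching the filesystem or raw sockets is unavailable.
function on_rate_limit(key, value, meta)
  cache.log("compliance", {
    key    = key,
    count  = value,
    hit_at = meta.timestamp,  -- assumed metadata field
  })
  cache.emit("rate-limit-exceeded", key)
end
```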
Every trigger runs inside a memory-limited, CPU-limited sandbox. The default execution timeout is 100 microseconds — configurable per trigger up to 1 millisecond. If a trigger exceeds its timeout, it is terminated and the cache operation completes normally. No trigger can block or slow down the cache hot path.
Memory allocation per trigger invocation is capped at 64KB. Trigger patterns are stored in a prefix trie that resolves in O(k) time where k is key length, not number of registered triggers. You can register thousands of triggers without measurable impact on cache throughput. Webhook calls initiated by triggers are dispatched asynchronously after the cache operation returns.
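The O(k) claim is easy to see from the data structure: lookup cost depends only on the length of the key being matched, never on how many patterns are registered. A minimal sketch of prefix-trie resolution (not Cachee source code):

```lua
-- Illustrative prefix trie: match cost is O(k) in key length,
-- independent of the number of registered trigger patterns.
local Trie = {}
Trie.__index = Trie

function Trie.new()
  return setmetatable({ root = {} }, Trie)
end

function Trie:insert(prefix, trigger)
  local node = self.root
  for i = 1, #prefix do
    local c = prefix:sub(i, i)
    node[c] = node[c] or {}
    node = node[c]
  end
  node.trigger = trigger
end

-- Returns the trigger for the longest registered prefix of `key`, if any.
function Trie:match(key)
  local node, found = self.root, nil
  for i = 1, #key do
    node = node[key:sub(i, i)]
    if not node then break end
    found = node.trigger or found
  end
  return found
end

local t = Trie.new()
t:insert("session:", "expiry-webhook")
t:insert("inventory:", "prewarm-pages")
print(t:match("session:42"))  -- expiry-webhook
```

Registering one more trigger adds nodes to the trie but changes nothing about the per-operation walk, which is why thousands of triggers have no measurable throughput cost.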
Every use case below was previously handled by external infrastructure — cron jobs, Lambda functions, message queues, or polling loops. Cache triggers collapse all of that into a single Lua function registered at the cache layer.
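For example, the inventory pre-warming case from the opening collapses into one function. `cache.warm` is part of the documented trigger API; the key-naming scheme and the `meta.category` field below are invented for illustration.

```lua
-- Hypothetical trigger: when an inventory count changes, pre-warm the
-- pages that render it, instead of polling for changes from app code.
function on_inventory_set(key, value, meta)
  local sku = key:match("^inventory:(.+)$")
  if sku and tonumber(value) ~= nil then
    cache.warm("page:product:" .. sku)
    cache.warm("page:category:" .. (meta.category or "all"))
  end
end
```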
Side-by-side comparison of cache triggers against the infrastructure they replace. Every row is a reason to stop building external event pipelines for cache state changes.
| Capability | Cron / Lambda / Polling | Redis Keyspace Notify | Cachee Triggers |
|---|---|---|---|
| Execution Latency | 100ms - 5s | ~1ms (when delivered) | < 1µs (inline) |
| Delivery Guarantee | At-least-once (with retry) | Fire-and-forget (lossy) | Exactly-once (synchronous) |
| Infrastructure Required | Lambda + EventBridge + SQS | Redis + Subscriber process | None (in-process) |
| Event Granularity | Seconds to minutes | Per-operation | Per-operation + old/new values |
| Logic Execution | External compute | External subscriber | Inline Lua (sandboxed) |
| Cost per Event | $0.0000002+ (Lambda) | Subscriber infra costs | $0 (included) |
| Access to Old Value | Not available | Not available | Full old + new value diff |
| Pattern Matching | Manual key filtering | Glob patterns (limited) | Prefix trie (O(k) lookup) |
Cache triggers use the same connection as your cache commands. No separate SDK, no configuration files, no deployment pipeline. One command to register, one command to list, one command to remove.
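A session might look like the following. The `cachee trigger` command and its four capabilities (register, list, dry-run test, remove) come from this page; the exact subcommand names and flags are illustrative assumptions.

```shell
# Hypothetical CLI session; subcommand names are illustrative.
cachee trigger set expired-sessions ./triggers/expired_sessions.lua
cachee trigger test expired-sessions --dry-run --key "session:42"
cachee trigger list
cachee trigger rm expired-sessions
```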
The `cachee trigger` CLI command handles registration, listing, dry-run testing, and removal. Script files can be versioned in your repo.

Cache triggers are available on all Cachee plans. Register your first trigger in under 30 seconds. No infrastructure to provision, no external services to wire up.