Event-Driven Cache

Your Cache Doesn't Just Store Data.
It Reacts To It.

Register Lua functions that fire on cache events. Session expires — push a webhook. Inventory changes — pre-warm pages. Rate limit hit — log to compliance. All inline, all sub-microsecond, all without leaving the cache layer.

5 Event Types · Lua 5.4 Scripting Engine · <1µs Execution Time · 0 External Triggers
The Problem

The Infrastructure Cache Triggers Replace

Every team that needs to react to cache state changes ends up building the same fragile stack. A cron job polls Redis every 5 seconds to check whether sessions have expired. A Lambda function subscribes to keyspace notifications that silently drop under load. A polling loop in your application code burns CPU cycles checking whether an inventory count changed. An SQS queue sits between your cache and your business logic, adding 50-200ms of latency and another service to monitor.

These solutions share the same fundamental flaw: they are external to the cache. They require network hops, serialization, separate deployment pipelines, and their own failure modes. Redis keyspace notifications are Pub/Sub messages — fire-and-forget, with no delivery guarantee. If your subscriber falls behind or disconnects, events disappear. Cron jobs have second-level granularity at best and minute-level in practice. Lambda cold starts add hundreds of milliseconds to what should be a microsecond operation.

Cron Jobs
Second-level granularity at best. Most teams run every 30-60s. That is 30-60 seconds of stale state your application cannot see.
Replaced by ON_EXPIRE
Lambda Functions
Cold starts add 100-500ms. You pay per invocation. And you need a bridge service (EventBridge, SQS) just to connect cache to compute.
Replaced by inline Lua
🔔
Keyspace Notifications
Redis Pub/Sub is fire-and-forget. Under load, events drop silently. No replay, no backpressure, no delivery guarantee.
Replaced by ON_EVICT
🔄
Polling Loops
Application code that checks cache state on an interval. Burns CPU, adds latency, and still misses events between polls.
Replaced by ON_WRITE

Cache triggers eliminate all of this. The trigger function runs inside the cache process, fires inline with the cache operation, and executes in sub-microsecond time. No infrastructure to manage. No events to lose. No latency to add. See how this compares to traditional architectures in our infrastructure comparison.

Trigger Events

Five Trigger Events. Every Cache Moment Covered.

Every meaningful state change in a cache maps to one of five events. Each event passes the key, value, and metadata to your Lua function. Register a trigger with a single TRIGGER SET command.

ON_WRITE
Fires When a Key Is Created or Updated
Every SET, MSET, or INCR operation triggers ON_WRITE. Use it to synchronize downstream systems, push real-time analytics, or pre-warm related keys the moment data changes. The trigger receives both the old value (if any) and the new value, so you can detect transitions — inventory crossing a threshold, a user changing roles, a feature flag toggling.
// Pre-warm product pages when inventory count changes
TRIGGER SET inventory:* ON_WRITE `
  local old = tonumber(trigger.old_value) or 0
  local new = tonumber(trigger.value)
  if old > 10 and new <= 10 then
    cache.warm("page:product:" .. trigger.key_suffix)
    cache.emit("inventory.low", { sku = trigger.key_suffix, count = new })
  end
`
ON_EVICT
Fires When the Eviction Policy Removes a Key
When memory pressure forces the cache to evict a key, ON_EVICT fires before the key is removed. This is your chance to persist critical data, log eviction patterns for capacity planning, or promote the key to a secondary tier. Unlike Redis keyspace notifications, this trigger is guaranteed to execute — it runs synchronously inside the eviction path.
// Persist evicted session data to durable storage
TRIGGER SET session:* ON_EVICT `
  if trigger.value.last_active > (os.time() - 300) then
    cache.webhook("https://api.internal/session/persist", {
      session_id = trigger.key_suffix,
      data = trigger.value,
      reason = "eviction"
    })
  end
`
ON_EXPIRE
Fires When a Key's TTL Reaches Zero
TTL-based expiration is the most common cache lifecycle event, and the hardest to observe externally. Redis does not guarantee when expired keys are actually cleaned up — lazy expiration means a key can linger for minutes after its TTL. With ON_EXPIRE, your trigger fires at the exact moment of expiration. Use it for session logout flows, token revocation, or re-fetching data from origin before the next request hits a cold cache.
// Push webhook when user session expires
TRIGGER SET auth:session:* ON_EXPIRE `
  cache.webhook("https://api.internal/auth/session-expired", {
    user_id = trigger.metadata.user_id,
    session_id = trigger.key_suffix,
    expired_at = trigger.timestamp
  })
  cache.delete("permissions:" .. trigger.metadata.user_id)
`
ON_DELETE
Fires on Explicit Key Deletion
ON_DELETE distinguishes intentional deletion from eviction and expiration. When your application explicitly calls DEL or UNLINK, this trigger fires with the full key-value pair before removal. Ideal for compliance audit trails — every deletion is logged with the actor, timestamp, and the data that was removed. Also useful for cascading deletes across related keys.
// Compliance audit log on explicit deletion
TRIGGER SET pii:* ON_DELETE `
  cache.log("compliance", {
    action = "delete",
    key = trigger.key,
    deleted_by = trigger.metadata.caller,
    timestamp = trigger.timestamp,
    data_hash = cache.sha256(trigger.value)
  })
`
ON_READ
Fires on Every Cache Hit for a Key
ON_READ is the lightest trigger — it fires on GET operations without modifying the read path. Use it for real-time access counting, rate limit enforcement, or streaming analytics. Because it runs inline, you get exact hit counts without the sampling error of external monitoring tools. Apply it selectively to high-value key patterns to avoid overhead on bulk reads.
// Real-time rate limiting with compliance logging
TRIGGER SET api:ratelimit:* ON_READ `
  local count = cache.incr("rl:count:" .. trigger.key_suffix)
  if count > 1000 then
    cache.log("compliance", {
      event = "rate_limit_exceeded",
      client = trigger.key_suffix,
      count = count
    })
  end
`
Architecture

How It Works Under the Hood

Cache triggers are not an external service bolted onto the cache. They are the cache layer. Every trigger runs inside the same process that handles your GET and SET operations, with zero serialization and zero network hops.

Trigger Execution Pipeline
Client
SET key
Cache Engine
Write
Pattern Match
Trie Lookup
Trigger
Lua 5.4
Response
OK
Total Trigger Overhead
< 1µs
Sandboxed Lua execution, zero allocation, inline with cache operation

Embedded Lua 5.4 Runtime

Each trigger is a Lua function compiled at registration time and cached as bytecode. When a cache operation matches a registered trigger pattern, the bytecode executes in a pre-allocated Lua state — no interpreter startup, no JIT warmup. The Lua 5.4 runtime is embedded directly into the Cachee process with a shared-nothing execution model.
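The compile-at-registration idea can be sketched in a few lines of JavaScript (an illustrative analogy, not Cachee source — `registerTrigger` and `fire` are hypothetical names, and `new Function` stands in for the cached Lua bytecode):

```javascript
// The trigger body is "compiled" exactly once, at registration time.
// Every matching event reuses the compiled function, so nothing is
// parsed on the hot path — only a map lookup and a call.
const compiled = new Map();

function registerTrigger(pattern, source) {
  // Compile once; in Cachee this step produces Lua 5.4 bytecode.
  compiled.set(pattern, new Function('trigger', source));
}

function fire(pattern, event) {
  const fn = compiled.get(pattern); // no re-parse, just lookup + call
  return fn ? fn(event) : undefined;
}

registerTrigger('inventory:*', 'return trigger.value <= 10 ? "low" : "ok";');
console.log(fire('inventory:*', { value: 7 })); // "low"
```

The same structure explains why there is no interpreter startup per event: registration pays the parsing cost, and the hot path only executes what is already compiled.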

Triggers have access to a restricted API surface: cache.get, cache.set, cache.delete, cache.warm, cache.webhook, cache.emit, and cache.log. No filesystem. No raw sockets. No os.execute.

Sandbox and Timeout Protection

Every trigger runs inside a memory-limited, CPU-limited sandbox. The default execution timeout is 100 microseconds — configurable per trigger up to 1 millisecond. If a trigger exceeds its timeout, it is terminated and the cache operation completes normally. No trigger can block or slow down the cache hot path.
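The failure-isolation guarantee can be approximated in plain JavaScript (a hedged sketch — Cachee enforces the budget preemptively inside the Lua VM, which a try/catch guard can only imitate; `setWithTrigger` is a hypothetical name):

```javascript
// The cache write always completes, whatever the trigger does.
// An error or a blown budget terminates the trigger, never the write.
function setWithTrigger(store, key, value, trigger) {
  store.set(key, value);      // the cache operation itself
  try {
    trigger({ key, value });  // sandboxed side effect
  } catch (err) {
    // A failed or over-budget trigger is terminated and logged;
    // the cache operation is unaffected.
  }
  return 'OK';
}

const store = new Map();
const result = setWithTrigger(store, 'user:1', 'alice', () => {
  throw new Error('trigger exceeded its budget');
});
console.log(result, store.get('user:1')); // OK alice
```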

Memory allocation per trigger invocation is capped at 64KB. Trigger patterns are stored in a prefix trie that resolves in O(k) time where k is key length, not number of registered triggers. You can register thousands of triggers without measurable impact on cache throughput. Webhook calls initiated by triggers are dispatched asynchronously after the cache operation returns.
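A toy version of the prefix-trie lookup shows why cost scales with key length rather than trigger count (hypothetical code, not Cachee's implementation; it handles only prefix-glob patterns like `session:*`):

```javascript
// Patterns are stored by literal prefix. Matching walks the trie one
// character of the key at a time, so lookup is O(k) in key length,
// no matter how many thousands of patterns are registered.
class TriggerTrie {
  constructor() { this.root = { children: {}, triggers: [] }; }

  // Register a prefix-glob pattern such as "inventory:*"
  register(pattern, trigger) {
    const prefix = pattern.endsWith('*') ? pattern.slice(0, -1) : pattern;
    let node = this.root;
    for (const ch of prefix) {
      node.children[ch] ??= { children: {}, triggers: [] };
      node = node.children[ch];
    }
    node.triggers.push(trigger);
  }

  // Collect every trigger whose prefix covers the key
  match(key) {
    const hits = [];
    let node = this.root;
    for (const ch of key) {
      node = node.children[ch];
      if (!node) break;
      hits.push(...node.triggers);
    }
    return hits;
  }
}

const trie = new TriggerTrie();
trie.register('inventory:*', 'warm-product-page');
trie.register('session:*', 'expire-webhook');
console.log(trie.match('inventory:sku-123')); // ['warm-product-page']
```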

// Register a trigger with timeout and memory limits
TRIGGER SET user:profile:* ON_WRITE TIMEOUT 200us MAXMEM 32kb `
  -- This trigger fires on every profile update
  -- Pre-warm the user's dashboard and feed caches
  local user_id = trigger.key_suffix
  cache.warm("dashboard:" .. user_id)
  cache.warm("feed:" .. user_id)

  -- Emit event for downstream consumers
  cache.emit("user.profile.updated", {
    user_id = user_id,
    fields_changed = trigger.diff_keys,
    timestamp = trigger.timestamp
  })
`

// List all registered triggers
TRIGGER LIST

// Remove a specific trigger
TRIGGER DEL user:profile:* ON_WRITE

// Dry-run a trigger without executing side effects
TRIGGER TEST user:profile:* ON_WRITE WITH '{"name":"test"}'
Use Cases

What Teams Build with Cache Triggers

Every use case below was previously handled by external infrastructure — cron jobs, Lambda functions, message queues, or polling loops. Cache triggers collapse all of that into a single Lua function registered at the cache layer.

01
Session Expiry Notifications
When an auth session expires, ON_EXPIRE pushes a webhook to your identity provider to revoke tokens, update audit logs, and trigger a logout event on connected devices. No cron job checking session TTLs every 30 seconds. The notification fires at the exact moment of expiry.
02
Inventory Threshold Alerts
ON_WRITE watches inventory counts and fires when stock crosses a threshold. Pre-warm the product page with an "almost sold out" variant, notify the fulfillment system, and update the storefront — all before the next customer request arrives.
03
Compliance Audit Logging
ON_DELETE logs every explicit deletion of PII-tagged keys with the caller identity, timestamp, and a SHA-256 hash of the deleted data. Meets GDPR Article 17 audit requirements without a separate logging pipeline. Immutable, inline, zero-lag.
04
Pre-Warming Related Data
When a user profile is written, ON_WRITE pre-warms their dashboard, feed, and notification caches. The next page load hits warm cache across every widget. Combine with predictive caching for full-spectrum pre-warming.
05
Real-Time Analytics Events
ON_READ streams access patterns to your analytics pipeline as they happen. No sampling, no batch windows. Every cache hit on a tracked key generates a structured event with exact timing, value size, and access metadata.
06
Cascading Cache Invalidation
ON_WRITE on a parent key automatically invalidates or refreshes dependent keys. Update a product price and the trigger invalidates the cart cache, the checkout cache, and the search index entry — all within the same microsecond window.
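The cascading-invalidation pattern in use case 06 reduces to a dependency map consulted on write. A minimal sketch, with hypothetical key names and an application-side dependency table (not a Cachee API):

```javascript
// Writing a parent key invalidates every key that depends on it.
const store = new Map([
  ['product:42', '{"price":19}'],
  ['cart:session-9', 'stale'],
  ['checkout:session-9', 'stale'],
]);

// Which dependent keys each parent key invalidates
const deps = { 'product:42': ['cart:session-9', 'checkout:session-9'] };

// ON_WRITE-style handler: drop everything derived from the written key
function onWrite(key) {
  for (const dep of deps[key] ?? []) store.delete(dep);
}

store.set('product:42', '{"price":17}'); // price update...
onWrite('product:42');                   // ...cascades to dependents
console.log(store.has('cart:session-9')); // false
```

In the trigger version, the same map would live in cache keys and the deletes would be `cache.delete` calls inside the Lua body, running before the next read can observe stale dependents.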
Comparison

Cache Triggers vs External Alternatives

Side-by-side comparison of cache triggers against the infrastructure they replace. Every row is a reason to stop building external event pipelines for cache state changes.

Capability              | Cron / Lambda / Polling    | Redis Keyspace Notify      | Cachee Triggers
Execution Latency       | 100ms - 5s                 | ~1ms (when delivered)      | < 1µs (inline)
Delivery Guarantee      | At-least-once (with retry) | Fire-and-forget (lossy)    | Exactly-once (synchronous)
Infrastructure Required | Lambda + EventBridge + SQS | Redis + Subscriber process | None (in-process)
Event Granularity       | Seconds to minutes         | Per-operation              | Per-operation + old/new values
Logic Execution         | External compute           | External subscriber        | Inline Lua (sandboxed)
Cost per Event          | $0.0000002+ (Lambda)       | Subscriber infra costs     | $0 (included)
Access to Old Value     | Not available              | Not available              | Full old + new value diff
Pattern Matching        | Manual key filtering       | Glob patterns (limited)    | Prefix trie (O(k) lookup)
Integration

Register Your First Trigger in 30 Seconds

Cache triggers use the same connection as your cache commands. No separate SDK, no configuration files, no deployment pipeline. One command to register, one command to list, one command to remove.

// Connect with the standard Cachee SDK
import { Cachee } from '@cachee/sdk';

const cache = new Cachee({ apiKey: 'ck_live_your_key_here' });

// Register a trigger programmatically
await cache.trigger('session:*', 'ON_EXPIRE', `
  cache.webhook("https://api.internal/session-expired", {
    session_id = trigger.key_suffix,
    user_id = trigger.metadata.user_id,
    expired_at = trigger.timestamp
  })
`);

// Or use the CLI
// $ cachee trigger set "session:*" ON_EXPIRE --script ./session-expire.lua

// List active triggers
const triggers = await cache.triggerList();
// [{ pattern: "session:*", event: "ON_EXPIRE", timeout: "100us", created: "..." }]

// Remove a trigger
await cache.triggerDelete('session:*', 'ON_EXPIRE');
SDK Integration
Register, list, test, and remove triggers from JavaScript, Python, Go, or Rust SDKs. Same connection, same auth, same API key as your cache operations.
4 language SDKs
CLI Management
The cachee trigger CLI command handles registration, listing, dry-run testing, and removal. Script files can be versioned in your repo.
Git-friendly workflow
Dashboard Monitoring
The Cachee portal shows trigger execution counts, latency percentiles, error rates, and timeout frequency. Debug failed triggers with full execution logs.
Real-time metrics

Stop Polling Your Cache.
Let Your Cache Call You.

Cache triggers are available on all Cachee plans. Register your first trigger in under 30 seconds. No infrastructure to provision, no external services to wire up.

Start Free Trial · Talk to Enterprise Sales