Zero-Trust Caching: Why Every Cached Value Should Be Verified
Zero trust says "never trust, always verify." It applies to every network request, every API call, every user session. Except your cache. Your cache is the biggest trust assumption in your entire stack. Every cache hit returns data with zero verification that it has not been tampered with or gone stale relative to its source. Redis returns whatever bytes are stored at a key. No signature check. No freshness guarantee. No integrity verification. Your cache is a trust-everything zone sitting inside a zero-trust architecture.
Consider what this means in practice. You spend months implementing mutual TLS, OAuth 2.0 token validation, API gateway authorization, and service mesh policies. Every request is authenticated and authorized at multiple layers. Then that carefully validated response gets stored in Redis. Every subsequent consumer reads it from cache with no verification whatsoever. The most security-sensitive data in your system -- session tokens, authorization decisions, user permissions, API credentials -- is served from a layer that performs zero integrity checks.
This is not a theoretical concern. Cache poisoning, cached credential attacks, and phantom reads are real attack vectors that exploit the fundamental assumption every cache makes: that stored data is trustworthy. In a zero-trust architecture, that assumption is the gap adversaries walk through.
The Cache Trust Problem
Every traditional cache -- Redis, Memcached, ElastiCache, any in-memory key-value store -- operates on four implicit assumptions. First, the data was correct when it was stored. Second, nobody has modified it since storage. Third, it is still fresh and relevant. Fourth, the source that produced it was authoritative. None of these assumptions are verified on read. The cache simply returns whatever bytes are associated with a key and trusts the consumer to handle everything else.
This trust model creates three categories of real-world attacks that security teams consistently underestimate.
Cached credential attacks. An OAuth token, session token, or API key is cached in Redis with a 30-minute TTL. The user's access is revoked -- they are terminated, their account is compromised, their permissions change. But the cached token remains valid until the TTL expires. For up to 30 minutes, a revoked credential continues to grant full access to every service that checks the cache before hitting the identity provider. This is not a hypothetical scenario. It is the default behavior of every session cache in production today. The cache does not know that the credential has been revoked because nobody tells it, and it has no mechanism to verify credential validity on its own.
Cache poisoning. An attacker gains write access to your cache -- through a compromised application server, a Redis misconfiguration, or an SSRF vulnerability that reaches your cache endpoint. They modify a cached value: a pricing response, an authorization decision, a feature flag, a user profile. Every subsequent reader receives the poisoned value. There is no signature to verify. There is no integrity check. The poisoned value looks identical to a legitimate value because the cache has no concept of authenticity. The attack persists until the TTL expires or the key is manually invalidated. In high-traffic systems, a single poisoned cache entry can affect millions of requests before detection.
Phantom reads. Data expires in the source system but remains in the cache past its logical validity window. A product is discontinued but its cached listing still appears. A configuration change is deployed but the cached configuration serves the old value. A user deletes their account but cached profile data continues to be served to internal systems. The cache has no awareness of the source system's state. It serves stale data as if it were current because it has no mechanism to verify freshness beyond a simple TTL countdown.
The Trust Gap in Zero Trust
Your zero-trust architecture verifies every network request, every API call, every user identity. But the moment a response enters the cache, all verification stops. The cache becomes a trust-everything island in a verify-everything ocean. Every cache hit bypasses the security controls you spent months implementing. If an attacker can poison a single cache entry, they bypass all downstream authentication and authorization for the lifetime of that entry.
Zero Trust Principles Applied to Caching
NIST SP 800-207 defines the core tenets of zero trust architecture. These tenets were designed for network security, identity, and access control. But they apply directly to caching once you recognize that a cache is a data store that serves security-sensitive content to multiple consumers. Here is how each tenet maps to cache-specific implementation.
| NIST SP 800-207 Tenet | Network Implementation | Cache Implementation |
|---|---|---|
| Verify explicitly | Authenticate every request | Verify signature on every cache read |
| Use least privilege access | Role-based access control | Key-type scoping: Owner, Regulator, Auditor |
| Assume breach | Micro-segmentation, monitoring | Detect tampered entries on read, reject and recompute |
| All data sources are resources | Protect internal services equally | Cache entries are protected resources, not trusted stores |
| Access is per-session | No persistent trust | No persistent trust in cached values; verify or re-derive |
| Policy is dynamic | Continuous evaluation | Lifecycle state machine; entries can be revoked mid-TTL |
| Collect and use information | Audit logging | Log every verification result, every rejection, every recomputation |
Verify explicitly means that every cache read should verify the entry's cryptographic signature before returning the value. Not some reads. Not reads for sensitive keys. Every read. The signature proves that the value was written by an authorized source and has not been modified since. Without this verification, you are trusting the storage layer implicitly, which is the exact opposite of zero trust.
Use least privilege means that not all consumers should see all cached data. A cache that returns any value to any caller is a flat-access data store with no access control. Zero-trust caching implements key-type scoping: Owner keys can read and write, Regulator keys can read and audit but not write, Auditor keys can verify integrity without accessing the plaintext value. This maps directly to how zero trust handles identity-based access at the network layer.
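The key-type scoping described above can be sketched in a few lines. This is an illustration of the pattern, not the Cachee API: the role names come from the article, but the function names and the choice to hand Auditors a digest instead of the plaintext are assumptions.

```python
# Hypothetical sketch of key-type scoping: Owner, Regulator, and Auditor
# roles get different views of a cache entry.
import hashlib

PERMISSIONS = {
    "Owner":     {"read", "write", "audit"},
    "Regulator": {"read", "audit"},
    "Auditor":   {"audit"},  # may verify integrity, never read plaintext
}

def read_entry(role: str, value: bytes) -> bytes:
    if "read" not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not read cached values")
    return value

def audit_entry(role: str, value: bytes) -> str:
    # Auditors receive a digest for integrity checks, never the value itself.
    if "audit" not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not audit cached values")
    return hashlib.sha256(value).hexdigest()
```

The point of the structure is that integrity verification and plaintext access are separable capabilities, so an audit path need not widen the read path.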
Assume breach means that if an attacker modifies a cache entry, the system should detect the modification on the next read and reject the entry. The cache does not assume its contents are safe. It verifies on every access. When verification fails, the entry is discarded and the value is recomputed from the authoritative source. The breach is contained to a single failed read, not propagated to every consumer.
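The verify-on-read, reject-and-recompute loop can be sketched as follows. HMAC-SHA256 stands in for the real signature scheme and `recompute` stands in for the authoritative source; the names and storage layout are illustrative, not Cachee's implementation.

```python
# Minimal verify-on-read sketch: every read re-checks the tag; a failed
# check discards the entry and rebuilds it from the authoritative source.
import hmac, hashlib

KEY = b"demo-signing-key"
store: dict[str, tuple[bytes, bytes]] = {}  # key -> (value, tag)

def put(key: str, value: bytes) -> None:
    tag = hmac.new(KEY, key.encode() + value, hashlib.sha256).digest()
    store[key] = (value, tag)

def get(key: str, recompute) -> bytes:
    value, tag = store[key]
    expected = hmac.new(KEY, key.encode() + value, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        # Assume breach: reject the tampered entry, rebuild from source.
        fresh = recompute(key)
        put(key, fresh)
        return fresh
    return value

put("price:42", b"100")
value, tag = store["price:42"]
store["price:42"] = (b"0", tag)  # attacker flips the cached bytes
assert get("price:42", lambda k: b"100") == b"100"  # rejected, recomputed
```

Note that the breach is contained exactly as the paragraph describes: the tampered bytes never leave `get`, and the store ends up holding a freshly signed entry.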
What a Zero-Trust Cache Looks Like
A zero-trust cache differs from a traditional cache in four fundamental ways. Every entry is signed at write time. Every read can verify independently. A computation fingerprint binds the result to its inputs. And a lifecycle state machine governs entry validity beyond simple TTL expiration.
Signed entries. When a value is written to the cache, it is signed using three post-quantum cryptographic families: ML-DSA-65 (lattice-based, FIPS 204), FALCON-512 (NTRU lattice-based), and SLH-DSA (stateless hash-based, FIPS 205). Three independent mathematical assumptions. An attacker must break all three families simultaneously to forge a valid cache entry. A single signature family provides integrity. Three families provide integrity that survives a breakthrough in any one mathematical domain.
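The "break all three families" property is structural, and the structure is easy to show. In this sketch, three independent HMAC keys stand in for ML-DSA-65, FALCON-512, and SLH-DSA (real post-quantum libraries are outside the standard library); what matters is the shape: sign with every family, reject unless every family verifies.

```python
# Sketch of the require-all rule across three independent signature
# families. HMAC is a stand-in for the PQ schemes named in the text.
import hmac, hashlib

FAMILIES = {
    "ML-DSA-65":  b"key-a",
    "FALCON-512": b"key-b",
    "SLH-DSA":    b"key-c",
}

def sign(value: bytes) -> dict[str, bytes]:
    return {name: hmac.new(k, value, hashlib.sha256).digest()
            for name, k in FAMILIES.items()}

def verify(value: bytes, sigs: dict[str, bytes]) -> bool:
    # require_all semantics: a forgery must defeat every family at once.
    return all(
        hmac.compare_digest(sigs.get(name, b""),
                            hmac.new(k, value, hashlib.sha256).digest())
        for name, k in FAMILIES.items()
    )
```

Because `verify` demands agreement from all three families, a breakthrough against any single scheme leaves the other two checks standing.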
Independent verification. Every cache read can verify the entry's signature without contacting the original writer, without accessing a central authority, and without trusting any intermediate system. The verification is self-contained: the entry carries its signature, and the reader holds the verification key. This is the same principle that makes TLS certificate verification work -- the verifier does not need to contact the CA on every connection. The verification cost is 31 nanoseconds per read, which is faster than a Redis network round trip by three to four orders of magnitude.
Computation fingerprint. The signature alone proves that the value has not been modified. But it does not prove that the value is still the correct answer to the original query. A computation fingerprint binds the cached result to its inputs. If the inputs change -- the database row is updated, the upstream API returns a different response, the configuration is modified -- the fingerprint invalidates. The cached value may be intact and properly signed, but if the inputs have changed, the fingerprint fails and the entry is treated as stale. This eliminates phantom reads at the cryptographic level.
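The fingerprint idea reduces to binding the cached result to a hash of its canonicalized inputs. A minimal sketch, assuming JSON-serializable inputs and illustrative field names:

```python
# Sketch of a computation fingerprint: the cached result is bound to a
# hash of its inputs, so an intact, well-signed value still reads as
# stale the moment the inputs drift.
import hashlib, json

def fingerprint(inputs: dict) -> str:
    canonical = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

entry = {
    "value": b"total=119.00",
    "fingerprint": fingerprint({"price": 100, "tax_rate": 0.19}),
}

def read(entry: dict, current_inputs: dict):
    # Value intact but inputs changed -> phantom read; treat as absent.
    if entry["fingerprint"] != fingerprint(current_inputs):
        return None
    return entry["value"]
```

The signature answers "is this the value that was written?"; the fingerprint answers the separate question "is this still the answer to the current query?".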
Lifecycle state machine. Traditional caches have two states: present and absent. A key either exists with a TTL or it does not. A zero-trust cache implements a full lifecycle: Active, Stale, Revoked, Expired, and Revalidating. An entry can be moved to the Revoked state immediately, regardless of its remaining TTL. An entry in the Stale state can be served with a warning header while revalidation occurs in the background. An entry in the Revalidating state blocks concurrent recomputation to prevent cache stampedes. The lifecycle state machine gives the system explicit control over entry validity that goes far beyond "is the TTL still positive."
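The five states and their legal transitions can be made explicit as a small state machine. The transition table below is an illustrative reading of the paragraph, not a specification; the property it demonstrates is that Revoked overrides any remaining TTL and is a dead end.

```python
# Sketch of the Active/Stale/Revoked/Expired/Revalidating lifecycle.
from enum import Enum, auto

class State(Enum):
    ACTIVE = auto()
    STALE = auto()
    REVOKED = auto()
    EXPIRED = auto()
    REVALIDATING = auto()

ALLOWED = {
    (State.ACTIVE, State.STALE),
    (State.ACTIVE, State.REVOKED),      # revocation ignores remaining TTL
    (State.ACTIVE, State.EXPIRED),
    (State.STALE, State.REVALIDATING),  # serve-stale while rebuilding
    (State.STALE, State.REVOKED),
    (State.REVALIDATING, State.ACTIVE),
    (State.REVALIDATING, State.EXPIRED),
}

class Entry:
    def __init__(self, value: bytes):
        self.value, self.state = value, State.ACTIVE

    def transition(self, new: State) -> None:
        if (self.state, new) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new}")
        self.state = new

    def read(self):
        # Only Active entries (and Stale ones, served with a warning)
        # are visible to readers; everything else reads as absent.
        return self.value if self.state in (State.ACTIVE, State.STALE) else None
```

Notice that there is deliberately no edge out of `REVOKED`: once an entry is revoked, only a fresh, re-signed write can bring the key back.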
Verification Modes
Zero-trust caching does not require a single verification strategy. Different workloads have different risk profiles and performance budgets. Cachee provides four verification modes that allow teams to tune the tradeoff between security and throughput.
- AlwaysVerify: Full signature verification on every read. Maximum security. 31ns per read including verification.
- TrustCached: Verify once when the entry is loaded, trust within a configurable window. Lowest overhead after first verification.
- Probabilistic: Verify a configurable percentage of reads (default 10%). Catches tampering statistically. Reduces verification overhead by 90% while maintaining detection capability.
- AgeWeighted: Verification probability increases as the entry ages. Fresh entries are trusted more. Older entries are verified more frequently. Balances the fact that older entries have had more time to be tampered with.
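The four modes above boil down to a per-read decision function. This sketch is an assumption about how such a decision could look; the thresholds, the linear age curve, and the parameter names are made up for illustration.

```python
# Hypothetical per-read decision: should this read verify the signature?
import random

def should_verify(mode: str, age_secs: float, *,
                  rate: float = 0.10, trust_window: float = 60.0,
                  rng=random.random) -> bool:
    if mode == "AlwaysVerify":
        return True
    if mode == "TrustCached":
        # Trust within the window; re-verify once it has elapsed.
        return age_secs >= trust_window
    if mode == "Probabilistic":
        return rng() < rate
    if mode == "AgeWeighted":
        # Probability ramps from ~0 for fresh entries toward 1 with age.
        p = min(1.0, age_secs / 300.0)
        return rng() < p
    raise ValueError(f"unknown mode: {mode}")
```

Injecting `rng` keeps the probabilistic branches testable; in production the default source of randomness would be used.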
Cache Poisoning Is Impossible When Entries Are Signed
Walk through the attack scenario. An attacker gains write access to your cache. This can happen through a compromised application server, a misconfigured security group that exposes your Redis port, an SSRF vulnerability, or a supply chain attack on a dependency that has cache write access. The attacker modifies a cached value -- an authorization decision that grants admin access, a pricing response that sets all items to zero, a feature flag that disables rate limiting.
In Redis: The modified value is served to every reader. There is no mechanism to detect the modification. The cache returns bytes. It does not know whether those bytes are authentic. Every consumer trusts the value because the cache returned it. The poisoned value propagates through the system until the TTL expires or an engineer manually discovers the anomaly. Depending on the TTL and the traffic volume, millions of requests may be served the poisoned value.
In Cachee: The next reader requests the value. Before returning it, Cachee verifies the entry's cryptographic signature. The attacker modified the value but could not forge a valid signature across three post-quantum families. The signature verification fails. The entry is rejected. Cachee logs the verification failure with full context: the key, the timestamp, the expected signature, the actual bytes. The system falls back to recomputation from the authoritative source. A new, properly signed entry replaces the poisoned one. The attack is detected on the very first read after the modification. Zero poisoned responses are served.
This is not defense in depth. This is defense at the point of access. The signature makes cache poisoning detectable and rejectable at the exact moment a consumer attempts to read the poisoned value. There is no window of vulnerability between the poisoning and the detection. The detection is the read.
Poisoning Detection Is Instant
In a signed cache, the time between cache poisoning and detection is exactly one read. Not one monitoring cycle. Not one alert threshold. One read. The first consumer that requests the poisoned value triggers signature verification, which fails, which rejects the entry, which logs the incident, which recomputes from source. The attacker's window is zero served requests.
Cached Credential Attacks: The Silent Breach
Cached credential attacks are the most dangerous cache vulnerability because they are invisible to monitoring systems. The cache is behaving correctly -- it is returning a valid, unexpired entry. The problem is that the entry represents a credential that has been revoked in the source system but remains active in the cache.
Here is the typical scenario. A user authenticates and receives an OAuth access token. The token is cached in Redis with a 30-minute TTL for performance -- hitting the identity provider on every request adds 5-15 milliseconds of latency. The user's account is compromised and the security team revokes the token at the identity provider. But the cached token in Redis still has 22 minutes of TTL remaining. For those 22 minutes, any service that checks the cache before the identity provider will accept the revoked credential as valid. The attacker has 22 minutes of authenticated access after the credential was explicitly revoked.
This is not a bug. This is how every cache works. The cache has no mechanism to receive revocation signals from the identity provider. The TTL was set at write time and counts down independently of the source system's state. The only mitigation in a traditional cache is to reduce the TTL, which trades security against performance: a 30-second TTL closes the window but means you hit the identity provider on nearly every request, which eliminates the performance benefit of caching.
In Cachee: The lifecycle state machine solves this without sacrificing performance. When a credential is revoked at the identity provider, an invalidation signal moves the cached entry from the Active state to the Revoked state. This is a state transition, not a deletion. The entry still exists in the cache, but its state is Revoked. The next read sees the Revoked state and returns nothing -- the entry is treated as absent. No stale credential is served. The invalidation is instant, not dependent on TTL expiration. And because the state transition is itself signed, an attacker cannot forge a revocation or reverse one.
The invalidation signal can be pushed from the identity provider (webhook, event bus, CDC stream) or pulled by the cache (periodic revalidation of credential entries). Either way, the lifecycle state machine provides a mechanism for the source system's state to override the cache's TTL. This is fundamentally different from a traditional cache, where the TTL is the only validity mechanism.
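The push-based path can be sketched as a small event handler. The event shape and field names here are assumptions, not a real identity-provider payload; the point is that revocation is a state flip that the read path honors immediately, regardless of remaining TTL.

```python
# Sketch of push-based revocation: a webhook/event-bus handler marks the
# cached credential revoked; the read path treats it as absent at once.
import time

cache: dict[str, dict] = {}

def put_token(key: str, token: str, ttl: float) -> None:
    cache[key] = {"token": token, "expires": time.time() + ttl,
                  "revoked": False}

def on_revocation_event(event: dict) -> None:
    # e.g. delivered by the identity provider's webhook or a CDC stream
    entry = cache.get(f"token:{event['subject']}")
    if entry:
        entry["revoked"] = True          # state change, not a deletion

def get_token(key: str):
    entry = cache.get(key)
    if not entry or entry["revoked"] or time.time() >= entry["expires"]:
        return None                      # revoked entries read as absent
    return entry["token"]
```

The TTL still exists, but it is no longer the only validity mechanism: the source system's state can override it at any moment.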
Performance Cost of Verification
The immediate objection to verified caching is performance. "If I verify every read, doesn't that add latency? Isn't the whole point of a cache to be fast?" The answer depends on what you are comparing against.
A Redis read over the network takes 50-500 microseconds depending on network conditions, serialization overhead, and connection pool state. That read includes zero verification. A Cachee read with AlwaysVerify -- full cryptographic signature check on every access -- takes 31 nanoseconds. That is 1,600 to 16,000 times faster than a Redis network read, and it includes signature verification.
The reason is architectural. Cachee is an in-process L1 cache. There is no network hop. There is no serialization. There is no TCP connection overhead. The verification is a cryptographic operation on data that is already in the process's address space. The 31-nanosecond read includes a hash comparison against the entry's computation fingerprint and a signature validity check. Even with full verification, an in-process cache is orders of magnitude faster than an unverified network cache.
For workloads where even 31 nanoseconds matters, the verification modes provide flexibility.
| Verification Mode | Read Latency | Security Level | Use Case |
|---|---|---|---|
| AlwaysVerify | 31ns | Maximum: every read verified | Credentials, auth decisions, financial data |
| AgeWeighted | 12-31ns (avg) | High: increases with age | Session data, user profiles |
| Probabilistic (10%) | ~15ns (avg) | Statistical: catches tampering over time | Feature flags, configuration, content |
| TrustCached | 8ns (after initial) | Baseline: verify once per window | Static assets, reference data |
The key insight is that verification cost is not an argument against verified caching. It is an argument against network caching. The network round trip dwarfs any verification overhead. When you eliminate the network by moving to in-process caching, you gain enough performance headroom to add full cryptographic verification and still be faster than an unverified Redis read.
Implementation: From Trust-Everything to Zero-Trust Cache
Migrating from a trust-everything cache to a zero-trust cache involves three changes: configuring the verification mode, setting up key types for access control, and enabling audit logging for verification events.
Step 1: Configure Verification Mode
```toml
# cachee.toml

[verification]
mode = "AlwaysVerify"        # AlwaysVerify | TrustCached | Probabilistic | AgeWeighted
probabilistic_rate = 0.10    # For Probabilistic mode: verify 10% of reads
age_threshold_secs = 300     # For AgeWeighted: increase verification after 5 min
trust_window_secs = 60       # For TrustCached: re-verify every 60 seconds

[signatures]
families = ["ML-DSA-65", "FALCON-512", "SLH-DSA"]
require_all = true           # All three must validate; reject if any fails

[lifecycle]
enable_revocation = true     # Enable Revoked state transitions
revalidation_interval = 30   # Seconds between background revalidation checks
```
Step 2: Set Up Key Types for Access Control
```toml
# Define key types with access scoping

[key_types.credentials]
verification_mode = "AlwaysVerify"
access = ["Owner", "Regulator"]    # Auditors cannot read credential values
lifecycle = "revocable"            # Supports immediate revocation
audit_level = "full"               # Log every read, write, verify, reject

[key_types.session]
verification_mode = "AgeWeighted"
access = ["Owner"]                 # Only the owning service reads sessions
lifecycle = "revocable"
audit_level = "standard"           # Log writes, rejections, revocations

[key_types.configuration]
verification_mode = "Probabilistic"
access = ["Owner", "Regulator", "Auditor"]
lifecycle = "standard"             # TTL-based, no revocation needed
audit_level = "minimal"            # Log rejections only
```
Step 3: Enable Audit Logging
```toml
# Audit log configuration

[audit]
enabled = true
destination = "stdout"             # stdout | file | syslog | webhook
format = "json"                    # json | text
include_key_name = true
include_verification_result = true
include_rejection_reason = true
include_recomputation_trigger = true

# Sample audit log entry:
# {
#   "timestamp": "2026-05-05T14:23:01.003Z",
#   "event": "verification_failed",
#   "key": "session:user:48291",
#   "reason": "signature_mismatch",
#   "family": "ML-DSA-65",
#   "action": "rejected_and_recomputed",
#   "latency_ns": 31
# }
```
The migration path is incremental. Start with Probabilistic mode on non-critical key types to validate that your workload handles verification without disruption. Move to AgeWeighted for session and credential data. Once you have confidence in the verification pipeline, move credentials and authorization decisions to AlwaysVerify. The cache attestation documentation covers the full API surface for configuring verification modes programmatically.
The Architecture Shift
Before: Application → Redis (trust everything) → Return unverified bytes. No integrity check. No access control. No revocation. No audit trail. Cache is a blind trust zone.
After: Application → Cachee L1 (verify everything) → Signature check → Lifecycle state check → Fingerprint validation → Access control → Audit log → Return verified value. Every read is verified. Every rejection is logged. Every revocation is instant.
Zero trust was never just about network perimeters and API gateways. It is about eliminating implicit trust from every layer of the stack. Your cache is a data store that serves security-sensitive content to multiple consumers. It should be held to the same zero-trust standard as your network, your identity provider, and your API layer. Every cached value should be verified. Every verification failure should be logged. Every revocation should be instant. The technology to do this at 31 nanoseconds per read exists today. The only question is whether you continue to trust your cache or start verifying it.
For deeper technical detail, see Post-Quantum Caching for how the three PQ signature families work together, Computation Fingerprinting for how input-binding prevents phantom reads, Cache Attestation for the full attestation API reference, and the NIST PQ Compliance Guide for migration timelines and key sizes.
Your cache trusts everything. Cachee verifies everything. 31 nanoseconds. Three PQ signature families. Zero trust, for real.