Encrypted Cache at Rest: Meeting HIPAA, GDPR, and PCI Without Sacrificing Latency
In 2025, HIPAA made encryption mandatory. The word "addressable" -- the loophole that let organizations document why they chose not to encrypt and move on -- was removed from the Security Rule. Encryption of electronic protected health information at rest is now a required implementation specification. GDPR has always demanded "appropriate technical measures" to protect personal data, and European regulators have made clear that encryption is the baseline expectation, not the ceiling. PCI DSS 4.0, whose future-dated requirements became mandatory in March 2025, tightened its encryption requirements for stored account data with specific controls that go beyond what version 3.2.1 required.
All three frameworks agree on one thing: sensitive data must be encrypted at rest. And all three frameworks have a blind spot in common: they do not specifically mention cache. But cache is where your data lives in its most vulnerable state. Your database encrypts data on disk. Your API encrypts data in transit. Your cache stores data in plaintext in memory, accessible to anyone who can read the process memory space.
Redis stores everything unencrypted in memory. ElastiCache offers "encryption at rest," but that is EBS-level disk encryption -- it protects RDB snapshots on the underlying volume, not the data in memory where your application actually reads it. If an attacker gains access to the Redis process or dumps its memory, every cached value is exposed in plaintext. Every session token, every user profile, every API response, every PII field. Your cache is the biggest plaintext surface in your infrastructure, and none of your compliance frameworks explicitly tell you to fix it.
The Plaintext Cache Problem
Consider what happens during a breach. An attacker gains access to your Redis instance -- through a misconfigured security group, a compromised application server with network access, or an SSRF vulnerability that lets them reach the internal cache endpoint. They issue a DEBUG OBJECT or DUMP command, or they read the process memory directly. What do they find?
Every cached session token, in plaintext. Every cached user profile -- name, email, phone number, address -- in plaintext. Every cached API response that contained PII, payment data, or health information, in plaintext. Every cached lab result, every cached appointment record, every cached payment token. The cache is a single point of extraction for every type of sensitive data your application handles, because the cache does not discriminate. It stores whatever the application puts into it, and it stores it without any cryptographic protection.
Redis RDB snapshots compound the problem. Redis periodically writes its in-memory dataset to disk as an RDB file. Unless EBS encryption is enabled on the underlying volume, these snapshots are plaintext files on disk. Even with EBS encryption, the protection is transparent -- the application never sees it, which means the application cannot verify it. You are trusting AWS to encrypt, with no cryptographic proof that encryption is actually happening at the application layer. If the EBS encryption key is compromised, or if the snapshot is copied to an unencrypted volume, your cached data is exposed.
AOF (Append Only File) persistence has the same problem. Every write to Redis is logged to an AOF file in plaintext. If an attacker accesses the filesystem, they get a complete replay log of every piece of data that ever touched your cache. This is not a theoretical attack. Redis instances exposed to the internet have been a consistent entry point in breach disclosures for years. What changed in 2025 is that the regulatory frameworks now explicitly require the encryption that would have prevented the exposure.
The Cache Compliance Gap
Your database is encrypted. Your API uses TLS. Your cache stores the same sensitive data as both -- in plaintext. Compliance auditors are catching up. The 2025 HIPAA Security Rule update, GDPR enforcement actions citing insufficient technical measures, and PCI DSS 4.0 Requirement 3.5 all point to the same conclusion: if sensitive data is cached, the cache must be encrypted. Not the disk under it. Not the network in front of it. The cache itself.
What Each Framework Actually Requires
The compliance landscape for cached data spans three major frameworks. Each has different language, different enforcement mechanisms, and different penalties. But all three converge on the same technical requirement: sensitive data at rest must be protected by strong cryptography, and cache is at rest.
HIPAA (45 CFR 164.312)
The HIPAA Security Rule requires encryption of electronic protected health information (ePHI) at rest and in transit. The 2025 rule update removed the "addressable" designation from encryption, making it a required implementation specification. This means organizations can no longer perform a risk assessment and decide that encryption is not reasonable and appropriate. Encryption is mandatory.
What constitutes cached ePHI? Patient records, lab results, appointment schedules, medication lists, insurance information, and any data element that includes a patient identifier combined with health information. If your application caches a patient's name alongside their lab results -- even temporarily, even with a 60-second TTL -- that cached entry is ePHI, and it must be encrypted at rest. The fact that it is "just a cache" does not change its regulatory classification.
GDPR (Article 32)
Article 32 of GDPR requires "appropriate technical and organisational measures" to ensure security of processing, including "encryption of personal data." European Data Protection Authorities have consistently interpreted this as requiring encryption of personal data at rest, and enforcement actions have cited the absence of encryption as a violation.
Cached data falls squarely within scope. User profiles, session data, consent records, behavioral analytics, and any data that identifies or can be used to identify a natural person is personal data under GDPR. If it is cached, it is processed, and if it is processed, Article 32 applies. The regulation does not distinguish between persistent storage and temporary cache. Data is data.
PCI DSS 4.0 (Requirement 3.5)
PCI DSS 4.0 Requirement 3.5 states: "Primary account number (PAN) is secured wherever it is stored." The requirement specifies strong cryptography to render PAN unreadable anywhere it is stored. "Anywhere" includes cache. If your payment processing application caches tokenized card data, transaction results, or any data element that contains or derives from PAN, Requirement 3.5 applies to that cache entry.
Requirement 10 adds another dimension: logging. All access to cardholder data must be logged, including reads. If cardholder data is cached and your cache does not produce an audit trail of reads, you are out of compliance with Requirement 10 regardless of whether the cache is encrypted.
| Framework | What's Cached | Encryption Requirement | Penalty for Violation |
|---|---|---|---|
| HIPAA | Patient records, lab results, appointments, insurance data | Mandatory encryption at rest (2025 rule) | Up to $2.1M per violation category per year |
| GDPR | User profiles, session data, consent records, behavioral data | Encryption as appropriate technical measure (Article 32) | Up to 4% of global annual revenue or 20M EUR |
| PCI DSS 4.0 | Payment tokens, card data, transaction results | Strong cryptography for stored account data (Req 3.5) | Fines up to $100K/month, loss of processing rights |
Three Levels of Cache Encryption
Not all cache encryption is equal. Cache protection operates at three distinct levels, each defending against a different threat model. Most organizations implement Level 1 and believe they are compliant. They are not.
Level 1: Disk Encryption (EBS, dm-crypt)
Disk encryption protects cached data that has been persisted to storage. AWS EBS encryption, dm-crypt, and LUKS all operate at this level. The protection model is physical theft: if someone removes the disk from the data center, the data is unreadable. But disk encryption is transparent to the application. The operating system decrypts data as it is read from disk, so any process running on the machine sees plaintext. If an attacker compromises the operating system, the application, or the Redis process, disk encryption provides zero protection. The data in memory -- where the cache actually operates -- is completely unencrypted.
Level 2: Transport Encryption (TLS)
Transport encryption protects data in transit between the application and the cache. Redis has supported TLS since version 6.0, and ElastiCache supports in-transit encryption. This prevents network-level eavesdropping: an attacker sniffing traffic between your application server and Redis cannot read the cached values. But transport encryption does not protect data at rest. Once the data arrives at the cache, TLS terminates, and the data is stored in plaintext in memory. Transport encryption protects the pipe, not the bucket.
Level 3: Per-Entry Cryptographic Signing
Per-entry cryptographic signing protects the integrity and provenance of each individual cache entry. Every value stored in the cache is accompanied by a cryptographic signature that proves (a) the data has not been tampered with since it was cached, and (b) the data was produced by a known, authorized source. This protection persists in memory, on disk, and during transit. It does not depend on the infrastructure layer. Even if an attacker reads the raw cache memory, they cannot modify a cache entry without invalidating its signature, and they cannot forge a new entry without the signing key.
Cachee operates at Level 3. Every entry is signed by three post-quantum signature families: ML-DSA, FALCON, and SLH-DSA. This is not just encryption -- it is cryptographic attestation. Each cache entry carries a computation fingerprint that proves its authenticity, its origin, and its integrity. The signature verification is configurable: AlwaysVerify checks every read, Probabilistic checks a random sample, and AgeWeighted increases verification frequency as entries age. The result is per-entry compliance that disk encryption and transport encryption cannot provide.
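To make the sign-at-write, verify-at-read flow concrete, here is a minimal stdlib-only Python sketch. HMAC-SHA256 stands in for the post-quantum signature schemes (a symmetric stand-in, not their actual construction), and the dict-backed cache, key, and function names are all illustrative assumptions.

```python
import hmac
import hashlib
import json

# Demo key only -- a real deployment would use a managed signing key.
SIGNING_KEY = b"demo-key-do-not-use-in-production"

def put(cache, key, value):
    """Sign at write time: the signature binds the key to the payload."""
    payload = json.dumps(value, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, key.encode() + payload, hashlib.sha256).hexdigest()
    cache[key] = (payload, tag)

def get(cache, key):
    """Verify at read time: a tampered or forged entry reads as a miss."""
    entry = cache.get(key)
    if entry is None:
        return None
    payload, tag = entry
    expected = hmac.new(SIGNING_KEY, key.encode() + payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # integrity failure: refuse to serve the value
    return json.loads(payload)

cache = {}
put(cache, "user:42", {"name": "Ada"})
assert get(cache, "user:42") == {"name": "Ada"}

# Simulate in-memory tampering: the stored signature no longer matches.
payload, tag = cache["user:42"]
cache["user:42"] = (payload.replace(b"Ada", b"Eve"), tag)
assert get(cache, "user:42") is None
```

The property that matters for compliance is the last assertion: an attacker who can write to cache memory still cannot make the application serve a modified value.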
Why Level 3 Matters for Compliance
HIPAA requires encryption of ePHI at rest. Level 1 (disk) only protects persisted snapshots. GDPR requires appropriate technical measures for personal data. Level 2 (TLS) only protects data in transit. PCI DSS 4.0 requires strong cryptography wherever account data is stored -- including in-memory cache. Only Level 3 (per-entry signing) protects data in the place where caches actually store it: memory. Level 3 also produces the per-access audit trail that PCI Requirement 10 demands.
The Performance Myth: Encryption Must Be Slow
The most common objection to cache encryption is performance. Engineers assume that adding cryptographic operations to every cache read and write will destroy the latency advantage that caching provides in the first place. This assumption is reasonable -- and wrong, for two reasons.
First, traditional encryption overhead is negligible relative to network cache latency. AES-256-GCM encryption adds 1-5 microseconds per operation depending on payload size. Redis round-trip latency over the network is 300-800 microseconds. The encryption overhead is less than 2% of the total operation time. If you are using Redis or ElastiCache, encryption is effectively free relative to the network cost you are already paying. The reason vendors have not implemented per-entry encryption is not performance -- it is architecture. Redis was designed as a plaintext key-value store, and retrofitting per-entry encryption into its data model would require fundamental changes to its memory layout and replication protocol.
Second, in-process cache changes the math entirely. With an in-process cache operating at 31 nanoseconds per read, even 1 microsecond of encryption overhead would represent a 32x slowdown. That is why Cachee separates signing from verification and makes verification configurable. Signing happens at write time, amortized across reads. A cache entry that is written once and read 10,000 times pays the signing cost once. Verification at read time is optional and configurable: AlwaysVerify for maximum security (adds approximately 2 microseconds per read), Probabilistic for statistical assurance (verifies 1 in N reads, net overhead near zero), or AgeWeighted for practical security (verifies more frequently as entries age past their expected freshness window).
The net result: 31-nanosecond reads with optional verification that can be tuned per workload. Even with AlwaysVerify enabled, Cachee at 2 microseconds per read is still 150x faster than unencrypted Redis at 300 microseconds. You do not sacrifice latency for compliance. You sacrifice a Redis bill.
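The three verification modes can be sketched as a single read-time policy function. The mode names come from the description above; the specific probability curves (a flat sample rate, a linear age ramp) are illustrative assumptions, not Cachee's actual implementation.

```python
import random

def should_verify(mode, entry_age_s, sample_rate=0.01, freshness_window_s=300.0):
    """Decide whether this read pays the verification cost."""
    if mode == "AlwaysVerify":
        # Every read is verified (~2us each in the figures above).
        return True
    if mode == "Probabilistic":
        # Verify roughly 1 in N reads; net overhead near zero.
        return random.random() < sample_rate
    if mode == "AgeWeighted":
        # Verification probability ramps up linearly as the entry ages
        # toward its expected freshness window, reaching 1.0 past it.
        return random.random() < min(1.0, entry_age_s / freshness_window_s)
    raise ValueError(f"unknown verification mode: {mode}")
```

The key design point this illustrates: signing is unconditional and paid once at write time, while verification cost is a per-workload dial rather than a fixed tax on every read.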
HIPAA Cache Compliance: A Practical Guide
Healthcare applications cache more ePHI than most teams realize. Every time your application caches a patient lookup to avoid a database round-trip, that cache entry is ePHI. The compliance requirements are specific and non-negotiable after the 2025 rule update.
What constitutes cached ePHI. Any combination of a patient identifier (name, MRN, SSN, date of birth) with health information (diagnoses, lab results, medications, appointment schedules, insurance data). A cache entry containing {"mrn": "12345", "last_lab": "A1C 6.2"} is ePHI. A cache entry containing only {"mrn": "12345"} is also ePHI, because the MRN itself is a HIPAA identifier. If your cache key contains a patient identifier, the entire entry is ePHI regardless of the value.
Minimum necessary standard applies to cache. HIPAA requires that access to ePHI be limited to the minimum necessary for the intended purpose. This applies to caching: do not cache full patient records when your application only needs the MRN and appointment time. Cache the minimum data set required for the use case. This reduces both the compliance surface and the memory footprint.
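A simple way to enforce the minimum necessary standard in code is to project records through an explicit field allowlist before they ever reach the cache. This sketch uses hypothetical field names; the point is that the full record never becomes a cache entry.

```python
# Each use case declares exactly which fields it is allowed to cache.
APPOINTMENT_VIEW = ("mrn", "appointment_time")

def project(record, allowed_fields):
    """Strip a record down to the minimum necessary data set."""
    return {k: record[k] for k in allowed_fields if k in record}

full_record = {
    "mrn": "12345",
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "diagnoses": ["E11.9"],
    "appointment_time": "2025-06-01T09:00",
}

# Only the projected view is cached; SSN and diagnoses never enter
# the cache, shrinking both the compliance surface and the footprint.
cache_entry = project(full_record, APPOINTMENT_VIEW)
```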
Cache retention is data retention. HIPAA requires 6-year retention of audit logs related to ePHI access. If your cache produces access logs (and it must, per the Security Rule's audit controls), those logs must be retained for 6 years. With Redis, cache access is not logged at all -- there is no built-in per-key access audit. With Cachee, every read produces a computation fingerprint that serves as an auditable access record.
BAA implications. If your cache provider stores ePHI, they are a Business Associate under HIPAA and require a BAA. AWS provides a BAA that covers ElastiCache. Redis Labs provides a BAA for Redis Enterprise Cloud. But with an in-process cache, there is no third-party cache provider. The cache runs inside your application process, on your infrastructure, under your existing BAA coverage. No additional BAA is needed for the cache tier.
GDPR Cache Compliance: Right to Erasure in Cache
GDPR introduces a compliance requirement that is fundamentally incompatible with how most caches work: the right to erasure. Article 17 gives data subjects the right to request deletion of their personal data, and the controller must comply "without undue delay." This applies to cached copies of personal data.
TTL is not erasure. Redis TTL-based expiry is a performance optimization, not a data protection mechanism. If a user requests deletion of their data under Article 17 and their profile is cached with a 3600-second TTL, that data persists in the cache for up to one hour after the erasure request. During that hour, any application reading from cache will continue to serve the data that the user has a legal right to have deleted. TTL-based expiry does not meet the "without undue delay" requirement of Article 17.
Cachee handles erasure through its lifecycle state machine. Every cache entry in Cachee has a lifecycle state: Active, Stale, or Revoked. When a GDPR erasure request arrives, the application sets the entry state to Revoked. The entry becomes immediately unreadable -- subsequent reads return a cache miss, not the cached value. The entry's memory is overwritten, not just marked for future eviction. This is cryptographic erasure: the attestation signature is invalidated, making the entry unrecoverable even if the underlying memory is forensically examined.
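The difference between TTL expiry and revocation can be shown with a generic lifecycle sketch (this is an illustration of the state-machine idea, not Cachee's actual API): a Revoked entry reads as a miss immediately, and its bytes are overwritten rather than left in memory awaiting eviction.

```python
ACTIVE, STALE, REVOKED = "Active", "Stale", "Revoked"

class LifecycleCache:
    """Toy cache where erasure is immediate, not eventual."""

    def __init__(self):
        self._entries = {}  # key -> [state, bytearray(value)]

    def put(self, key, value):
        self._entries[key] = [ACTIVE, bytearray(value)]

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None or entry[0] == REVOKED:
            # A revoked entry is indistinguishable from a miss.
            return None
        return bytes(entry[1])

    def revoke(self, key):
        """Article 17 path: overwrite the bytes, then mark Revoked."""
        entry = self._entries.get(key)
        if entry is not None:
            entry[1][:] = b"\x00" * len(entry[1])  # overwrite, don't just unlink
            entry[0] = REVOKED
```

Contrast with TTL: after `revoke`, no reader ever sees the value again, regardless of how much TTL remained.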
Data residency. GDPR restricts cross-border transfers of personal data. With Redis or ElastiCache, cache replication can silently copy personal data to replicas in other regions. A Redis cluster with replicas in eu-west-1 and us-east-1 is performing a cross-border transfer of every cached personal data entry. With an in-process cache, the data stays in the application's region. There is no cache replication layer that can inadvertently transfer personal data across borders. The data residency of your cached personal data is identical to the data residency of your application -- because the cache is the application.
PCI DSS 4.0 Cache Controls
PCI DSS 4.0 imposes two requirements that directly affect cache architecture: Requirement 3.5 (protect stored account data) and Requirement 10 (log all access to cardholder data).
Requirement 3.5: render PAN unreadable anywhere it is stored. The word "anywhere" is the key. PCI assessors have historically focused on databases and file systems, but the DSS 4.0 guidance explicitly includes "any storage location" in scope. If your payment processing application caches tokenized card data, partial PANs, or transaction results that reference cardholder data, Requirement 3.5 applies to those cache entries. The requirement specifies "strong cryptography" -- disk-level encryption may satisfy this for persisted data, but in-memory cache entries are not covered by disk encryption.
Requirement 10: log all access to cardholder data. Every read of cached cardholder data must produce an audit log entry. Redis does not log per-key reads. ElastiCache CloudTrail logs API calls (CreateCacheCluster, ModifyReplicationGroup), not data access. To comply with Requirement 10 for cached cardholder data in Redis, you must implement application-level logging for every cache read -- which adds latency, complexity, and a new failure mode. Cachee's computation fingerprints produce a per-read audit trail as a byproduct of the signing architecture. No additional logging infrastructure is required.
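The application-level logging you would have to build around Redis looks roughly like this wrapper: every read of cardholder data emits a structured audit record before the value is returned. The class, key format, and audit sink are illustrative assumptions.

```python
import json
import time

class AuditedCache:
    """Key-value cache that logs every read, per PCI Requirement 10."""

    def __init__(self, audit_sink):
        self._store = {}
        self._audit = audit_sink  # e.g. a file, queue, or log shipper

    def set(self, key, value):
        self._store[key] = value

    def get(self, key, actor):
        # The audit record is written whether or not the read hits,
        # so the trail covers probing as well as successful access.
        hit = key in self._store
        self._audit.append(json.dumps({
            "ts": time.time(),
            "actor": actor,
            "key": key,
            "hit": hit,
        }))
        return self._store.get(key)

audit_log = []
cache = AuditedCache(audit_log)
cache.set("pan_token:9f3a", "tok_abc")
cache.get("pan_token:9f3a", actor="checkout-service")
# audit_log now holds one JSON record describing the read.
```

This is the extra moving part (and the extra per-read latency) that a cache with built-in per-read fingerprints avoids.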
Cache contracts for payment data freshness. PCI DSS 4.0 Requirement 3.3 addresses retention: cardholder data must not be stored beyond the time needed for business purposes. Cache contracts in Cachee provide enforceable freshness SLAs -- you can define that payment-related cache entries must expire within 300 seconds and that the contract is cryptographically enforced, not dependent on a background TTL cleanup process that may lag.
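A freshness contract enforced at the read path, rather than by a background sweep, can be sketched like this. The 300-second limit mirrors the example above; the function names and timestamp handling are illustrative.

```python
import time

MAX_AGE_S = 300.0  # freshness contract for payment-related entries

def put_entry(cache, key, value, now=None):
    """Store the value along with its write timestamp."""
    cache[key] = (value, now if now is not None else time.time())

def get_entry(cache, key, now=None):
    """Enforce the contract on read: an over-age entry is a miss."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, written_at = entry
    age = (now if now is not None else time.time()) - written_at
    if age > MAX_AGE_S:
        del cache[key]  # evict eagerly instead of waiting for a TTL sweep
        return None
    return value
```

The design point: even if the background expiry process lags, no reader can ever observe an entry that has outlived its contract.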
Implementation: Compliance-Ready Cache in 15 Minutes
Here is what a compliance-ready cache configuration looks like with Cachee, compared to what you would need to build on top of Redis to achieve the same guarantees.
Cachee: HIPAA-Compliant Configuration
```sh
# Install
brew install cachee
```

```toml
# cachee.toml - HIPAA-compliant configuration
[cache]
max_entries = 500_000
eviction_policy = "CacheeLfu"

[security]
signing = "pq-three-family"         # ML-DSA + FALCON + SLH-DSA
verification_mode = "always_verify" # Every read verified
audit_fingerprints = true           # Per-read audit trail

[lifecycle]
enable_revocation = true            # GDPR Article 17 support
retention_log_days = 2190           # 6-year HIPAA audit retention

[compliance]
ephi_mode = true                    # Enforce minimum necessary
max_ttl_seconds = 3600              # No indefinite caching of ePHI
require_classification = true       # Every entry must declare type
```
Redis: What You Would Need to Build
```
# Redis alone does not provide:
# - Per-entry encryption (you build it in your application)
# - Per-read audit logging (you build it in your application)
# - Cryptographic erasure (you build it in your application)
# - Compliance classification (you build it in your application)
# - Freshness SLA enforcement (you build it in your application)

# Approximate application-level implementation:
# 1. AES-256-GCM encrypt every value before SET (+3-5us per write)
# 2. Decrypt every value after GET (+3-5us per read)
# 3. Log every GET to a separate audit system (+50-200us per read)
# 4. Build a key registry for Article 17 deletion (custom service)
# 5. Implement TTL enforcement with monitoring (custom service)
# 6. Key management (rotation, storage, access) (KMS integration)
# 7. Classification metadata in a separate store (custom service)

# Total added latency: 60-210us per read
# Total added services: 3-4 custom microservices
# Total added operational burden: significant
```
With Redis, compliance-ready caching requires building 3-4 custom services on top of the cache, adding 60-210 microseconds of latency per read, and creating operational complexity that must itself be maintained and audited. With Cachee, it is a configuration file. The compliance controls are built into the cache engine, not bolted on top of it.
The Bottom Line
HIPAA encryption is mandatory. GDPR requires encryption as a baseline technical measure. PCI DSS 4.0 requires strong cryptography wherever account data is stored -- including cache. Redis stores everything in plaintext in memory. ElastiCache encryption is disk-level, not per-entry. Your cache is the largest plaintext surface in your compliant infrastructure. Cachee provides Level 3 per-entry cryptographic signing with three post-quantum signature families, immediate cryptographic erasure for GDPR Article 17, per-read audit trails for PCI Requirement 10, and 31-nanosecond reads that make encryption overhead irrelevant. Compliance is a configuration file, not a custom build.
Your cache stores ePHI, PII, and cardholder data in plaintext. Cachee signs every entry with three PQ families at 31ns.
Get Started | Cache Attestation

Further reading: Post-Quantum Caching | Computation Fingerprinting | NIST PQ Compliance Guide | Post-Quantum Caching Is the Only Caching That Will Exist | Complete Guide to Cache Security