
Cryptographic Agility in Caching: Why 3 Signatures Beat 1

May 9, 2026 | 14 min read | Engineering

NIST, NSA, and CISA all now mandate "cryptographic agility" -- the ability to swap cryptographic algorithms without re-architecting the systems that depend on them. The mandate sounds reasonable in the abstract. In practice, it forces a question that most infrastructure teams have never asked: what happens to your cache layer when the algorithm it depends on breaks?

The answer, for every single-algorithm cache, is catastrophic. If your cache signs entries with one algorithm and that algorithm is broken -- a collision attack, a quantum factoring advance, a side-channel extraction -- every cached entry becomes untrustworthy in a single instant. You cannot verify that any cached value is authentic. You cannot prove that entries have not been tampered with. Your only option is to flush the entire cache, re-sign everything with a new algorithm, and absorb the downtime, the cold-start latency spike, and the compliance gap between "the old algorithm broke" and "we finished re-signing everything."

Cachee signs every cache entry with three independent post-quantum signature families. If one algorithm shows weakness, two remain. Entries stay trusted. Verification continues. Compliance is uninterrupted. The weakened algorithm is swapped out at the next key rotation cycle, and the cache never goes cold. That is not redundancy for its own sake. That is cryptographic agility built into the data layer, where it actually matters.

3 independent PQ signature families | 3 independent hardness assumptions | 0 downtime when one algorithm breaks

What Cryptographic Agility Actually Means

Cryptographic agility is not a vague aspiration. It is a precisely defined capability outlined in multiple government standards. NIST SP 800-131A Rev 2 specifies transition recommendations for cryptographic algorithms, including timelines for deprecation and replacement. CNSA 2.0 -- the NSA's Commercial National Security Algorithm Suite -- goes further, explicitly mandating that organizations "maintain algorithm agility" as a core requirement for handling classified and sensitive information. The premise is simple: no algorithm lasts forever, and your architecture must survive the death of any single one.

True cryptographic agility requires three capabilities, not one. Most systems that claim to be "crypto agile" only implement the first.

Capability 1: Swap algorithms without code changes. This is the baseline. Your system should allow you to change from Algorithm A to Algorithm B through configuration, not through a code rewrite and redeployment. Most modern TLS stacks satisfy this requirement. You can change cipher suites in a config file. This is necessary but nowhere near sufficient.

Capability 2: Support multiple algorithms simultaneously. This is where most systems fail. It is not enough to swap from A to B. You need to run A and B at the same time during the transition period. Entries signed with A must remain verifiable while new entries are signed with B. If your system can only use one algorithm at a time, every transition requires a hard cutover -- and hard cutovers are exactly where compliance gaps appear. An auditor will ask: "Between the time you deprecated Algorithm A and finished re-signing with Algorithm B, how did you verify integrity?" If the answer is "we didn't," that is a finding.

Capability 3: Maintain trust during transitions. This is the hardest requirement and the one that single-algorithm systems fundamentally cannot satisfy. When an algorithm is deprecated -- whether due to a published weakness, a regulatory mandate, or a scheduled rotation -- every entry signed exclusively with that algorithm loses its trust anchor. If there is no second signature from an independent algorithm, the entry is unverifiable. Trust is binary: you can verify the signature or you cannot. There is no "partially trusted" state.

Cachee satisfies all three capabilities natively. It supports algorithm selection through configuration. It runs three algorithms simultaneously on every entry. And when any one algorithm is deprecated, the remaining two maintain trust without interruption. This is not a feature that was bolted on after a compliance review. It is the signing architecture.
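
To make the three capabilities concrete, here is a minimal sketch of a triple-signed entry and a trust check that survives the loss of one algorithm. Everything in it is illustrative rather than Cachee's actual API: the TripleSignedEntry type and function names are hypothetical, and HMAC-SHA256 stands in for the three PQ signers so the sketch is self-contained and runnable.

import hashlib, hmac, os
from dataclasses import dataclass, field

# Hypothetical stand-ins for the three PQ families. A real deployment would
# call ML-DSA-65, FALCON-512, and SLH-DSA-SHA2-128f via a PQ library; HMAC
# keys here just make the sketch runnable anywhere.
KEYS = {algo: os.urandom(32)
        for algo in ("ML-DSA-65", "FALCON-512", "SLH-DSA-SHA2-128f")}

def sign(algo: str, payload: bytes) -> bytes:
    return hmac.new(KEYS[algo], payload, hashlib.sha256).digest()

def verify(algo: str, payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(algo, payload), sig)

@dataclass
class TripleSignedEntry:
    payload: bytes
    signatures: dict = field(default_factory=dict)  # algo -> signature

def put(payload: bytes) -> TripleSignedEntry:
    # Capability 2: every write carries all three signatures simultaneously.
    return TripleSignedEntry(payload, {a: sign(a, payload) for a in KEYS})

def trusted(entry: TripleSignedEntry, deprecated: frozenset = frozenset()) -> bool:
    # Capability 3: trust survives deprecation of any single algorithm,
    # because the remaining signatures still verify.
    active = [a for a in entry.signatures if a not in deprecated]
    return len(active) >= 2 and all(
        verify(a, entry.payload, entry.signatures[a]) for a in active)

entry = put(b"cached value")
assert trusted(entry)                                    # all three healthy
assert trusted(entry, deprecated=frozenset({"FALCON-512"}))  # one down, still trusted

The shape is the point: signing is N-of-N at write time, and trust is a policy decision at read time.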

The Three Hardness Assumptions

Cachee does not use three algorithms from the same family. It uses three algorithms from three families built on three independent mathematical hardness assumptions. This distinction matters enormously. If two algorithms share the same underlying hardness assumption -- say, both rely on the hardness of lattice problems with MLWE structure -- then a breakthrough against that assumption breaks both algorithms simultaneously. Two algorithms, one assumption, one point of failure. That is redundancy theater, not cryptographic agility.

The three families Cachee uses are chosen specifically because their security rests on different mathematical foundations. A breakthrough against one tells you nothing about the security of the other two.

ML-DSA-65 (FIPS 204). This is the Module-Lattice-Based Digital Signature Algorithm, standardized by NIST in FIPS 204. Its security rests on the Module Learning With Errors (MLWE) assumption -- the hardness of solving systems of linear equations with small noise over polynomial rings. ML-DSA-65 provides NIST Security Level 3, produces signatures of approximately 3,309 bytes, and verifies in microseconds on modern hardware. It is the most broadly deployed post-quantum signature algorithm as of 2026 and the default choice for most PQ migration efforts.

FALCON-512 (FN-DSA). FALCON is based on the NTRU lattice assumption, which is structurally different from MLWE despite both being "lattice-based." Where MLWE relies on the hardness of Learning With Errors over module lattices, NTRU relies on the hardness of finding short vectors in a specific class of lattices defined by the NTRU construction. The two problems are not known to be equivalent, and a polynomial-time algorithm for one does not imply a polynomial-time algorithm for the other. FALCON-512 produces compact signatures of approximately 666 bytes -- the smallest of the three -- and provides NIST Security Level 1. Its compact signature size makes it particularly valuable for bandwidth-constrained environments.

SLH-DSA-SHA2-128f (FIPS 205). This is the Stateless Hash-Based Digital Signature Algorithm, standardized in FIPS 205. Its security rests on the preimage and second-preimage resistance of the underlying hash function (SHA-256) -- the design does not even require collision resistance. This is a fundamentally different assumption from either lattice problem. Hash-based signatures have been studied for decades, and their security is understood with much higher confidence than that of lattice-based schemes. The tradeoff is size: SLH-DSA-SHA2-128f produces signatures of approximately 17,088 bytes. But in a cache context where entries are measured in kilobytes and stored in memory, 17 KB is an acceptable cost for the strongest mathematical confidence of any post-quantum signature scheme.

The security claim is precise: Cachee's triple-signature scheme breaks if and only if the MLWE problem, the NTRU problem, AND the underlying hash function are simultaneously broken -- three independent mathematical bets.

| Family | Standard | Hardness Assumption | Signature Size | NIST Level |
|---|---|---|---|---|
| ML-DSA-65 | FIPS 204 | Module-LWE (lattice) | ~3,309 bytes | Level 3 |
| FALCON-512 | FN-DSA (Round 3) | NTRU (lattice, different structure) | ~666 bytes | Level 1 |
| SLH-DSA-SHA2-128f | FIPS 205 | Hash functions (SHA-256) | ~17,088 bytes | Level 1 |
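
Summing the column above gives the per-entry overhead cited later in this post:

# Signature sizes from the table above, in bytes.
sizes = {"ML-DSA-65": 3309, "FALCON-512": 666, "SLH-DSA-SHA2-128f": 17088}
total = sum(sizes.values())
print(f"{total} bytes (~{total / 1024:.1f} KB per entry)")  # 21063 bytes (~20.6 KB per entry)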

Three Independent Mathematical Bets

ML-DSA-65 falls if MLWE is solved. FALCON-512 falls if NTRU short-vector problems are solved. SLH-DSA falls if SHA-256 is broken. No known relationship connects these three problems. Breaking one gives the attacker zero advantage against the other two. This is not defense in depth -- it is defense in breadth across the mathematical landscape.

Single-Algorithm Risk: A History Lesson

The idea that a widely deployed cryptographic algorithm could break is not hypothetical. It has happened repeatedly, and every time it happened, systems that depended on a single algorithm suffered disproportionately compared to systems with algorithm diversity.

MD5 (2004). Xiaoyun Wang and colleagues demonstrated practical collision attacks against MD5 at CRYPTO 2004. Within a year, researchers showed that MD5 collisions could be used to forge SSL certificates. Any system that used MD5 as its sole integrity check -- and many did -- could no longer verify that certificates, software packages, or data integrity checksums were authentic. The migration to SHA-1 took years and cost the industry billions in certificate reissuance, software updates, and compliance remediation.

SHA-1 (2017). The SHAttered attack, published by Google and CWI Amsterdam in February 2017, produced the first practical SHA-1 collision: two different PDF files with identical SHA-1 hashes. The impact was immediate: Git, which used SHA-1 for object integrity, was vulnerable to collision attacks that could produce different repository histories with identical commit hashes. Certificate authorities still issuing SHA-1 certificates had to emergency-revoke and reissue. Systems that had migrated from MD5 to SHA-1 but not yet to SHA-256 found themselves doing a second emergency migration in thirteen years.

RSA-512 (1999). In August 1999, a team of researchers factored a 512-bit RSA modulus in approximately seven months using the Number Field Sieve. This was not a theoretical break -- it was a complete factorization of a key size that was still in active use. Organizations that had deployed RSA-512 for SSL certificates, code signing, and VPN authentication had to immediately rotate to larger key sizes. Those that had already migrated to RSA-1024 or RSA-2048 were unaffected. The gap between "affected" and "unaffected" was determined entirely by whether the organization had algorithm diversity or was locked into a single key size.

The pattern is consistent. Every algorithm that breaks was once considered secure. Every migration from a broken algorithm to a replacement was expensive, disruptive, and created a compliance gap during the transition. And every time, the organizations that suffered least were those that did not depend on a single algorithm.

The question for post-quantum cryptography is not whether one of the new PQ algorithms will eventually show weakness. The question is when, and what happens to your cache when it does.

When -- Not If -- One PQ Algorithm Shows Weakness

MD5 lasted 13 years before practical collisions. SHA-1 lasted 22 years. RSA-512 lasted about 22 years from the RSA publication to factorization. Post-quantum algorithms are younger, less battle-tested, and built on mathematical problems with less accumulated cryptanalysis. Assuming all three PQ families will remain unbroken indefinitely is not a security posture. It is a hope.

The Algorithm Swap Scenario

To make this concrete, consider a specific scenario: a research team publishes a paper demonstrating a polynomial-time key recovery attack against FALCON-512 under certain parameter conditions. The paper is credible. NIST issues an advisory recommending that organizations begin transitioning away from FALCON-512 for new deployments. Your cache infrastructure uses FALCON for entry integrity verification. What happens next?

Scenario A: Single-Algorithm Cache (FALCON Only)

Every cached entry in your system is signed exclusively with FALCON-512. The advisory means that every one of those signatures is potentially forgeable. You cannot trust any cached value. Your options are to flush the entire cache and absorb a cold start across every service that depends on cached data, or to continue serving potentially forged entries while you re-sign. Neither option is acceptable. Flushing means your database absorbs the full read load -- a 10x to 100x traffic spike that most databases are not provisioned for. Continuing to serve means you are knowingly operating with compromised integrity, which is an automatic compliance finding.

The migration path involves generating new keys for a replacement algorithm (likely ML-DSA-65), re-signing every cached entry -- at 10 million entries, this takes hours even on fast hardware -- and updating every client to verify the new algorithm. During this entire window, your cache is either cold (service degradation) or unverified (compliance gap). There is no third option.

Scenario B: Cachee (Three Algorithms)

Your cached entries are signed with ML-DSA-65 + FALCON-512 + SLH-DSA-SHA2-128f. The FALCON advisory arrives. You update your verification policy from "verify all 3" to "verify 2-of-3, flag FALCON as deprecated." This is a configuration change that takes effect immediately -- no cache flush, no re-signing, no service interruption. Every cached entry is still verifiable because ML-DSA-65 and SLH-DSA remain valid. The FALCON signature is marked as deprecated but retained for forensic purposes.

At the next scheduled key rotation, FALCON keys are replaced with keys for a fourth algorithm -- whatever NIST recommends as FALCON's successor. New entries are signed with ML-DSA-65 + Successor + SLH-DSA. Old entries with FALCON signatures are re-signed opportunistically at next access, not in a bulk migration. The transition completes organically over the natural cache lifecycle, with zero downtime and zero compliance gap.

Single-Algorithm Response

  1. T+0h: FALCON advisory published
  2. T+1h: Emergency change window opened
  3. T+2h: Cache flush begins, services degrade
  4. T+4h: New algorithm keys generated
  5. T+6h: Re-signing begins (10M entries)
  6. T+14h: Re-signing completes
  7. T+16h: Client verification updated
  8. T+18h: Full service restored
  9. Gap: 18 hours unverified or degraded

Cachee Three-Algorithm Response

  1. T+0h: FALCON advisory published
  2. T+1h: Verification policy updated to 2-of-3
  3. T+1h: All entries still verified (ML-DSA + SLH-DSA)
  4. T+1h: Normal operations continue
  5. Next rotation: FALCON keys replaced
  6. Ongoing: Old entries re-signed on access
  7. Gap: 0 hours. Zero downtime. Zero compliance gap.

Compliance Frameworks Requiring Crypto Agility

Cryptographic agility is no longer a best practice suggestion. It is a documented requirement in multiple compliance frameworks that govern defense contracting, financial services, healthcare, government systems, and payment processing. If you operate under any of these frameworks, your cache infrastructure's cryptographic architecture is auditable -- and a single-algorithm design is a finding waiting to happen.

| Framework | Crypto Agility Requirement | How Cachee Satisfies It |
|---|---|---|
| CNSA 2.0 | NSA mandate to "maintain algorithm agility" for all National Security Systems. Algorithms must be replaceable without system redesign. | Three independent PQ families. Any one can be replaced via configuration. No redesign required. |
| NIST SP 800-131A Rev 2 | Transition recommendations requiring organizations to plan for algorithm deprecation and to support multiple algorithms during transition periods. | Native multi-algorithm support. Transition from 3-of-3 to 2-of-3 verification is a config change. Deprecated algorithms are re-signed on access. |
| Executive Order 14028 | "Identify any instances of non-compliance with standards"; federal agencies must inventory all cryptographic usage for PQ readiness. | Each Cache Attestation Bundle (CAB) documents the exact algorithms used per entry. Cryptographic inventory is a cache query, not a manual audit. |
| FedRAMP | FIPS 140-3 validation with algorithm flexibility. Systems must demonstrate the ability to update cryptographic modules without full re-authorization. | Algorithm updates via cachee.toml configuration. No code changes. FIPS-standardized algorithms (FIPS 204, FIPS 205) used natively. |
| PCI DSS 4.0 | Requirement 12.3.3: document cryptographic cipher suites and protocols in use, with a strategy for responding to anticipated cryptographic weaknesses. | Migration process is automatic: flag the deprecated algorithm, verify with remaining signatures, replace at next rotation. Documented in key management config. |

The common thread across every framework is the same: your system must survive the death of any single algorithm without losing trust, without downtime, and without a compliance gap during transition. A single-algorithm cache cannot satisfy this requirement by definition. You cannot swap the only algorithm without losing all verification capability during the swap. Multi-algorithm signing is not a premium feature. It is the minimum architecture that satisfies the regulatory requirement.

Implementation: Crypto-Agile Cache Architecture

Cryptographic agility in Cachee is not a wrapper around a single-algorithm signing function. It is built into the key management, verification, migration, and deprecation architecture. Each of the four layers operates independently, which means you can rotate keys for one algorithm without touching the others, change verification policy without re-signing entries, add a fourth algorithm without invalidating existing entries, and deprecate an algorithm without flushing the cache.

Key Rotation

Cachee rotates keys per family independently. ML-DSA-65 keys can rotate on a 90-day cycle while FALCON-512 keys rotate on a 60-day cycle and SLH-DSA keys rotate on a 120-day cycle. Each rotation is logged with the old key fingerprint, the new key fingerprint, and the timestamp. Entries signed with the old key remain verifiable because the old public key is retained in the key history until all entries signed with it have either expired or been re-signed.
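
In sketch form, the per-family key rings might look like the following. The KeyRing type, field names, and schedules are hypothetical illustrations, not Cachee's internal API; the point is that each family rotates on its own cycle and retired public keys stay available for verification.

import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:16]

@dataclass
class KeyRing:
    family: str
    interval_days: int
    current: bytes = b""
    history: list = field(default_factory=list)  # (fingerprint, retired_at)

    def rotate(self, new_key: bytes) -> None:
        # Retired public keys are kept in history so entries signed under
        # them remain verifiable until they expire or are re-signed.
        if self.current:
            self.history.append(
                (fingerprint(self.current), datetime.now(timezone.utc)))
        self.current = new_key

# Each family rotates independently of the other two.
rings = {
    "ML-DSA-65": KeyRing("ML-DSA-65", interval_days=90),
    "FALCON-512": KeyRing("FALCON-512", interval_days=60),
    "SLH-DSA-SHA2-128f": KeyRing("SLH-DSA-SHA2-128f", interval_days=120),
}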

Verification Policy

Cachee supports three verification modes -- verify-all (3-of-3), verify-majority (2-of-3), and verify-any (1-of-3) -- configurable per deployment and changeable at runtime without restart.

The verification policy is a risk-management decision, not a technical limitation. Defense contractors running under CNSA 2.0 will use verify-all. A content delivery cache might use verify-any. The point is that the architecture supports the full spectrum without code changes.
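
As a sketch, the policy decision reduces to a few lines. The function below is illustrative, not Cachee's internals; it assumes per-signature verification results have already been computed and excludes deprecated algorithms before applying the policy.

def policy_satisfied(policy: str, results: dict,
                     deprecated: frozenset = frozenset()) -> bool:
    # results maps algorithm name -> did that signature verify?
    active = [ok for algo, ok in results.items() if algo not in deprecated]
    passed = sum(active)
    if not active:
        return False  # no trusted algorithms left: fail closed
    if policy == "verify-all":       # e.g. CNSA 2.0 deployments
        return passed == len(active)
    if policy == "verify-majority":  # e.g. 2-of-3 during a transition
        return 2 * passed > len(active)
    if policy == "verify-any":       # e.g. low-risk content caches
        return passed >= 1
    raise ValueError(f"unknown verification policy: {policy}")

# FALCON deprecated: the remaining two signatures still satisfy verify-all.
results = {"ML-DSA-65": True, "FALCON-512": False, "SLH-DSA-SHA2-128f": True}
assert policy_satisfied("verify-all", results,
                        deprecated=frozenset({"FALCON-512"}))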

Algorithm Migration

Adding a fourth algorithm to Cachee does not require re-signing existing entries. New entries are signed with four algorithms. Existing entries retain their three signatures and remain valid under the current verification policy. When an existing entry is accessed and re-cached (due to TTL refresh or update), it is re-signed with the new four-algorithm set. This means migration happens organically over the natural cache lifecycle. There is no bulk migration window, no downtime, and no emergency change request.
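
A sketch of the read path makes the organic migration visible. All names here are hypothetical ("NEW-PQ-ALGO" stands for whatever fourth family is added), and the stand-in signer keeps the example self-contained:

import hashlib

# Configured algorithm set after a hypothetical fourth family is added.
CONFIGURED = ("ML-DSA-65", "FALCON-512", "SLH-DSA-SHA2-128f", "NEW-PQ-ALGO")

def sign(algo: str, payload: bytes) -> bytes:
    # Stand-in for the real per-family PQ signer.
    return hashlib.sha256(algo.encode() + b"|" + payload).digest()

def on_access(entry: dict) -> bytes:
    # Entries signed under an older algorithm set are re-signed lazily at
    # read time, so migration rides the cache's natural access pattern
    # instead of a bulk re-signing window.
    if set(entry["signatures"]) != set(CONFIGURED):
        entry["signatures"] = {a: sign(a, entry["payload"]) for a in CONFIGURED}
    return entry["payload"]

old_entry = {"payload": b"v", "signatures": {a: b"..." for a in CONFIGURED[:3]}}
on_access(old_entry)
assert set(old_entry["signatures"]) == set(CONFIGURED)  # upgraded in place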

Algorithm Deprecation

Deprecating an algorithm is a three-step process. First, mark the algorithm as deprecated in configuration -- entries with that signature are flagged but still accepted under the current policy. Second, shift verification policy if needed (from 3-of-3 to 2-of-3). Third, at the next key rotation, stop generating new keys for the deprecated algorithm. Existing entries are re-signed without the deprecated algorithm on their next access. The deprecated algorithm's signatures are retained for forensic and audit purposes but are no longer used for trust decisions.

# cachee.toml — Crypto-agile key rotation and verification

[signing]
algorithms = ["ML-DSA-65", "FALCON-512", "SLH-DSA-SHA2-128f"]
verification_policy = "verify-all"  # options: verify-all, verify-majority, verify-any

[signing.deprecated]
# Uncomment to deprecate an algorithm during transition:
# algorithms = ["FALCON-512"]
# re_sign_on_access = true
# retain_for_audit = true

[key_rotation]
ml_dsa_65_interval_days = 90
falcon_512_interval_days = 60
slh_dsa_interval_days = 120
auto_rotate = true
log_rotations = true
retain_old_public_keys = true

[key_rotation.on_deprecation]
stop_new_key_generation = true
re_sign_existing_on_access = true

[migration]
allow_algorithm_addition = true       # add 4th algo without invalidating entries
bulk_re_sign = false                   # re-sign on access, not in bulk
re_sign_batch_size = 1000              # if bulk_re_sign enabled

The Cost of NOT Being Crypto-Agile

The cost of cryptographic agility is measurable: three signatures instead of one, a combined signature overhead of approximately 21 KB per entry, and a few extra microseconds of verification time per read. The cost of not having cryptographic agility is also measurable, but the numbers are much worse.

Consider a concrete scenario: one of the three NIST-standardized PQ algorithms shows a weakness in 2028. Your cache holds 10 million entries. Each entry must be re-signed with a replacement algorithm. Here is the comparison.

| Impact Category | Single-Algorithm Cache | Cachee (Three Algorithms) |
|---|---|---|
| Immediate trust impact | All 10M entries unverifiable | Zero. Remaining 2 signatures still valid. |
| Cache availability | Cold start or serve unverified data | Full availability, uninterrupted |
| Re-signing time | ~8-14 hours (10M entries, single thread) | 0 hours (re-sign on access, organic) |
| Compliance gap | Critical finding: integrity controls absent during migration | None. Two algorithms maintain continuous compliance. |
| Incident classification | P1 security incident, customer notification required | Planned deprecation, no incident |
| Emergency change window | Required (unplanned, high risk) | Not required (standard rotation cycle) |
| Database load spike | 10x-100x read amplification during cold start | None. Cache remains warm. |
| Contract risk | Defense/financial contracts may require incident report | Standard compliance report, no incident |

For defense contractors operating under CNSA 2.0, a critical compliance finding means potential contract suspension. For financial services under PCI DSS 4.0, it means an audit exception that customers and partners will see. For healthcare under HIPAA, it means a reportable incident if cached PHI was served without integrity verification during the migration window. The cost of three extra signatures per entry is negligible compared to any one of these outcomes.

The arithmetic is straightforward. If the probability that any single PQ algorithm is broken in the next decade is p, then the probability that all three independent algorithms are broken is p³. If p is 5% -- a generous estimate for a NIST-standardized algorithm over ten years -- then the probability of all three breaking is 0.0125%. Single-algorithm risk is 400 times higher than three-algorithm risk. That is not a marginal improvement. It is a categorical difference in security posture.

The Arithmetic of Algorithm Independence

If the probability of any single PQ algorithm breaking in 10 years is 5%, then a single-algorithm cache has a 5% chance of catastrophic trust failure. A three-algorithm cache with independent hardness assumptions has a 0.0125% chance. That is a 400x reduction in risk. The cost is 21 KB of additional signature data per entry and microseconds of additional verification time. For any system where cached data integrity matters -- which is every system that caches sensitive data -- the tradeoff is not close.
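
The arithmetic is easy to check:

p = 0.05              # assumed 10-year break probability for one algorithm
p_all = p ** 3        # all three fall only if three independent bets lose
print(f"single-algorithm risk: {p:.4%}")           # 5.0000%
print(f"three-algorithm risk:  {p_all:.4%}")       # 0.0125%
print(f"risk reduction:        {p / p_all:.0f}x")  # 400x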

From Theory to Architecture

Cryptographic agility is not a checkbox. It is not something you achieve by listing three algorithms in a configuration file and verifying with whichever one happens to be loaded. True cryptographic agility means that your system's trust model survives the complete and total failure of any single algorithm -- not in theory, not in a disaster recovery plan, but in production, under load, without human intervention, and without compliance gaps.

The cache layer is where this matters most because it is the layer that every other layer depends on. Your authentication service caches session tokens. Your API gateway caches authorization decisions. Your computation pipeline caches expensive results. Your ML inference service caches model outputs. When the cache goes cold or serves unverified data, every downstream service is affected simultaneously. A single-algorithm cache is a single point of cryptographic failure for your entire infrastructure.

Cachee's three-signature architecture eliminates that single point of failure. ML-DSA-65 covers the MLWE assumption. FALCON-512 covers the NTRU assumption. SLH-DSA covers the hash-based assumption. Three families, three assumptions, three independent bets that the mathematics will hold. When one bet loses -- and history says one eventually will -- two remain. The cache stays warm. The signatures stay valid. The compliance report stays clean. And the migration from three algorithms to a new three happens on the next key rotation cycle, not in an emergency change window at 2 AM.

That is what cryptographic agility means when it is built into the data layer instead of bolted onto the perimeter. Not the ability to swap algorithms. The ability to lose one and keep operating as if nothing happened -- because nothing did.

Your cache should survive any single algorithm failure. Cachee signs every entry with three independent PQ families so it does.
