
31 Nanoseconds and a Cryptographic Proof: What Cachee Actually Does

May 11, 2026 | 14 min read | Engineering

31 nanoseconds. That's how long a Cachee read takes. But speed isn't the interesting part. The interesting part is what happens during those 31 nanoseconds -- and what was already done before the read ever arrived.

Most caching systems define themselves by latency numbers. Sub-millisecond. Microsecond. Nanosecond. The number gets smaller, and the marketing gets louder. But latency is a measurement of absence -- the absence of work, the absence of network hops, the absence of serialization overhead. What matters is not how fast the read is. What matters is what the read actually gives you when it completes.

A Redis GET gives you bytes. You trust those bytes because you trust the server. A Cachee read gives you bytes, a cryptographic proof that those bytes are authentic, a computation fingerprint that binds those bytes to the exact computation that produced them, and a lifecycle state that tells you whether those bytes are still valid. The read is 31 nanoseconds because all the hard work -- the signing, the fingerprinting, the attestation -- already happened on the write path. The read is a pointer dereference. The proof was built before you asked for it.

This post is the technical walkthrough. Not the pitch. Not the positioning. The actual architecture: what happens when you write a value, what happens when you read it, what happens when you verify it, how entries live and die, and why the audit chain makes all of it tamper-evident.

31ns -- L0/L1 Read Latency
24KB -- CAB Bundle Size
3 -- PQ Signature Families
The 31ns Read: What Actually Happens

When your application calls cachee.get(key), the read path is a pointer dereference from the in-process L0 or L1 tier. There is no network hop. There is no TCP connection. There is no TLS handshake. There is no serialization. There is no deserialization. The value lives in the same address space as your application, and the read is a hash table lookup followed by a pointer follow. That is where the 31 nanoseconds comes from.

This is fundamentally different from every networked cache. Redis, even on localhost, requires a full protocol round-trip: your application serializes the command into the RESP protocol, writes it to a socket, the kernel copies it to the Redis process, Redis parses the command, looks up the key, serializes the response, writes it back to the socket, and your application deserializes the response. On a fast machine using Unix domain sockets, this takes 50-80 microseconds. On a typical production deployment with TLS and a network hop, it takes 200-500 microseconds. Cachee's in-process architecture eliminates every one of those steps.

But the 31ns read is not just fast. It is also complete. The value returned from L0/L1 is not raw bytes. It is a CacheeEntry that includes the value, the computation fingerprint, the lifecycle state, and the metadata needed to locate the full CAB (Cache Attestation Bundle) in the L2 content store. If you just need the value and trust the in-process tier, you take the value and move on. If you need the proof, you call GETVERIFIED, and the system retrieves the full bundle from L2 and performs cryptographic verification. The fast path is fast. The verified path is thorough. You choose based on your requirements.

The reason both paths exist is pragmatic. Not every read needs full post-quantum signature verification. A session token lookup in a web application needs speed. A financial computation result being served to a compliance system needs proof. Cachee does not force one model. The in-process tier gives you 31ns reads with metadata. The verification tier gives you cryptographic proof. The architecture supports both because different reads have different trust requirements.

What Happened Before the Read: The Write Path

The 31ns read is fast because the write path did all the work. When your application calls cachee.set(key, value, computation_context), the following sequence executes before the write is acknowledged.

Step 1: Computation Fingerprint

The system computes a computation fingerprint that uniquely identifies what produced this value. The fingerprint is SHA3-256(input_hash || computation_hash || parameter_hash || version || hardware_class). Each component is critical. The input_hash binds the result to the exact input data. The computation_hash binds it to the exact function or model that processed the input. The parameter_hash captures configuration -- hyperparameters, feature flags, thresholds. The version identifies the software release. The hardware_class captures the execution environment, because the same computation on different hardware can produce different floating-point results.

This fingerprint is not a cache key. The cache key is whatever your application uses to identify the entry. The fingerprint is a content-addressed identity for the computation result. Two different applications on two different machines that perform the same computation with the same inputs, parameters, and version will produce the same fingerprint. This is how Cachee detects that a cached result is still valid: if any component of the fingerprint changes, the cached result is treated as a miss, even if the cache key matches.
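A minimal sketch of the fingerprint construction described above, using Python's hashlib. The function and field names here are illustrative, not Cachee's actual API:

```python
import hashlib

def sha3_256(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def computation_fingerprint(input_data: bytes, computation_id: bytes,
                            parameters: bytes, version: str,
                            hardware_class: str) -> bytes:
    """SHA3-256(input_hash || computation_hash || parameter_hash || version || hardware_class)."""
    parts = (
        sha3_256(input_data)        # binds the result to the exact input
        + sha3_256(computation_id)  # binds it to the exact function/model
        + sha3_256(parameters)      # hyperparameters, flags, thresholds
        + version.encode()          # software release
        + hardware_class.encode()   # execution environment
    )
    return sha3_256(parts)

fp_a = computation_fingerprint(b"rows", b"model", b'{"lr": 0.01}', "2.4.1", "x86_64")
fp_b = computation_fingerprint(b"rows", b"model", b'{"lr": 0.01}', "2.4.1", "x86_64")
fp_c = computation_fingerprint(b"rows", b"model", b'{"lr": 0.02}', "2.4.1", "x86_64")
assert fp_a == fp_b  # same computation, same fingerprint
assert fp_a != fp_c  # changed parameter, different fingerprint (treated as a miss)
```

The last two assertions are the point: identical inputs, code, parameters, version, and hardware class produce identical fingerprints anywhere; changing any one component changes the fingerprint.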

Step 2: Content Address

The system derives a content address: SHA3-256(value_hash || fingerprint.digest()). This address is how the L2 content store locates the full CAB bundle. Content addressing means the bundle's location is determined by its contents. If the contents change, the address changes. You cannot modify a bundle at a given address without changing the address itself. This is the same principle that underpins Git's object store and IPFS, applied to cache entries.
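The address derivation can be sketched the same way (again, the names are illustrative):

```python
import hashlib

def sha3_256(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def content_address(value: bytes, fingerprint_digest: bytes) -> bytes:
    """SHA3-256(value_hash || fingerprint.digest()) -- the L2 lookup key."""
    return sha3_256(sha3_256(value) + fingerprint_digest)

fp = b"\x00" * 32  # placeholder fingerprint digest
addr = content_address(b"result-bytes", fp)
moved = content_address(b"result-bytes!", fp)
assert addr != moved  # any change to the contents yields a different address
```

Because the address is a function of the contents, "modify the bundle in place" is not an operation that exists: a modified bundle is, by construction, a different bundle at a different address.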

Step 3: Triple Post-Quantum Signatures

The entry is signed by three independent post-quantum signature families. This is where the write path spends most of its time, and this is why the write is more expensive than the read. Each signature serves a distinct purpose and relies on a different mathematical hardness assumption.

ML-DSA-65 (3,309 bytes). The primary signature. Based on Module-Lattice Digital Signature Algorithm, standardized in FIPS 204. The hardness assumption is Module Learning With Errors (MLWE). ML-DSA-65 provides NIST Security Level 3 -- equivalent to AES-192 classical security. The signature is compact relative to the security level, and verification is fast. This is the signature most verifiers will check first.

FALCON-512 (656 bytes). The compact signature. Based on Fast Fourier Lattice-based Compact Signatures over NTRU, selected by NIST at the end of Round 3 and slated for standardization as FN-DSA (FIPS 206). The hardness assumption is the NTRU lattice problem, which is mathematically independent from MLWE. FALCON-512 produces the smallest signatures of the three families. At 656 bytes per signature, it adds minimal overhead to the CAB bundle while providing an independent verification path.

SLH-DSA (17,088 bytes). The conservative signature. Based on Stateless Hash-Based Digital Signature Algorithm, standardized in FIPS 205. The hardness assumption is the security of the underlying hash function (SHA-256 in the SHA2 variant). SLH-DSA is the most conservative choice because its security depends only on hash function properties, not on structured algebraic problems. If lattice-based cryptography is broken tomorrow -- if someone finds a polynomial-time algorithm for MLWE or NTRU -- SLH-DSA signatures remain valid.

The three signatures represent three independent mathematical bets. An attacker would need to break MLWE lattices, NTRU lattices, and stateless hash functions simultaneously to forge a cache entry. Any single signature failure is detectable and triggers immediate invalidation.

Step 4: CAB Bundle Creation

The three signatures, the computation fingerprint, the content address, the value hash, the lifecycle state, and the public keys needed for verification are assembled into a Cache Attestation Bundle (CAB). The CAB is approximately 24 KB -- dominated by the SLH-DSA signature at 17 KB. The CAB is self-contained: anyone with the bundle can verify every signature without contacting Cachee, without network access, and without trusting any external authority. The math is in the bundle.
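An illustrative layout of the bundle's contents, with the size accounting that makes SLH-DSA the dominant term. This is a sketch of the fields the post describes, not Cachee's actual wire format:

```python
from dataclasses import dataclass, field

@dataclass
class CabBundle:
    """Illustrative CAB layout -- field names and types are assumptions."""
    value_hash: bytes       # 32 bytes, SHA3-256 of the cached value
    fingerprint: bytes      # 32 bytes, computation fingerprint digest
    content_address: bytes  # 32 bytes, L2 lookup key
    lifecycle_state: str    # e.g. "Active"
    ml_dsa_sig: bytes       # 3,309 bytes (FIPS 204, Level 3)
    falcon_sig: bytes       # 656 bytes (NTRU lattice)
    slh_dsa_sig: bytes      # 17,088 bytes (hash-based, FIPS 205)
    public_keys: dict = field(default_factory=dict)  # keys needed to verify offline

# The three signatures alone account for ~21 KB of the ~24 KB bundle.
sig_bytes = 3309 + 656 + 17088
assert sig_bytes == 21053
```

The remaining ~3 KB is hashes, metadata, and the embedded public keys that make the bundle verifiable without contacting Cachee.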

Step 5: Persistence to L2 Content Store

The CAB bundle is persisted to the L2 content store at its content address. The L2 store is backed by sled, an embedded database optimized for concurrent reads. The bundle is written atomically -- it either exists in its entirety or it does not. There is no partial write state. The content address serves as both the lookup key and the integrity check: if the bundle at a given address does not hash to that address, it has been tampered with.

Step 6: Audit Log Entry

An entry is appended to the hash-chained audit log. The log entry includes the event type (EntryCreated), the content address, the computation fingerprint, the timestamp, and the hash of the previous log entry. The chain structure means that modifying or deleting any log entry breaks the hash chain from that point forward. Audit log integrity is discussed in detail below.

Write Path Summary

Every value that enters Cachee passes through six steps before it is acknowledged: fingerprint computation, content addressing, triple PQ signing, CAB bundle creation, L2 persistence, and audit log entry. The 31ns read is fast because this work is already done. The read just follows a pointer. The proof was built at write time.

What Happens on GETVERIFIED: The Verification Path

The standard GET returns the cached value from the in-process L0/L1 tier in 31 nanoseconds. GETVERIFIED does something different. It retrieves the full CAB bundle from the L2 content store and performs actual post-quantum cryptographic verification. This is the path you use when you need proof, not just speed.

Step 1: Bundle Retrieval

The system looks up the content address associated with the cache key and retrieves the full CAB bundle from the L2 sled store. The content address was computed at write time and stored alongside the L0/L1 entry. Retrieval is a single indexed read from sled, typically completing in 2-5 microseconds depending on whether the sled page is in the OS page cache.

Step 2: Content Address Verification

Before checking any signatures, the system verifies the content address. It recomputes SHA3-256(value_hash || fingerprint.digest()) from the bundle's contents and compares it to the address used for retrieval. If the hashes do not match, the bundle has been modified since it was written. Verification stops immediately, and the entry is marked as compromised. This check is fast -- a single SHA3-256 computation -- and catches any modification to the bundle's contents regardless of how it occurred.

Step 3: ML-DSA-65 Signature Verification

The first signature check. The system extracts the ML-DSA-65 signature (3,309 bytes) and the corresponding public key from the bundle and verifies the signature against the content hash. ML-DSA-65 verification is computationally lightweight -- approximately 0.3 milliseconds on modern hardware. If the signature is invalid, the verification result records the failure and continues to the remaining signatures to build a complete verification report.

Step 4: FALCON-512 Signature Verification

The second signature check. FALCON-512 verification uses fast Fourier transforms over the NTRU lattice and completes in approximately 0.1 milliseconds. The FALCON signature is independent from the ML-DSA signature -- different key pair, different algorithm, different mathematical hardness assumption. An attacker who breaks ML-DSA cannot leverage that break to forge a FALCON signature.

Step 5: SLH-DSA Signature Verification

The third and most conservative signature check. SLH-DSA verification involves traversing a hash tree and is the most computationally expensive of the three, completing in approximately 1-3 milliseconds depending on the parameter set. This is also the most conservative check: even if both lattice-based schemes (ML-DSA and FALCON) are broken by a future mathematical advance, the SLH-DSA signature remains valid as long as SHA-256 is secure.

Step 6: Threshold Evaluation

The system evaluates a 2-of-3 threshold. If at least two of the three signatures verify, the entry is considered authentic. The threshold model exists because cryptographic algorithms can be deprecated. If one of the three PQ families is weakened by future research, entries signed before the weakness was discovered remain valid as long as the other two signatures hold. The threshold is configurable, but 2-of-3 is the default because it provides quantum-resistant verification even in the face of a single algorithm compromise.
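The threshold logic itself is simple. A sketch, where the mapping of signature counts onto the Verified/PartiallyVerified/Failed statuses mentioned later in this post is my assumption:

```python
def evaluate_threshold(results: dict, threshold: int = 2) -> str:
    """2-of-3 by default: the entry is authentic if >= threshold signatures verify."""
    passed = sum(1 for ok in results.values() if ok)
    if passed == len(results):
        return "Verified"           # full three-family coverage
    if passed >= threshold:
        return "PartiallyVerified"  # authentic, but one family failed or is deprecated
    return "Failed"

assert evaluate_threshold({"ml_dsa": True, "falcon": True, "slh_dsa": True}) == "Verified"
assert evaluate_threshold({"ml_dsa": True, "falcon": False, "slh_dsa": True}) == "PartiallyVerified"
assert evaluate_threshold({"ml_dsa": False, "falcon": False, "slh_dsa": True}) == "Failed"
```

Note that a single failing signature does not reject the entry; it degrades the status, which is exactly the property that lets pre-deprecation entries survive a future algorithm break.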

Step 7: Audit Log Recording

The verification result -- pass, partial pass, or fail -- is recorded in the audit log as a VerificationPerformed event. The event includes which signatures passed, which failed, the verification timestamp, and the content address. This creates an auditable record of every verification ever performed on every cached entry. When a compliance team asks "has this value been verified since it was written," the answer is in the audit log.

Step 8: Response Assembly

The full CacheeReadResponse is assembled and returned. It includes the cached value, the verification status (Verified, PartiallyVerified, or Failed), a signature summary showing the result for each of the three families, the computation fingerprint that produced the value, the provenance metadata (who wrote it, when, from where), and the current lifecycle state. The caller receives not just the value, but the full context needed to make a trust decision about that value.

| GETVERIFIED Step | Operation | Typical Latency |
| --- | --- | --- |
| Bundle retrieval | sled indexed read | 2-5 us |
| Content address check | SHA3-256 recomputation | <1 us |
| ML-DSA-65 verify | Lattice signature check | ~0.3 ms |
| FALCON-512 verify | NTRU FFT signature check | ~0.1 ms |
| SLH-DSA verify | Hash tree traversal | 1-3 ms |
| Threshold eval | 2-of-3 logic | <1 us |
| Audit log append | Hash-chained append | <1 us |
| Total | | ~1.5-3.5 ms |

The GETVERIFIED path takes 1.5-3.5 milliseconds, which is 50,000 to 110,000 times slower than the 31ns GET path. That is the tradeoff. You use GET when you trust the in-process tier and need speed. You use GETVERIFIED when you need mathematical proof. Both paths return the same value. The difference is whether the value comes with a receipt.

The Lifecycle State Machine: How Entries Live and Die

Every Cachee entry has a lifecycle state. The state is not a boolean (valid/invalid). It is a state machine with five states and defined transitions between them. Every transition is authorized, recorded, and independently verifiable.

Active

The entry is current, valid, and servable. All three PQ signatures were valid at the time of the most recent verification. The computation fingerprint matches the current software version and parameters. The entry's validity window has not closed. This is the default state for newly written entries. Most reads hit Active entries, and the 31ns read path returns them without further checks.

Superseded

A new version of the entry exists. The superseded entry remains in the content store for historical reference and audit purposes, but reads are directed to the successor entry. The transition from Active to Superseded records the content address of the successor, creating a linked chain of entry versions. This is how Cachee implements versioning: every version of every entry is preserved, content-addressed, and linked to its predecessor and successor.

Revoked

The entry has been explicitly revoked. This is a terminal state -- a revoked entry cannot return to Active. Revocation is used when an entry is known to be compromised, incorrect, or produced by a computation that has been invalidated. The transition from Active to Revoked records the revocation reason (a structured enum, not a free-text field), the authority that performed the revocation (identified by key type and key ID), and a cryptographic proof that the revocation was authorized. Revoked entries are never served to clients. Any read for a revoked entry returns a miss with the revocation reason attached.

Expired

The entry's validity window has closed. Every Cachee entry has a validity window defined by the cache contract for its computation type. When the window closes, the entry transitions from Active to Expired. Unlike TTL-based expiration in Redis, which silently deletes the entry, Cachee's expiration is a state transition: the entry remains in the content store, the transition is recorded in the audit log, and any read for an expired entry returns a miss with the expiration metadata. The entry is not deleted. It is retired with a record.

Deprecated

One of the three PQ signature families has been weakened or deprecated by NIST. The entry is still valid under the 2-of-3 threshold, but it no longer has full three-family coverage. The transition from Active to Deprecated records which family was deprecated, and the entry can continue to be served (with the Deprecated status visible in the read response) until it is either superseded by a re-signed version or revoked. This state exists specifically because post-quantum cryptography is evolving. An algorithm that is secure today may be weakened by future research. Deprecation lets Cachee degrade gracefully rather than invalidating millions of entries overnight.

Every State Transition Is Permanent Evidence

Every transition between lifecycle states is hash-chained in the audit log. The transition record includes the source state, the destination state, the authority (which key performed the transition), the reason (a structured value, not free text), and a cryptographic proof. You cannot silently move an entry from Active to Expired and back to Active. The audit chain records every transition, and the chain is tamper-evident. An auditor can reconstruct the complete lifecycle of every entry that has ever existed in the cache.
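The transition rules described above can be sketched as an allowlist. The exact set of permitted transitions is my reading of this post (Active can move to any other state; Deprecated can still be superseded or revoked; Revoked is terminal):

```python
# Allowed lifecycle transitions -- an interpretation of the rules in this post.
ALLOWED = {
    ("Active", "Superseded"), ("Active", "Revoked"),
    ("Active", "Expired"),    ("Active", "Deprecated"),
    ("Deprecated", "Superseded"), ("Deprecated", "Revoked"),
}

def transition(state: str, target: str) -> str:
    """Apply a transition, rejecting anything outside the allowlist."""
    if (state, target) not in ALLOWED:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

assert transition("Active", "Deprecated") == "Deprecated"
try:
    transition("Revoked", "Active")  # Revoked is terminal: this must fail
    raise AssertionError("terminal state was escaped")
except ValueError:
    pass
```

In the real system each accepted transition would also carry the authority, reason, and proof fields the post describes, and would be appended to the audit chain.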

The Audit Chain: Tamper-Evident History

Every event in Cachee -- every write, every read, every verification, every state transition -- is appended to a hash-chained audit log. The chain is the structure that makes everything else tamper-evident. Without the chain, signatures prove authenticity at a point in time. With the chain, signatures prove authenticity across time.

Each audit log entry has the form: SHA3-256(prev_hash || timestamp || sequence || event_type || event_data). The prev_hash is the hash of the immediately preceding log entry. The sequence is a monotonically increasing integer. The event_type identifies the kind of event (EntryCreated, EntryRead, VerificationPerformed, StateTransition, etc.). The event_data contains the details specific to each event type.
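A minimal sketch of the append operation, using Python's hashlib. The serialization and field names are illustrative, not Cachee's wire format:

```python
import hashlib
import json
import time

def append_event(log: list, event_type: str, event_data: dict) -> dict:
    """Append SHA3-256(prev_hash || timestamp || sequence || event_type || event_data)."""
    prev_hash = log[-1]["hash"] if log else b"\x00" * 32  # genesis sentinel
    seq = len(log) + 1                                    # strictly monotonic
    ts = time.time_ns()
    payload = (prev_hash + str(ts).encode() + str(seq).encode()
               + event_type.encode()
               + json.dumps(event_data, sort_keys=True).encode())
    entry = {"prev_hash": prev_hash, "timestamp": ts, "sequence": seq,
             "event_type": event_type, "event_data": event_data,
             "hash": hashlib.sha3_256(payload).digest()}
    log.append(entry)
    return entry

log = []
append_event(log, "EntryCreated", {"address": "ab12"})
append_event(log, "VerificationPerformed", {"result": "Verified"})
assert log[1]["prev_hash"] == log[0]["hash"]  # each entry commits to its predecessor
```

Each entry's hash commits to everything before it, which is what makes the three tamper-evidence properties below hold.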

The chain structure provides three properties that traditional logs do not.

Deletion is detectable. If you delete a log entry, the chain breaks. The entry that follows the deleted entry includes a prev_hash that references the deleted entry. When the chain is verified, the system tries to look up the entry referenced by prev_hash, finds nothing, and reports a chain break at that position. You cannot delete evidence without leaving evidence of the deletion.

Modification is detectable. If you modify a log entry, its hash changes. The entry that follows it includes a prev_hash that references the original hash, not the modified hash. When the chain is verified, the system computes the hash of the modified entry, compares it to the prev_hash in the following entry, finds a mismatch, and reports a chain corruption at that position. You cannot rewrite history without breaking the chain.

Reordering is detectable. Each entry includes a sequence number that must be strictly monotonically increasing. If entries are reordered, the sequence numbers are out of order, and the prev_hash references no longer point to the immediately preceding entry. Both anomalies are detected during chain verification.
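A sketch of that chain walk, using the entry construction given above (field names illustrative), showing that rewriting a single entry is caught:

```python
import hashlib
import json

def entry_hash(prev_hash, ts, seq, event_type, event_data):
    payload = (prev_hash + str(ts).encode() + str(seq).encode()
               + event_type.encode()
               + json.dumps(event_data, sort_keys=True).encode())
    return hashlib.sha3_256(payload).digest()

def verify_chain(log):
    """Walk the full chain: hash linkage, sequence order, entry integrity."""
    prev = b"\x00" * 32
    for i, e in enumerate(log):
        if e["sequence"] != i + 1:
            return f"sequence break at {i}"
        if e["prev_hash"] != prev:
            return f"chain break at {i}"
        if entry_hash(e["prev_hash"], e["timestamp"], e["sequence"],
                      e["event_type"], e["event_data"]) != e["hash"]:
            return f"corruption at {i}"
        prev = e["hash"]
    return "INTACT"

# Build a tiny chain, then rewrite history in the middle entry.
log, prev = [], b"\x00" * 32
for seq, etype in enumerate(["EntryCreated", "EntryRead", "StateTransition"], start=1):
    h = entry_hash(prev, 0, seq, etype, {})
    log.append({"prev_hash": prev, "timestamp": 0, "sequence": seq,
                "event_type": etype, "event_data": {}, "hash": h})
    prev = h
assert verify_chain(log) == "INTACT"
log[1]["event_type"] = "EntryDeleted"   # tamper with entry 1
assert verify_chain(log) == "corruption at 1"
```

The walk is a complete pass, not a sample: every entry is rehashed and every link is checked, which is the same shape of check AUDITVERIFY performs.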

The AUDITVERIFY command walks the entire chain from the first entry to the last, verifying the hash linkage, sequence ordering, and structural integrity at every step. It produces a verification report that states whether the chain is intact, and if not, exactly where the break occurred. This is not sampling. This is not statistical analysis. It is a complete verification of every entry in the chain.

For long-running deployments with millions of audit entries, Cachee provides periodic Merkle root anchoring. At configurable intervals, the system computes a Merkle root over a range of audit entries and records it as a checkpoint. This allows verification to be performed in segments rather than over the entire chain, and it provides a compact proof (a single hash) that summarizes the integrity of a time range. The Merkle root can be exported and stored externally -- in a blockchain, in a compliance database, in a safe deposit box -- as an independent integrity witness.
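A standard Merkle-root construction over a range of entry hashes looks like this. The odd-level duplication rule is a common convention and an assumption here, not a documented Cachee detail:

```python
import hashlib

def merkle_root(leaves: list) -> bytes:
    """Compute a Merkle root over a range of audit entries (as bytes)."""
    if not leaves:
        return b"\x00" * 32
    level = [hashlib.sha3_256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [hashlib.sha3_256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

entries = [f"audit-entry-{i}".encode() for i in range(1000)]
checkpoint = merkle_root(entries)          # one 32-byte hash summarizes 1,000 entries
assert merkle_root(entries) == checkpoint  # deterministic
entries[500] = b"tampered"
assert merkle_root(entries) != checkpoint  # any change anywhere shifts the root
```

That single 32-byte root is what gets exported as the external integrity witness: if the recomputed root over the range ever disagrees with the stored checkpoint, something in that range changed.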

# Verify the complete audit chain
cachee AUDITVERIFY

# Output:
# Chain length: 1,247,893 entries
# First entry: 2026-01-15T00:00:00.000Z (seq: 1)
# Last entry:  2026-05-11T14:23:17.442Z (seq: 1247893)
# Hash chain:  INTACT
# Sequence:    MONOTONIC
# Merkle roots: 14 checkpoints verified
# Verification time: 2.3 seconds
# Result: PASS

Putting It All Together: The Full Picture

The architecture is a layered system where each layer serves a different purpose. The L0/L1 in-process tier serves speed -- 31ns reads from the application's address space. The L2 content store serves proof -- CAB bundles that are self-contained, content-addressed, and independently verifiable. The audit chain serves accountability -- a tamper-evident record of every operation ever performed on every entry.

These layers are not independent features bolted together. They are a single architecture where each layer depends on the others. The 31ns read is fast because the write path already built the proof and stored it in L2. The proof is trustworthy because the three PQ signatures are computed at write time, not retrofitted later. The audit chain is tamper-evident because it hash-chains every event, including the write that created the entry and the verification that checked it. The lifecycle state machine is enforceable because state transitions are recorded in the audit chain with authority and proof.

When someone asks "what does Cachee actually do," the answer is this: it caches computed results at 31 nanoseconds, signs them with three independent post-quantum signature families, fingerprints the computation that produced them, content-addresses them in a persistent store, tracks their lifecycle through a state machine with authorized transitions, and records everything in a tamper-evident hash-chained audit log. The fast path is 31ns. The verified path is 1.5-3.5ms. The audit path covers everything that ever happened.

Most systems are fast or provable. Fast systems give you bytes and trust. Provable systems give you proofs and latency. Cachee is both. 31 nanoseconds for the read. A full cryptographic proof chain behind every value. The speed comes from architecture -- in-process tiers that eliminate network overhead. The proof comes from the write path -- signatures, fingerprints, and content addressing computed before the value is ever read. The accountability comes from the audit chain -- a tamper-evident record that proves not just what the value is, but where it came from, who created it, how it was verified, and what happened to it over its entire lifecycle.

That is what Cachee actually does. 31 nanoseconds is how long it takes. The cryptographic proof is what you get.

The Architecture in One Paragraph

Every write computes a SHA3-256 computation fingerprint, derives a content address, signs the entry with ML-DSA-65 + FALCON-512 + SLH-DSA, assembles a 24KB CAB bundle, persists it to the L2 content store, and appends a hash-chained audit log entry. Every read from L0/L1 takes 31 nanoseconds. Every GETVERIFIED retrieves the CAB, checks all three PQ signatures, evaluates a 2-of-3 threshold, records the verification in the audit log, and returns a full CacheeReadResponse with verification status, provenance, and lifecycle state. Every state transition is authorized, recorded, and independently verifiable. Every audit entry is hash-chained. The chain is tamper-evident. The bundles are self-contained. The proof is always there, whether you look at it or not.

31 nanoseconds. Three post-quantum signatures. A tamper-evident audit chain. That is what your cache should do.
