
We Built an Audit System. Then Disguised It as a Cache.

May 11, 2026 | 14 min read | Engineering

Cachee started as a performance play. 31-nanosecond reads. Redis compatibility. The pitch was simple: faster infrastructure. But somewhere between implementing computation fingerprints and hash-chaining state transitions, we realized we hadn't built a cache. We'd built an audit system.

That realization changed everything about how we think about what Cachee is, what market it belongs in, and who should be buying it. This is the story of how a caching engine accidentally became evidence infrastructure -- and why we kept calling it a cache anyway.

The Performance Story We Told

When we first started building Cachee, the thesis was straightforward. Redis is slow. Not slow in the absolute sense -- it is fast compared to a database query. But it is slow in the ways that matter when you are building latency-sensitive systems. Every Redis operation requires a TCP round trip. Even on a fast network, that is 200 to 500 microseconds of wall clock time consumed by serialization, network transit, deserialization, and protocol parsing. For applications that need to serve thousands of cache lookups per request -- fraud scoring, real-time recommendations, trading systems -- those microseconds add up to milliseconds, and those milliseconds add up to real money.

We built an in-process cache tier. No network hop. No serialization. The data lives in the same memory space as the application. We measured 31-nanosecond reads. That is not a typo. Thirty-one nanoseconds. We benchmarked it against Redis, ElastiCache, Memcached, DragonflyDB, and every other cache on the market. We were 10,000 times faster than Redis on hot reads. We implemented 140 Redis commands natively in Rust, so migration was a configuration change, not a rewrite. The performance story was real, and it was compelling.

31ns -- In-Process Read Latency
10,000x -- Faster Than Redis
140 -- Redis Commands Native in Rust

We went to market with that story. "Drop-in Redis replacement. 10,000x faster. One binary. One config file." The pitch landed with infrastructure teams who were tired of managing ElastiCache clusters and debugging Redis connection pool exhaustion at 3 AM. We were solving a real problem -- cache latency -- and we were solving it convincingly.

But even during those early conversations, something else was happening in the codebase. Something we did not plan and did not initially recognize for what it was.

The Accidental Discovery

The first feature that set Cachee apart from a normal cache was computation fingerprinting. We did not build it for compliance. We built it because we wanted deterministic cache keys. In a normal cache, the application constructs a key string -- something like user:123:profile -- and the cache stores whatever value you give it. The cache has no idea what computation produced that value. It does not know the inputs, the algorithm, the parameters, or the software version. It just stores bytes under a string key.

We wanted something better. We wanted the cache key itself to encode what computation produced the value. So we built fingerprints: SHA3-256(input || computation_type || parameters || version || hardware_class). If any of those fields change -- different input, different algorithm version, different parameters, different hardware -- the fingerprint changes, and you get a cache miss instead of a stale hit. This was a correctness feature, not a compliance feature.
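
To make that concrete, here is a minimal sketch of how such a fingerprint could be derived. The length-prefixed field encoding and the example field values are assumptions for illustration, not Cachee's actual wire format.

import hashlib

def fingerprint(input_bytes: bytes, computation_type: str, parameters: str,
                version: str, hardware_class: str) -> str:
    h = hashlib.sha3_256()
    # Length-prefix each field so adjacent fields cannot collide ("ab"+"c" vs "a"+"bc").
    for field in (input_bytes, computation_type.encode(), parameters.encode(),
                  version.encode(), hardware_class.encode()):
        h.update(len(field).to_bytes(4, "big"))
        h.update(field)
    return h.hexdigest()

# Bumping the software version changes the fingerprint, so the lookup misses
# instead of serving a result computed by the old code.
fp_old = fingerprint(b"user:123", "risk_score", "model=gbm", "2.4.1", "x86_64-avx512")
fp_new = fingerprint(b"user:123", "risk_score", "model=gbm", "2.4.2", "x86_64-avx512")
assert fp_old != fp_new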

The second feature was triple post-quantum signatures. We did not build this for audit trails. We built it because we were entering the post-quantum migration window, and we wanted cached values to be verifiable. Every cache entry gets signed by three independent post-quantum signature algorithms: ML-DSA-65 (FIPS 204), FALCON-512, and SLH-DSA-SHA2-128f-simple (FIPS 205). Three different mathematical hardness assumptions -- MLWE lattices, NTRU lattices, and stateless hash functions. An attacker would need to break all three simultaneously to forge a cached entry. This was a security feature, not an audit feature.

The third feature was the state machine. Every cache entry follows a lifecycle: Active, Superseded, Revoked, Expired. Every state transition is recorded with a TransitionAuthority (which key authorized the change) and a TransitionProof (a cryptographic proof that the transition was valid). We built this because we wanted strong cache invalidation semantics. When a value is superseded by a newer computation, we wanted to know which computation replaced it, when, and with what authorization. This was an invalidation feature, not a compliance feature.
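
A rough sketch of that lifecycle follows. The type and field names are illustrative stand-ins (the real TransitionAuthority and TransitionProof types are richer), and which transitions are legal is an assumption: here, active entries can move to any terminal state, and terminal states accept nothing further.

from dataclasses import dataclass
from enum import Enum, auto

class EntryState(Enum):
    ACTIVE = auto()
    SUPERSEDED = auto()
    REVOKED = auto()
    EXPIRED = auto()

# Assumed transition rules for illustration.
ALLOWED = {
    EntryState.ACTIVE: {EntryState.SUPERSEDED, EntryState.REVOKED, EntryState.EXPIRED},
    EntryState.SUPERSEDED: set(),
    EntryState.REVOKED: set(),
    EntryState.EXPIRED: set(),
}

@dataclass(frozen=True)
class TransitionRecord:
    key: str
    from_state: EntryState
    to_state: EntryState
    timestamp_ns: int
    authority: str    # stand-in for TransitionAuthority: which key authorized the change
    proof: bytes      # stand-in for TransitionProof: evidence the transition was valid

def record_transition(record: TransitionRecord) -> TransitionRecord:
    if record.to_state not in ALLOWED[record.from_state]:
        raise ValueError(f"illegal transition: {record.from_state.name} -> {record.to_state.name}")
    return record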

The fourth feature was hash-chaining. Every state transition is linked to the previous transition via SHA3-256. The chain forms a tamper-evident log: if any entry is modified after the fact, every subsequent hash in the chain breaks. We built this because it was the natural way to implement ordered state transitions with integrity guarantees. It was a data structure choice, not a compliance decision.
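
In miniature, the chain works roughly like this. The JSON record encoding is an assumption; the point is that one modified record invalidates every hash after it.

import hashlib
import json

def link_hash(prev_hash: bytes, record: dict) -> bytes:
    # Each link commits to the previous hash and a canonical encoding of the record.
    encoded = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha3_256(prev_hash + encoded).digest()

def build_chain(records: list[dict]) -> list[bytes]:
    hashes, prev = [], b"\x00" * 32   # all-zero predecessor for the genesis entry
    for record in records:
        prev = link_hash(prev, record)
        hashes.append(prev)
    return hashes

history = [
    {"op": "write", "key": "user:123:profile", "t": 1},
    {"op": "supersede", "key": "user:123:profile", "t": 2},
]
chain = build_chain(history)

# Tamper with the first record: every subsequent hash stops matching.
history[0]["t"] = 99
assert build_chain(history) != chain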

And then one day we stepped back and looked at what we had built. Computation fingerprints that bind every cached result to its exact inputs and parameters. Triple cryptographic signatures that make every entry independently verifiable. A state machine that tracks the complete lifecycle of every piece of data. A hash chain that makes the history tamper-evident. Merkle anchoring that lets you prove the state of the entire cache at any point in time.

We had not built a cache with audit features bolted on. We had built an audit system that happens to be extremely fast.

The Moment It Clicked

We did not set out to build evidence infrastructure. We set out to build a fast, correct, verifiable cache. But when you sign every entry with three PQ families, fingerprint every computation, chain every state change, and anchor the result in a Merkle tree -- you do not have a cache anymore. You have evidence. The audit trail is not a feature we added. It is a consequence of the architecture we chose.

What Makes It an Audit System

An audit system must satisfy five properties that traditional logs do not. It must be tamper-evident (modifications are detectable), complete (no entries can be silently deleted), attributable (every action is tied to an identity), independently verifiable (a third party can check it without trusting the system), and replayable (you can reconstruct the state at any point in time). Cachee satisfies all five, not because we designed it as an audit system, but because these properties emerged from the engineering decisions we made for correctness and security.

Hash-Chained Audit Log

Every state transition in Cachee is linked to its predecessor via SHA3-256. The chain starts at the genesis entry -- the first write of a cache key -- and extends through every modification, supersession, revocation, and expiration. Each link includes the previous hash, the current operation, the timestamp, the TransitionAuthority, and the TransitionProof. If an attacker modifies any entry in the chain, every subsequent hash changes, and the tampering is detectable by any party that holds a checkpoint hash.

This is not application-level logging that writes to a file. This is a cryptographic data structure where integrity is a mathematical property, not an operational practice. You do not need to trust that nobody modified the log file. You can verify it. The Merkle anchoring extends this further: the root hash of the entire cache state tree can be published to an external anchor (a blockchain, a timestamping authority, a regulatory archive), creating a public commitment that the cache state existed at a specific time. Anyone with the root hash can verify any entry's membership in the tree without accessing any other entry.
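
A toy version of that membership check is below. The leaf encoding and sibling-ordering rules are chosen arbitrarily for illustration and are not Cachee's actual tree format.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(b"\x00" + leaf) for leaf in leaves]   # domain-separate leaves from inner nodes
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])                  # duplicate the last node on odd levels
        level = [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_membership(leaf: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
    node = h(b"\x00" + leaf)
    for sibling, side in path:                       # side says whether the sibling sits left or right
        node = h(b"\x01" + sibling + node) if side == "L" else h(b"\x01" + node + sibling)
    return node == root

# Two-entry tree: the membership path for entry A is just entry B's leaf hash.
a, b = b"entry-A", b"entry-B"
root = merkle_root([a, b])
assert verify_membership(a, [(h(b"\x00" + b), "R")], root)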

Every Read Is Verifiable

In a traditional cache, GET key returns bytes. You trust that the bytes are what was originally stored. You have no way to verify this. In Cachee, GETVERIFIED key returns the value along with all three PQ signatures and the computation fingerprint. The caller can verify independently that the value was produced by the computation described in the fingerprint, that it has not been modified since it was stored, and that the signatures are valid. This is not a special compliance mode. The signatures exist on every entry. GETVERIFIED simply returns them alongside the value.

The verification is real cryptography, not checksums. ML-DSA-65 provides lattice-based signature security. FALCON-512 provides NTRU-based signature security. SLH-DSA provides hash-based signature security. Three independent mathematical hardness assumptions, each verified independently. If you run GETVERIFIED and all three signatures check out, you have a mathematical guarantee that the cached value is authentic and unmodified. Not a promise. Not a log entry that says "integrity check passed." A verifiable proof.
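
What the caller does with a GETVERIFIED response looks roughly like the sketch below. The response field names are assumptions, and the three verifier callables stand in for real ML-DSA-65, FALCON-512, and SLH-DSA verification routines from whatever post-quantum library you trust.

from typing import Callable

# (public_key, message, signature) -> bool
Verifier = Callable[[bytes, bytes, bytes], bool]

def verify_entry(response: dict,
                 ml_dsa_verify: Verifier,
                 falcon_verify: Verifier,
                 slh_dsa_verify: Verifier) -> bool:
    # Assumption: the signed message binds the value to its computation fingerprint,
    # so a signature over some other computation's output cannot be replayed here.
    message = response["fingerprint"] + response["value"]
    checks = (
        ml_dsa_verify(response["ml_dsa_pubkey"], message, response["ml_dsa_sig"]),
        falcon_verify(response["falcon_pubkey"], message, response["falcon_sig"]),
        slh_dsa_verify(response["slh_dsa_pubkey"], message, response["slh_dsa_sig"]),
    )
    # Accept only if all three independent signature families verify.
    return all(checks)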

Every State Transition Is Recorded

When a cache entry moves from Active to Superseded, the transition record includes the identity of the key that authorized the transition, the reason for the transition, the hash of the previous state, the hash of the new state, and a cryptographic proof binding all of these together. When an entry expires, the same record is created with the TransitionAuthority set to the system clock and the reason set to TTL expiration. When an entry is revoked -- perhaps because the computation that produced it was found to be incorrect -- the record includes which authority revoked it and what evidence triggered the revocation.

This means every cache key has a complete, tamper-evident history from genesis to its current state. The AUDITLOG key command returns this complete history. An auditor can take any key, request its audit log, and see every operation that has been performed on it: when it was created, which computation produced it, how many times it was read, when it was superseded, what replaced it, and when it was eventually expired or revoked. This is not a log aggregation pipeline reading from Elasticsearch. This is native to the data structure that stores the cached value.

Full Lifecycle Replay

Because every state transition is hash-chained and every entry is fingerprinted, you can reconstruct the exact state of the cache at any point in time. This is not eventual consistency with conflict resolution. This is deterministic replay. Given the genesis state and the chain of transitions, the state at time T is computed, not estimated. If an auditor asks "what was in the cache at 2:47 PM on March 15," you do not search logs and hope you have enough entries to reconstruct the answer. You replay the chain to that timestamp and get the exact state.

This property -- deterministic reconstruction of historical state -- is one of the hardest requirements in audit infrastructure. Most audit systems approximate it by taking periodic snapshots and interpolating between them. Cachee does not approximate. The hash chain is the complete history. There are no gaps, because gaps would break the chain and be immediately detectable.
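
Conceptually, replay is just a fold over the ordered transition records, as in this simplified sketch. The record fields and per-key state shape are assumptions for illustration.

def state_at(transitions: list[dict], t: int) -> dict:
    """Reconstruct per-key state from genesis up to and including time t."""
    state: dict[str, dict] = {}
    for record in transitions:                    # records arrive in chain order
        if record["t"] > t:
            break
        if record["op"] in ("write", "supersede"):
            state[record["key"]] = {"value": record["value"], "status": "active"}
        elif record["op"] in ("revoke", "expire"):
            state[record["key"]]["status"] = record["op"] + "d"
    return state

history = [
    {"t": 10, "op": "write",     "key": "user:123:profile", "value": b"v1"},
    {"t": 20, "op": "supersede", "key": "user:123:profile", "value": b"v2"},
    {"t": 30, "op": "expire",    "key": "user:123:profile"},
]
assert state_at(history, 15)["user:123:profile"]["value"] == b"v1"
assert state_at(history, 25)["user:123:profile"]["value"] == b"v2"
assert state_at(history, 35)["user:123:profile"]["status"] == "expired"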

Self-Contained Verification

The most important property of Cachee as an audit system is that verification does not require Cachee. The Cache Attestation Bundles (CABs) that Cachee produces are self-contained verification packages. Each CAB includes the cached value, all three PQ signatures, the computation fingerprint, the state transition history, and the public keys needed for verification. An auditor can take a CAB, disconnect from the network, and verify every claim using standard post-quantum cryptography libraries. No API calls. No trust in the Cachee service. No network access. The evidence verifies itself.
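
The shape of that offline check is simple: everything the verification needs is either inside the bundle or supplied by the auditor's own tooling. A heavily simplified sketch, with bundle field names and helper signatures assumed for illustration:

from typing import Callable

def verify_cab(bundle: dict,
               verify_chain: Callable[[list, list], bool],
               verify_signatures: Callable[[dict], bool],
               recompute_fingerprint: Callable[[dict], bytes]) -> bool:
    # The auditor supplies the three checks from their own trusted tooling;
    # nothing here talks to the network or to the Cachee service.
    chain_ok = verify_chain(bundle["transitions"], bundle["transition_hashes"])
    sigs_ok = verify_signatures(bundle["entry"])
    fingerprint_ok = recompute_fingerprint(bundle["metadata"]) == bundle["entry"]["fingerprint"]
    return chain_ok and sigs_ok and fingerprint_ok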

This is the property that separates an audit system from a logging system. A logging system says "trust me, this is what happened." An audit system says "here is a mathematical proof of what happened. Verify it yourself." Cachee produces the latter. Every entry is a proof. Every history is a chain of proofs. Every bundle is a self-contained verification package that an adversarial auditor can check without trusting any part of the infrastructure.

Why We Kept Calling It a Cache

If Cachee is really an audit system, why do we call it a cache? Because audit systems have a reputation problem. They are slow. They are complex. They require dedicated infrastructure, specialized configuration, and often a dedicated team to operate. They are the systems that engineers resist integrating because they add latency, complexity, and operational burden to every service that touches them.

Cachee has none of those properties. It reads in 31 nanoseconds. It deploys as a single binary with a single configuration file. There is no external database, no message queue, no log aggregation pipeline, no Elasticsearch cluster, no Kafka topic. The audit trail is not an external system that your application writes to. It is a byproduct of the storage model. When you write a value to Cachee, the fingerprint, signatures, and chain links are computed as part of the write operation. When you read a value, the signatures are verified as part of the read operation. The audit trail does not add latency because it is not a separate operation. It is the operation.

This is the fundamental insight that makes Cachee different from every other compliance tool on the market. Traditional audit infrastructure is an overlay: a separate system that watches your real system and records what it does. That overlay adds latency (observability tax), complexity (another system to operate), and fragility (what happens when the audit system is down?). Cachee eliminates the overlay by making the storage layer itself produce evidence as a side effect of storing data. There is no audit system to go down, because the audit trail is the data structure.

We also kept calling it a cache because of how people buy things. When an engineering team evaluates caching infrastructure, the budget comes from infrastructure. The decision is made by a principal engineer or an infrastructure lead. The evaluation criteria are latency, throughput, compatibility, and operational simplicity. Cachee wins on all of those criteria against every competitor in the market. The audit capabilities are a bonus that the engineering team gets for free.

But when a CISO evaluates audit infrastructure, the budget comes from security or compliance. The decision involves legal, risk, and the external audit firm. The evaluation criteria are evidence quality, verifiability, chain of custody, and regulatory mapping. Cachee wins on all of those criteria too -- but the buyer is different, the budget is different, and the sales motion is different.

By positioning as a cache, we get into infrastructure budgets with a performance pitch. Once deployed, the compliance team discovers that their cache produces better audit evidence than their dedicated audit logging system. That is a much easier conversation than trying to sell a compliance tool to an engineering team that has never heard of you.

The Positioning Shift

The realization that Cachee is an audit system disguised as a cache changed our go-to-market strategy in three specific ways.

First, it changed the competitive frame. When we positioned as "faster Redis," we competed against Redis, DragonflyDB, Memcached, Momento, and Upstash. The evaluation criteria were latency and cost. The switching cost was low, and the differentiation was a benchmark number. When we position as "provable infrastructure," we compete against nobody. There is no other cache that produces self-contained cryptographic proofs of what it served, when, to whom, and whether the value was modified. The competitive frame shifts from "which cache is fastest" to "which cache can prove what it did." Nobody else can.

Second, it changed the buyer. "Faster Redis" gets evaluated by a senior backend engineer. "Provable infrastructure" gets evaluated by the CISO, the head of compliance, and the VP of Engineering together. The deal size is different. The budget is different. The urgency is different. When a SOC 2 auditor tells a company that their cache is a compliance gap, the urgency to deploy a cache that produces audit evidence is measured in weeks, not quarters.

Third, it changed the retention model. A performance tool is replaceable. If someone builds a faster in-process cache next year, we lose on latency and the customer evaluates switching. An audit system is not replaceable. Once your compliance narrative depends on Cachee's CAB bundles, computation fingerprints, and state transition proofs, switching means rebuilding your entire evidence pipeline. The audit trail is a moat that deepens with every cache entry written.

The Market Positioning Lesson

"Faster Redis" gets you dev tool budgets. "Provable infrastructure" gets you compliance budgets, enterprise procurement, and regulatory conversations. The product is the same. The buyer is different. The budget is 10x larger. And the retention is structural, not just performance-based.

What This Means for You

If you are an engineering leader evaluating cache infrastructure, here is the question you need to ask: can your cache prove what it served?

Not "can your cache log what it served." Logging is writing text to a file. Text files can be modified, deleted, or lost. Logs do not prove anything. They are a narrative. They say "trust us, this is what happened." If your auditor asks for proof and you hand them log files, you are handing them a story, not evidence.

Can your cache prove that the value it returned was the value that was originally stored? Can it prove that the value was not modified between the write and the read? Can it prove which computation produced the value, with which inputs, parameters, and software version? Can it prove the complete history of every state change for every key, with cryptographic integrity guarantees that make tampering detectable? Can it produce a self-contained verification package that a third party can check without trusting your infrastructure?

If the answer to any of these questions is no, then you do not have an audit trail for your cached data. You have a cache that stores bytes and returns bytes, with no accountability, no verifiability, and no evidence. That was acceptable when auditors did not ask about cache infrastructure. They are asking now.

The table below shows what a traditional cache provides versus what Cachee provides as evidence infrastructure.

Property | Traditional Cache (Redis) | Cachee
Read returns | Raw bytes, trust required | Value + 3 PQ signatures + fingerprint
Integrity guarantee | None | Triple PQ signature verification on every read
History | None (overwrite in place) | Hash-chained audit log from genesis
Verifiability | None | Self-contained CAB bundles, offline verifiable
State reconstruction | Impossible | Deterministic replay to any timestamp
Tamper detection | None | Hash chain breaks on any modification
Third-party audit | Requires infrastructure access | CAB bundles verify without network access

The transition from "cache" to "evidence infrastructure" is not a feature upgrade. It is a category shift. A cache stores bytes and returns bytes. Evidence infrastructure stores bytes, proves their provenance, records their history, and enables independent verification. Every Cachee entry is both: a fast cached value and a piece of cryptographic evidence. You get the 31-nanosecond reads and the compliance story. You do not have to choose.

The implications extend beyond your immediate compliance needs. When your cache produces evidence, your incident response changes. A security event involving cached data is no longer "we think the cache contained X at time T based on our logs." It is "here is a cryptographic proof of exactly what the cache contained at time T, verified by three independent signature algorithms, with a complete chain of custody from creation to the present moment." That is a different conversation with your incident response team, your legal counsel, your regulator, and your customers.

When your cache produces evidence, your vendor relationships change. A customer asking "can you prove what data your system processed on our behalf" is no longer a request that triggers a multi-week investigation through log archives. It is a GETVERIFIED call that returns a self-contained proof in nanoseconds. That is a different conversation with enterprise customers who care about data governance, with healthcare organizations that need HIPAA evidence, with financial institutions that need regulatory proof, and with government agencies that need FedRAMP artifacts.

When your cache produces evidence, your architecture changes. You no longer need a separate audit logging system for cached data. You no longer need a log aggregation pipeline to collect cache access events. You no longer need a SIEM integration to monitor cache integrity. The cache is the audit system. The audit evidence is the data structure. The proof is the storage model.

# What an audit system disguised as a cache looks like

# Standard cache operation -- fast
SET user:123:profile '{"name": "Alice", "role": "admin"}' EX 3600
# Response: OK (PQ-signed, fingerprinted, chain-linked as part of the write)

# Standard cache read -- also fast, also verified
GETVERIFIED user:123:profile
# Response: value + ML-DSA-65 sig + FALCON-512 sig + SLH-DSA sig + fingerprint

# Full audit history -- native to the data structure
AUDITLOG user:123:profile
# Response: genesis -> write(t0) -> read(t1) -> read(t2) -> supersede(t3) -> ...

# Self-contained evidence bundle -- offline verifiable
EXPORTCAB user:123:profile
# Response: 24KB CAB bundle with value, signatures, fingerprint, chain, public keys

We did not plan to build an audit system. We planned to build a fast cache that happened to be correct, verifiable, and tamper-evident. Those properties, combined with hash-chaining and Merkle anchoring, turned out to be the exact definition of an audit system. The fact that it runs at 31 nanoseconds per read is what makes it deployable. The fact that it produces cryptographic evidence is what makes it irreplaceable.

If you cannot prove what your system did, you do not have an audit trail. You have a story. Stories are persuasive. Evidence is conclusive. We built infrastructure that produces evidence at cache speed. We just kept calling it a cache because that is how you get it deployed.

The Bottom Line

If you can't prove what your system did, you don't have an audit trail. You have a story. Cachee started as a performance play and became evidence infrastructure. Every entry is PQ-signed, fingerprinted, hash-chained, and Merkle-anchored. Every read is verified. Every history is tamper-evident. Every bundle is self-contained. And it all runs at 31 nanoseconds. That is not a cache with compliance features. That is an audit system with cache performance.

Your cache should produce evidence, not just bytes. Cachee gives you 31ns reads and a cryptographic audit trail in one binary.
