
ZK-STARK Proof Reuse: Re-Verification Is Waste

May 1, 2026 | 16 min read | Engineering

A STARK proof is a mathematical statement. It says: "this computation was performed correctly, and here is the evidence." Once a verifier checks that evidence and confirms it is valid, the statement is true. It was true before the verifier checked. It will be true after the verifier checks. It does not become more true if a second verifier checks it. And it does not become less true if a thousand verifiers decline to check it. The truth of the proof is a property of the proof itself, not a property of how many times it has been verified.

Yet in every production system that uses STARK proofs, the same proof is verified multiple times. Every node in a rollup re-verifies the same batch proof. Every bridge validator re-verifies the same cross-chain proof. Every microservice in an authentication pipeline re-verifies the same attestation proof. Each re-verification takes 15-25 microseconds over Goldilocks fields, or 100-150 microseconds over 256-bit fields. Each re-verification produces the same answer: valid. Each re-verification consumes CPU cycles that could be spent on actual computation. The aggregate waste is staggering, and it is entirely preventable.

25 us full verification · 85 ns cached result · 294x waste eliminated

Where Re-Verification Happens

Re-verification is not a design choice. It is an emergent property of distributed systems where multiple independent components need to confirm the same fact. Each component has a legitimate reason to verify: it does not trust the component that preceded it. This distrust is architecturally correct. But the computational consequence -- the same mathematical check performed hundreds or thousands of times -- is pure waste.

Rollup Nodes

In a ZK-rollup, the sequencer produces a STARK proof that a batch of transactions was executed correctly. This proof is posted to the base layer (Ethereum, for most rollups). Every full node on the base layer verifies the proof independently. For Ethereum, that means approximately 7,000-10,000 nodes each running the same 25-microsecond verification on the same proof bytes. Taking the low end, the global compute cost is 7,000 * 25 microseconds = 175 milliseconds of aggregated CPU time. For a single proof. A rollup that posts one proof per minute produces 175 milliseconds * 60 = 10.5 seconds of aggregated re-verification per hour. A rollup that posts one proof per 12 seconds (every Ethereum block) produces 175 milliseconds * 300 = 52.5 seconds of aggregated re-verification per hour.

Each individual node spends a trivial amount of time on verification: 25 microseconds per proof. But the global waste is the sum across all nodes, and that sum is non-trivial. More importantly, from any individual node's perspective, if it has already verified this proof (because it received the proof via gossip before it appeared in a block), the second verification is redundant. The node already knows the answer. It verified the proof 30 seconds ago. Nothing has changed. Yet the consensus protocol requires it to verify again because the protocol does not account for prior verification.

Bridge Validators

Cross-chain bridges use STARK proofs (or SNARK proofs, but the same principle applies) to attest that a state transition occurred on the source chain. Bridge validators on the destination chain must verify this proof before accepting the bridged state. A bridge with 15 validators means each cross-chain proof is verified 15 times. If the bridge processes 1,000 cross-chain transactions per hour, each with its own proof, the total verification count is 15,000 per hour. At 25 microseconds each, that is 375 milliseconds per hour -- modest for a single bridge, but multiplied across dozens of bridges in the ecosystem, the waste accumulates.

The re-verification pattern in bridges is particularly wasteful because bridge validators often run on the same hardware in the same data center. They receive the same proof bytes, run the same verification algorithm, and produce the same result. The only difference is the private key they use to sign their attestation after verification. The verification itself is identical across all validators. If the first validator to verify the proof could share its verification result with the other 14 validators in a trustworthy way, the bridge would save 14 * 25 microseconds per proof. This is exactly what proof reuse enables.

Microservices Authentication

In a microservices architecture, authentication proofs are the most frequently re-verified STARK proofs. A user authenticates once, producing a STARK-attested session token. Every subsequent request passes through multiple services, each of which must verify the token before processing the request. A typical microservices architecture has 5-15 services in the critical path. Each service verifies the same authentication proof independently.

At 10,000 concurrent users making 3 requests per second, with 8 services per request, the system performs 240,000 proof verifications per second. At 25 microseconds per verification, that requires 6 CPU cores dedicated to proof verification. With a 90% cache hit rate (the same user's proof is verified by the same service multiple times per second), caching reduces this to 0.6 CPU cores. The 5.4 freed CPU cores handle actual application logic instead of re-confirming mathematical facts that were established microseconds ago.

The Cumulative Cost

The cost of re-verification is not measured in individual microseconds. It is measured in aggregate CPU-seconds per second of wall-clock time. This metric captures the true waste: how much computation is the system performing that produces no new information.

Scenario                  Proofs/sec     Verifications/proof   Total verify/sec   CPU-sec/sec wasted
Rollup (1,000 nodes)      0.08 (1/12s)   1,000                 83                 0.002
Bridge (15 validators)    0.28           15                    4.2                0.0001
Microservices (8 svc)     30,000         8                     240,000            6.0
API gateway (1 svc)       100,000        1                     100,000            2.5
Multi-tenant SaaS         500,000        3                     1,500,000          37.5

The microservices and multi-tenant SaaS rows reveal the scale of the problem. A multi-tenant SaaS platform processing 500,000 proofs per second, with each proof verified by 3 services (gateway, authorization, business logic), generates 1.5 million verifications per second. At 25 microseconds each, that is 37.5 CPU-seconds of verification per second of wall-clock time. You need 38 CPU cores dedicated entirely to re-verifying proofs that have already been verified. With caching at 90% hit rate, you need 3.75 CPU cores. Caching frees 34 CPU cores from redundant verification duty.
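The waste column is a one-line product of the table's other columns. A minimal sketch of the arithmetic (the helper name and constant are illustrative, not part of any real API):

```rust
// Redundant-verification waste: CPU-seconds burned per wall-clock second.
fn wasted_cpu_sec_per_sec(proofs_per_sec: f64, verifications_per_proof: f64) -> f64 {
    const VERIFY_TIME_S: f64 = 25e-6; // 25 us per full STARK verification
    proofs_per_sec * verifications_per_proof * VERIFY_TIME_S
}
```

Plugging in the SaaS row (500,000 proofs/sec, 3 verifications each) gives 37.5 CPU-sec/sec, and the microservices row (30,000 proofs/sec, 8 services) gives 6.0, matching the table.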

The Scale of Redundancy

In production systems with proof-based authentication, 80-95% of all STARK verifications are redundant. The same proof is verified by the same service multiple times during its validity window. Each redundant verification consumes 25 microseconds of CPU time and produces identical results. At scale, this redundancy consumes dozens of CPU cores. Re-verification is not cautious engineering. It is computational waste masquerading as security.

The Solution: Verify Once, Cache the Fact

The proof reuse architecture is built on a single principle: a verified proof is a fact. Facts do not need to be re-established. They need to be remembered and shared.

The architecture has four components: the verifier (runs the STARK verification pipeline), the fingerprint function (produces a unique identifier for the proof + verification context), the cache (stores fingerprint-to-result mappings), and the reuse protocol (determines when and how cached results are shared across consumers).

The Verification Pipeline

// Proof reuse pipeline: fingerprint, cache lookup, verify on miss
fn verify_or_reuse(
    proof: &StarkProof,
    vk: &VerificationKey,
    params: &VerificationParams,
) -> VerificationResult {
    // Step 1: Fingerprint binds the result to proof, key, and parameters
    let mut hasher = Sha3_256::new();
    hasher.update(proof.commitments());
    hasher.update(vk.hash());
    hasher.update(params.canonical_bytes());
    let fp = Fingerprint::from(hasher.finalize());

    // Step 2: Check the cache
    if let Some(cached) = CACHE.get(&fp) {
        return cached.clone();  // ~85 ns: reuse the fact
    }

    // Step 3: Verify (only on first encounter)
    let result = stark_verify(proof, vk, params);

    // Step 4: Cache the fact, bounded by the proof's validity window
    CACHE.insert(fp, result.clone(), ttl_from(proof));

    result  // ~25 us on first verify, ~85 ns thereafter
}

The pipeline is intentionally simple. Complexity in a security-critical path is a liability. Every component has a single responsibility. The fingerprint function produces a collision-resistant identifier. The cache stores and retrieves verification results. The verifier runs the mathematical checks. The only new components compared to a non-caching verifier are the fingerprint function (50 nanoseconds) and the cache lookup (35 nanoseconds). On cache hit, the system returns the cached result in 85 nanoseconds. On cache miss, the system runs the full verification, caches the result, and returns it in approximately 25.1 microseconds. The overhead on misses is 0.34%. The speedup on hits is 294x.

The Trust Model

The most important question in any caching system is: "how do I trust the cached result?" For a generic cache, the answer is usually "you trust the cache infrastructure." For proof verification caching, that answer is insufficient. If the cache can lie, the entire security model collapses. An attacker who can insert "valid" into the cache for an invalid proof has bypassed all cryptographic protection.

The proof reuse architecture does not ask you to trust the cache. It asks you to trust three things, all of which you already trust in a non-caching system.

First, you trust the verification algorithm. This is the same STARK verification algorithm you would run without caching. It checks FRI layer consistency, constraint evaluation, Merkle path authentication, and OOD sampling. If the algorithm is correct (which it is, by mathematical construction and extensive formal analysis), its output is correct.

Second, you trust SHA3-256 collision resistance. The computation fingerprint is a SHA3-256 hash over the proof commitments, verification key, and domain parameters. If an attacker can find two different (proof, vk, params) tuples that produce the same fingerprint, they can cause a cache collision -- the cache would return the verification result for one tuple when queried with the other. Finding such a collision requires approximately 2^128 operations, which is beyond the reach of any classical or quantum computer for the foreseeable future. This is the same SHA3-256 that secures the Merkle tree inside the STARK proof itself.
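The fingerprint's security also depends on unambiguous input framing: the hash must see where commitments end and parameters begin. A sketch of that framing, with std's `DefaultHasher` standing in for SHA3-256 (SHA3 is not in the Rust standard library; a production verifier would use a real SHA3-256 implementation), and all names illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Each field is length-prefixed so the (commitments, vk, params) boundaries
// are unambiguous: ("ab", "c") and ("a", "bc") must never produce the same
// fingerprint. DefaultHasher is a stand-in for SHA3-256 here.
fn fingerprint(commitments: &[u8], vk_hash: &[u8], params: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    for field in [commitments, vk_hash, params] {
        (field.len() as u64).hash(&mut h); // explicit length prefix
        h.write(field);
    }
    h.finish()
}
```

Shifting a byte across a field boundary changes the framed input, so the two tuples hash to different fingerprints even though their concatenated bytes are identical.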

Third, you trust your process memory. The cache is an in-process data structure. If an attacker can modify your process memory, they can also modify the verification algorithm, the verification key, the proof bytes, or the return value of the verification function. In-process memory corruption is a catastrophic security failure regardless of caching. The cache does not create a new attack surface; it operates within an existing trust boundary.

The cache does not introduce any new trust assumptions. It accelerates an operation (verification) by remembering its result (the verification outcome) using a mechanism (SHA3-256 fingerprinting) that is already trusted for the operation's internal security (Merkle tree authentication). The cache is not a trust shortcut. It is a computation shortcut with the same trust properties as the original computation.

What If Someone Asks: "Prove It Was Really Verified"

A skeptical consumer of a cached verification result might ask: "how do I know this proof was actually verified, and not just looked up in a cache that might be wrong?" The answer is: you can re-verify it yourself. The cache entry contains the fingerprint, which is derived from the proof commitments and verification key. Given the fingerprint, you can retrieve the original proof and verification key (they are not stored in the cache, but they are available from the proof's source -- the prover, the blockchain, the attestation service), re-compute the fingerprint, confirm it matches the cached fingerprint, and then run the full STARK verification yourself to confirm the result.

This is the key property of proof reuse: the cache does not prevent anyone from verifying. It prevents redundant verification. If you trust the cache, you save 25 microseconds. If you do not trust the cache, you verify and pay 25 microseconds, exactly as you would without the cache. The cache is a performance optimization for the common case (trust) with a graceful fallback for the exceptional case (distrust). No security is lost in either path.
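The two paths collapse into a few lines of policy code. A sketch, where `full_verify` is a placeholder closure for the 25-microsecond STARK check rather than any real verifier API:

```rust
// A cached result is only a shortcut when the consumer trusts the cache.
// Otherwise the full verification runs, exactly as it would without caching.
fn verify_or_trust<F: FnOnce() -> bool>(
    cached: Option<bool>, // cache lookup result, if any
    trust_cache: bool,    // consumer policy, not cryptography
    full_verify: F,       // the ~25 us STARK verification
) -> bool {
    match cached {
        Some(result) if trust_cache => result, // ~85 ns path
        _ => full_verify(),                    // ~25 us path, no security lost
    }
}
```

A distrustful consumer sets `trust_cache` to false and pays exactly the cache-free cost; a cache miss falls through to full verification regardless of policy.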

Proof Reuse Across Services

The simplest form of proof reuse is within a single process: the same proof is verified once and the result is reused for subsequent requests within the same process. This eliminates re-verification within a single service instance. But the larger opportunity is cross-service reuse: sharing verification results across multiple services that all need to verify the same proof.

Cross-service proof reuse requires a shared cache or a verification attestation protocol. The shared cache approach stores verification results in a location accessible to multiple services (a shared memory segment, a local Unix socket, or in the most performant case, a memory-mapped file that multiple processes can read). The verification attestation approach has the first verifier sign the verification result with its service key, producing a lightweight attestation that other services can verify using the first verifier's public key.

Shared Cache Approach

In the shared cache approach, all services on the same host share a single verification cache backed by a memory-mapped file. When service A verifies a proof and caches the result, service B can look up the same fingerprint and receive the cached result without performing its own verification. The memory-mapped file ensures that the cache survives service restarts and is accessible to all services regardless of their runtime language or framework.

The shared cache is protected by the same trust model as the in-process cache: all services sharing the cache are within the same trust boundary (same host, same operating system, same security perimeter). A compromised service on the host can corrupt the shared cache, but a compromised service can also directly attack the other services on the same host. The shared cache does not expand the attack surface.

# Start Cachee with shared verification cache
cachee init --stark-cache-size 1000000 --shared

# Service A verifies and caches
cachee verify --proof /path/to/proof.bin --vk /path/to/vk.bin
# Output: VALID (verified in 25.1us, cached)

# Service B checks the same proof (different process)
cachee verify --proof /path/to/proof.bin --vk /path/to/vk.bin
# Output: VALID (cache hit in 85ns)

Verification Attestation Approach

For cross-host proof reuse, a verification attestation protocol is required. When service A verifies a proof, it produces a signed attestation: "I, service A, verified proof with fingerprint F at time T, and the result was VALID." This attestation is signed with service A's private key and can be verified by any service that has service A's public key. The attestation is lightweight (fingerprint + result + timestamp + signature = approximately 100-200 bytes) and can be transmitted alongside the proof itself.

The trust model for verification attestation is different from the shared cache model. Instead of trusting the cache infrastructure, you trust the attesting service. Service B must decide: "do I trust service A's verification result, or do I want to verify independently?" This is a policy decision, not a cryptographic one. If service A is a dedicated verification service operated by the same organization, trust is reasonable. If service A is an external party, service B should verify independently and only use the attestation as a performance hint (check the attestation first; if it says "valid," verify anyway but with lower priority; if it says "invalid," verify immediately with high priority).

The attestation can be signed using post-quantum signatures for forward security. A FALCON-512 signature adds 690 bytes to the attestation but ensures that the attestation cannot be forged even by a quantum-capable attacker. An ML-DSA-65 signature adds 3,309 bytes but provides FIPS 204 compliance. The choice of attestation signature scheme is independent of the STARK proof's field or constraint system.
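A hypothetical wire layout for the attestation body, consistent with the 100-200 byte estimate above (field names are illustrative; the signature is appended separately by whichever scheme is configured):

```rust
// Attestation body: fingerprint (32 B) + result (1 B) + timestamp (8 B)
// = 41 bytes before the signature (FALCON-512, ML-DSA-65, ...) is appended.
struct Attestation {
    fingerprint: [u8; 32], // SHA3-256 computation fingerprint
    valid: bool,           // verification result
    timestamp_secs: u64,   // when the verification ran
}

impl Attestation {
    fn encode(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(41);
        out.extend_from_slice(&self.fingerprint);
        out.push(self.valid as u8);
        out.extend_from_slice(&self.timestamp_secs.to_be_bytes());
        out
    }

    fn decode(bytes: &[u8]) -> Option<Self> {
        if bytes.len() != 41 {
            return None;
        }
        let mut fingerprint = [0u8; 32];
        fingerprint.copy_from_slice(&bytes[..32]);
        let valid = bytes[32] == 1;
        let mut ts = [0u8; 8];
        ts.copy_from_slice(&bytes[33..41]);
        Some(Attestation { fingerprint, valid, timestamp_secs: u64::from_be_bytes(ts) })
    }
}
```

The fixed-width body is what gets signed; a consumer re-derives the fingerprint from the proof bytes before accepting the attestation, so the attestation cannot be replayed against a different proof.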

The Economics of Proof Reuse

The value of proof reuse can be quantified in CPU-seconds saved per second, which translates directly to infrastructure cost savings. The formula is straightforward.

CPU-seconds saved/sec = verifications/sec * hit_rate * verification_time

Example:
  verifications/sec = 1,000,000
  hit_rate = 0.90
  verification_time = 25us

  CPU-seconds saved/sec = 1,000,000 * 0.90 * 0.000025 = 22.5

  At $0.034/vCPU-hour (AWS Graviton4 spot):
  Cost saved/hour = 22.5 * $0.034 = $0.77/hour
  Cost saved/month = $554/month
  Cost saved/year = $6,650/year

At one million verifications per second, proof reuse saves approximately $6,650 per year in compute costs. This is a conservative estimate because it only accounts for the CPU cost of verification, not the opportunity cost of the freed CPU cores (which can serve additional application requests, increasing revenue) or the latency improvement for end users (which can improve conversion rates and user satisfaction).

At ten million verifications per second (a large multi-tenant SaaS platform or a high-throughput blockchain validator), the savings scale linearly: $66,500 per year. At 100 million verifications per second (a global CDN edge verification network), $665,000 per year. The cache infrastructure cost -- approximately 41 MB per million entries for the in-process DashMap -- is negligible compared to these savings.
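The 41 MB per million entries figure is consistent with a packed entry of fingerprint, result, and expiry. A sketch of such a layout (field names illustrative; the real map adds per-entry bookkeeping on top of this payload):

```rust
// Packed cache entry: 32-byte fingerprint + 1-byte result + 8-byte expiry
// = 41 bytes, so one million entries is 41 MB of payload before map overhead.
#[repr(C, packed)]
struct CacheEntry {
    fingerprint: [u8; 32],
    valid: u8,
    expires_at_secs: u64,
}
```

Without `packed`, alignment padding would round the entry up to 48 bytes; the packed layout trades aligned access for the smaller footprint.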

When Not to Reuse

Proof reuse is not universally applicable. There are scenarios where re-verification is correct and caching is inappropriate.

Adversarial environments with untrusted provers. If you do not trust the entity that produced the proof, you should verify it yourself regardless of what any cache or attestation says. The cache might be poisoned by a previous interaction with a malicious prover who submitted a carefully crafted proof that passes verification under one set of parameters but should fail under another. The computation fingerprint mitigates this risk (different parameters produce different fingerprints), but in a high-security adversarial environment, independent verification is the correct policy.

Regulatory requirements for independent verification. Some regulatory frameworks require that each participant in a financial transaction independently verify cryptographic proofs. "I checked the cache and it said valid" may not satisfy an auditor who requires evidence of independent mathematical verification. In these environments, each participant must run the full verification pipeline, though they can cache the result internally for their own subsequent use.

Proofs with side effects. Some proof verification protocols have side effects: they update a nonce counter, they record the verification in a tamper-evident log, they trigger downstream state changes. In these cases, the verification is not a pure function, and caching the result skips the side effects. If the side effects are security-relevant (for example, incrementing a replay counter to prevent proof reuse), caching the verification result without executing the side effects creates a vulnerability. The fingerprint-based cache should only be used for pure verification functions with no side effects.

First verification of a novel proof. Obviously, a proof that has never been seen before cannot be served from cache. The cache only helps on the second and subsequent encounters with the same proof. Systems that primarily process unique proofs (batch verification pipelines, archival verification, one-time audit checks) gain no benefit from caching. The cache overhead (85 nanoseconds per miss for fingerprint computation and cache lookup) is wasted in these scenarios, though the waste is small enough to be operationally irrelevant.

Proof Reuse Architecture

The complete proof reuse architecture integrates the verification pipeline, the fingerprint function, the cache, and the reuse protocol into a cohesive system. The architecture handles both in-process reuse (same service, multiple requests) and cross-service reuse (multiple services, same proof).

                    Proof arrives
                         |
                    Compute fingerprint
                    (50ns SHA3-256)
                         |
                 Check in-process cache
                    (35ns DashMap)
                    /            \
               HIT                MISS
           (85ns total)            |
               |              Full STARK verify
          Return result       (15-25us)
                                   |
                              Cache result
                              (10ns insert)
                                   |
                           Emit attestation
                           (optional, for
                            cross-service)
                                   |
                              Return result

The attestation emission step is optional and only enabled when cross-service proof reuse is configured. The attestation is signed asynchronously (off the hot path) and published to a shared cache or message bus. Other services subscribe to the attestation feed and pre-populate their own in-process caches with the attested results. This creates a cascade effect: the first service to verify a proof pays the full 25-microsecond cost, and all subsequent services serve the result from their pre-populated local caches at 35 nanoseconds (no fingerprint computation needed because the fingerprint is included in the attestation).

The Bottom Line

A verified STARK proof is a fact. Re-verifying it produces the same answer and wastes 25 microseconds of CPU time. In production systems, 80-95% of verifications are redundant. Proof reuse eliminates this waste by caching the verification result with a cryptographic fingerprint that binds it to the exact proof and parameters. The cache introduces no new trust assumptions: it relies on SHA3-256 collision resistance (already trusted inside the STARK itself) and process memory integrity (already assumed by the verification algorithm). Verify once. Cache the fact. Serve the truth at 85 nanoseconds. Re-verification is waste. Stop doing it.

Eliminate redundant STARK re-verification. Cache verified proofs at 85 nanoseconds.

brew install cachee