MVCC gives every read a consistent snapshot of the cache state. Write operations proceed concurrently without blocking any reader. At 96 workers on Graviton4, read-path contention drops to zero.
DashMap is fast. But at extreme concurrency with mixed read-write workloads, sharded locks still create measurable jitter. The problem is not speed — it is determinism.
Each write creates a new version of the value. Readers see a consistent snapshot at their read timestamp. Old versions are garbage-collected after all active readers complete. No locks on the read path. Writes are serialized per-key via atomic version counters — not per-shard.
The read path is completely lock-free. A reader acquires the current global epoch (a single atomic load), then traverses the version chain to find the most recent version whose epoch is less than or equal to the reader's. No mutex, no read-write lock, no compare-and-swap retry loop. The reader never waits on any writer.
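The read path above can be sketched in a few lines of Rust. This is an illustrative model only, not Cachee's implementation: VersionNode, GLOBAL_EPOCH, and read are hypothetical names, and the sketch deliberately leaks versions since reclamation is out of scope here.

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, AtomicU64, Ordering};

// One node in a key's version chain, newest first. (Hypothetical layout.)
struct VersionNode {
    epoch: u64,             // epoch at which this version was written
    value: String,          // the stored value
    next: *mut VersionNode, // older version, or null
}

// Global epoch counter; writers bump it, readers snapshot it.
static GLOBAL_EPOCH: AtomicU64 = AtomicU64::new(0);

// Reader: one atomic load for the epoch, then a plain pointer walk.
// No mutex, no read-write lock, no CAS retry loop.
fn read(head: &AtomicPtr<VersionNode>) -> Option<String> {
    let my_epoch = GLOBAL_EPOCH.load(Ordering::Acquire); // the single atomic load
    let mut cur = head.load(Ordering::Acquire);
    while !cur.is_null() {
        // SAFETY: a real implementation relies on epoch-based GC to keep
        // nodes alive while any reader with an older epoch is active.
        let node = unsafe { &*cur };
        if node.epoch <= my_epoch {
            return Some(node.value.clone()); // newest version visible to this reader
        }
        cur = node.next;
    }
    None
}

// Build a two-version chain by hand and read it at two snapshot epochs.
fn snapshot_reads() -> (Option<String>, Option<String>) {
    let v1 = Box::into_raw(Box::new(VersionNode {
        epoch: 1, value: "v1".into(), next: ptr::null_mut(),
    }));
    let v2 = Box::into_raw(Box::new(VersionNode {
        epoch: 2, value: "v2".into(), next: v1,
    }));
    let head = AtomicPtr::new(v2);

    GLOBAL_EPOCH.store(1, Ordering::Release);
    let at_epoch_1 = read(&head); // sees v1: v2 is too new for this snapshot
    GLOBAL_EPOCH.store(2, Ordering::Release);
    let at_epoch_2 = read(&head); // sees v2
    (at_epoch_1, at_epoch_2)
}

fn main() {
    let (a, b) = snapshot_reads();
    println!("epoch 1 snapshot: {:?}, epoch 2 snapshot: {:?}", a, b);
}
```

Note how the reader at epoch 1 still gets a coherent answer even though a newer version already exists at the head of the chain — that is the snapshot property.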
DashMap is mostly lock-free for reads — until a concurrent write to the same shard acquires the write lock. MVCC removes the “mostly”. Reads are unconditionally non-blocking regardless of concurrent write activity.
Writes create a new version and swap the head pointer via atomic CAS (compare-and-swap). Serialization is per-key, not per-shard. Two writers updating different keys in the same shard proceed in parallel with zero coordination. This is a fundamental improvement over shard-level write locks.
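The publication step can be sketched as a standard CAS loop on the key's head pointer. Again a hedged model, assuming the same hypothetical version-chain layout (the names and the GC-free memory handling are illustrative, not Cachee's actual code):

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Hypothetical version node; values are u64 here for brevity.
struct VersionNode {
    epoch: u64,
    value: u64,
    next: *mut VersionNode, // older version, or null
}

static GLOBAL_EPOCH: AtomicU64 = AtomicU64::new(0);

// Writer: allocate the new version off to the side, then publish it with a
// single CAS on this key's head pointer. A retry happens only when another
// writer to the *same key* won the race; writers to other keys never interact.
fn write(head: &AtomicPtr<VersionNode>, value: u64) {
    let epoch = GLOBAL_EPOCH.fetch_add(1, Ordering::AcqRel) + 1;
    let node = Box::into_raw(Box::new(VersionNode { epoch, value, next: ptr::null_mut() }));
    let mut cur = head.load(Ordering::Acquire);
    loop {
        unsafe { (*node).next = cur }; // link to the previous newest version
        match head.compare_exchange(cur, node, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_) => return,       // published
            Err(now) => cur = now, // lost the race: relink and retry
        }
    }
}

// Walk the chain and count retained versions (GC is omitted in this sketch).
fn chain_len(head: &AtomicPtr<VersionNode>) -> usize {
    let mut n = 0;
    let mut cur = head.load(Ordering::Acquire);
    while !cur.is_null() {
        n += 1;
        cur = unsafe { (*cur).next };
    }
    n
}

// Four threads hammer the same key; every write lands exactly once.
fn concurrent_writes() -> usize {
    let head = Arc::new(AtomicPtr::new(ptr::null_mut::<VersionNode>()));
    let handles: Vec<_> = (0..4u64)
        .map(|t| {
            let h = Arc::clone(&head);
            thread::spawn(move || {
                for i in 0..100u64 {
                    write(&h, t * 100 + i);
                }
            })
        })
        .collect();
    for j in handles {
        j.join().unwrap();
    }
    chain_len(&head)
}

fn main() {
    println!("versions retained after 4 x 100 writes: {}", concurrent_writes());
}
```

The CAS loop is the entire coordination cost of a write: no shard lock is taken, so a reader arriving at any point sees either the old head or the new one, both of which are complete, valid versions.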
Write latency increases by approximately 0.001ms, or 1µs (the cost of allocating a new version struct and performing the atomic swap). For workloads where write latency is not the bottleneck — which is most of them — this is invisible.
Each key maintains a version chain from newest to oldest. Old versions are garbage-collected when all active readers have advanced past them.
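The reclamation rule can be modeled in a simplified, single-threaded form: a version becomes reclaimable as soon as a newer version is already visible to the oldest active reader, because no current or future reader can ever reach it again. This Vec-based sketch is illustrative only (the real chain is a lock-free linked structure, and the minimum reader epoch would be tracked across threads):

```rust
// A version chain entry; the chain is ordered newest-first.
#[derive(Debug, PartialEq)]
struct Version {
    epoch: u64,
    value: &'static str,
}

// min_reader_epoch is the snapshot epoch of the oldest still-active reader.
// Everything older than the newest version visible to that reader is dead.
fn gc(chain: &mut Vec<Version>, min_reader_epoch: u64) {
    if let Some(idx) = chain.iter().position(|v| v.epoch <= min_reader_epoch) {
        chain.truncate(idx + 1); // drop all versions older than that one
    }
}

fn main() {
    let mut chain = vec![
        Version { epoch: 5, value: "v5" },
        Version { epoch: 3, value: "v3" },
        Version { epoch: 1, value: "v1" },
    ];
    // The oldest active reader took its snapshot at epoch 3: v3 must stay
    // (that reader still sees it), but v1 is unreachable and is reclaimed.
    gc(&mut chain, 3);
    println!("{:?}", chain);
}
```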
Retention is controlled by mvcc.max_versions (default: 2 versions per key). Higher values allow readers with older snapshots to continue operating, at the cost of memory.

| Keys | Versions per Key | Version Overhead |
|---|---|---|
| 1M | 2 | 48 MB |
| 10M | 2 | 480 MB |
| 10M | 4 | 960 MB |
| 100M | 2 | 4.8 GB |
For 10M keys with 2 versions each, overhead is ~480MB. This is the cost of zero read contention. For workloads where microsecond-level P99 determinism matters, the tradeoff is overwhelmingly positive.
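The table rows follow from a simple multiplication, using the roughly 24 bytes of per-version overhead (pointer, timestamp, epoch) cited elsewhere in this document:

```rust
// Back-of-the-envelope sizing for MVCC version overhead.
// Each retained version costs ~24 bytes: pointer + timestamp + epoch.
fn version_overhead_bytes(keys: u64, versions_per_key: u64) -> u64 {
    const BYTES_PER_VERSION: u64 = 24;
    keys * versions_per_key * BYTES_PER_VERSION
}

fn main() {
    // 10M keys x 2 versions -> 480 MB, matching the table above.
    println!("{} MB", version_overhead_bytes(10_000_000, 2) / 1_000_000);
    // 100M keys x 2 versions -> 4.8 GB.
    println!("{} MB", version_overhead_bytes(100_000_000, 2) / 1_000_000);
}
```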
96 workers, Graviton4 (c8g.metal-48xl), 30% write ratio. The P50 is unchanged. The P99 drops by 55%.
| Metric | Without MVCC (DashMap) | With MVCC |
|---|---|---|
| Read Latency (P50) | ~1.5µs | ~1.5µs (unchanged) |
| Read Latency (P99, 30% writes) | ~4µs (shard contention) | ~1.8µs (zero contention) |
| Write Latency | 0.013ms | 0.014ms (+0.001ms version creation) |
| P99 Jitter Reduction | — | 55% |
| Read-Path Locks | Per-shard RwLock | Zero (completely lock-free) |
MVCC is not for every workload. It is for workloads where microsecond-level P99 determinism is a requirement, not a nice-to-have.
MVCC is not a standalone feature. It is a concurrency layer that composes with every other Cachee primitive to eliminate contention at every level.
MVCC (Multi-Version Concurrency Control) is a concurrency technique where each write creates a new version of a value instead of overwriting it in place. Readers see a consistent snapshot at their read timestamp and never block on concurrent writes. This eliminates read-write contention entirely, which is critical for high-concurrency workloads like HFT, ML feature stores, and IoT state management.
Yes. Each additional version of a key adds approximately 24 bytes of overhead (pointer, timestamp, epoch). With the default configuration of 2 versions per key, 10 million keys add roughly 480 MB of version overhead. Old versions are garbage-collected automatically via epoch-based GC once all active readers have advanced past them. The overhead is configurable via mvcc.max_versions.
No. MVCC is transparent to the client. GET, SET, HGET, HSET, and all other commands work identically. The only new commands are CONFIG SET mvcc.enabled true, CONFIG SET mvcc.max_versions 2, and CONFIG SET mvcc.gc_interval_us 100 for enabling and tuning the feature. Existing applications require zero code changes.
DashMap divides keys into shards, each protected by a read-write lock. Reads within the same shard as a concurrent write must wait for the write lock to release. At high worker counts (64-96+), same-shard collisions become statistically frequent and add 1-4 microseconds of P99 jitter. MVCC eliminates this entirely: readers acquire an epoch number (a single atomic load) and read from the version chain without any lock. The read path is completely lock-free, not just “mostly lock-free.”
Yes. Three configuration parameters control MVCC behavior: mvcc.enabled (true/false) turns the feature on or off, mvcc.max_versions (default: 2) controls how many versions are retained per key before garbage collection, and mvcc.gc_interval_us (default: 100) sets how often the background GC thread scans for reclaimable versions. All three can be changed at runtime via CONFIG SET.
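For reference, the three runtime commands named above look like this when issued against a running instance:

```
CONFIG SET mvcc.enabled true
CONFIG SET mvcc.max_versions 2
CONFIG SET mvcc.gc_interval_us 100
```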
Zero-contention reads under concurrent writes. Consistent snapshots at every read. Epoch-based garbage collection with zero reader impact. One config flag to enable.