Comparison

Redis vs Valkey in 2026: What Actually Changed?

In March 2024, Redis Ltd. switched Redis from its permissive BSD license to the dual RSALv2/SSPL model — effectively making it proprietary for anyone offering a hosted service. Within weeks, the Linux Foundation announced Valkey, a community-driven fork backed by AWS, Google Cloud, Oracle, Ericsson, and Snap. Two years later, the fork is mature, the ecosystem has split, and engineering teams everywhere are asking the same question: does it actually matter which one I run? Here is what the benchmarks, the architecture, and the production data actually show.

1–3ms P99 Latency (Both)
1.5µs Cachee L1 Lookup
667× L1 vs Network Cache
$0 Migration Cost

What Happened

The timeline is straightforward. In March 2024, Redis Ltd. changed the license on the Redis codebase from the BSD 3-Clause license to a dual RSALv2/SSPL model. Under the new terms, any company offering Redis as a managed service — AWS ElastiCache, Google Cloud Memorystore, Azure Cache for Redis — would need a commercial agreement with Redis Ltd. The practical effect: cloud providers could no longer offer Redis-compatible services based on the open-source codebase without paying licensing fees.

The response was immediate. The Linux Foundation launched Valkey as a BSD-licensed fork of Redis 7.2.4 — the last fully open-source release. AWS migrated ElastiCache to Valkey. Google Cloud followed with Memorystore for Valkey. Oracle, Ericsson, and dozens of contributors from the original Redis community joined the project. By mid-2025, Valkey 8.0 shipped with its first wave of independent features. As of early 2026, Valkey 8.1 is in wide production use, and the project has established its own release cadence, governance model, and roadmap distinct from Redis.

Redis, meanwhile, continued under Redis Ltd. with the proprietary license. Redis 8.x added Redis Stack modules (RediSearch, RedisJSON, RedisTimeSeries) directly into the core distribution, blurring the line between the open-source engine and paid features. Enterprises running self-hosted Redis face license complexity. Teams building managed platforms face a hard choice. And everyone else is caught in the middle, wondering whether this matters for their application at all.

Performance: Head to Head

The honest answer: in terms of raw GET/SET throughput, Redis and Valkey are remarkably similar. They share the same core architecture — single-threaded event loop, epoll/kqueue multiplexing, in-memory hash tables. Valkey 8.x has introduced meaningful improvements in specific areas, but the fundamental performance characteristics are the same because the fundamental architecture is the same.
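To make that shared model concrete, here is a toy sketch of the architecture using Python's stdlib `selectors` module (which wraps epoll on Linux and kqueue on BSD/macOS): a single thread multiplexes every client socket over an in-memory dict, with a simplified text protocol standing in for RESP. This illustrates the model only; both engines are implemented in C and differ in many details.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS
store = {}                         # the in-memory keyspace

def accept(srv):
    # New client: register its socket on the same single-threaded loop.
    conn, _ = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, serve)

def serve(conn):
    # Socket is readable: parse one command and answer from the dict.
    data = conn.recv(1024)
    if not data:
        sel.unregister(conn)
        conn.close()
        return
    parts = data.split()
    if parts and parts[0] == b"SET" and len(parts) == 3:
        store[parts[1]] = parts[2]
        conn.sendall(b"+OK\r\n")
    elif parts and parts[0] == b"GET" and len(parts) == 2:
        value = store.get(parts[1])
        conn.sendall(b"$-1\r\n" if value is None else b"+" + value + b"\r\n")

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, accept)

def run_once(timeout=0.1):
    # One turn of the event loop: wait for ready sockets, dispatch callbacks.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

Every client, every command, one thread: that is why the two engines' baseline latency profiles look so similar.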

Valkey 8.x brought RDMA (Remote Direct Memory Access) support for kernel-bypass networking, which can reduce network latency by 30–40% in data center environments with RDMA-capable NICs. It also improved multi-threaded I/O handling, allowing I/O threads to process commands rather than just handle socket reads and writes. These are real improvements. They push the ceiling higher for peak throughput under heavy concurrent load.
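Threaded I/O is opt-in in both engines via the `io-threads` directive in redis.conf / valkey.conf. The directive itself is real, but defaults and read-path behavior differ across versions, so treat this as a sketch rather than tuning advice:

```conf
# redis.conf / valkey.conf — enable threaded I/O (sketch; tune to core count)
io-threads 4
# Redis additionally gates the read path behind io-threads-do-reads;
# Valkey 8.x threads more of the command path by default.
```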

But here is what the production benchmarks consistently show: P99 latency for both Redis and Valkey sits in the 1–3 millisecond range for typical workloads over a network. Under ideal conditions — same-rack, low contention, small payloads — you might see sub-millisecond median latency. Under real-world conditions — cross-AZ deployments, mixed workloads, persistence enabled, periodic snapshots — both engines reliably deliver P99 in the low single-digit millisecond range. RDMA shaves microseconds off the network path, but the serialization overhead, event loop processing, and TCP/IP stack still dominate the latency budget.

| Metric | Redis 8.x | Valkey 8.1 | Cachee L1 |
|---|---|---|---|
| License | RSALv2 / SSPL | BSD 3-Clause | Commercial |
| Median latency | 0.3–0.8 ms | 0.3–0.8 ms | 1.5 µs |
| P99 latency | 1–3 ms | 1–3 ms | 8 µs |
| Network hops | 1+ (TCP) | 1+ (TCP/RDMA) | 0 (in-process) |
| Multi-core | I/O threads only | I/O threads + cmd processing | Native multi-thread |
| Eviction | LRU / LFU / TTL | LRU / LFU / TTL | AI predictive + TTL |
| Cloud-native support | Self-hosted or Redis Cloud | AWS, GCP, Oracle native | Any cloud, any backend |

The bottom line on raw performance: Valkey's RDMA and multi-threaded I/O are genuine improvements over Redis. But both engines share the same 1ms latency floor imposed by network round-trips, serialization, and event-loop processing. Switching from Redis to Valkey will not move your P99 from 2ms to 2µs. The bottleneck is architectural, not implementation.

The Real Question Neither Answers

The Redis-vs-Valkey debate focuses almost entirely on the wrong axis. License? Matters for compliance and vendor strategy. Throughput? Both handle millions of ops/sec. Community governance? Important for long-term sustainability. But none of these address the fundamental limitation that both engines share: they are reactive, network-bound, key-value stores that serve data after you ask for it.

Both Redis and Valkey use LRU or LFU eviction. Both require the application to make a network round-trip for every cache lookup. Both rely on TTL-based expiration that is semantically disconnected from when data actually changes. Both impose a latency floor of approximately 1 millisecond per operation that no amount of RDMA, io_uring, or multi-threaded I/O will eliminate — because the round-trip itself is the bottleneck.
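The eviction policies themselves are simple to state precisely. Here is a toy Python sketch of the LRU-plus-TTL behavior both engines offer (illustrative only; `LRUTTLCache` is not a real API in either project, and note how the TTL fires on the clock, not when the underlying data changes):

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Toy cache combining LRU eviction with wall-clock TTL expiry."""

    def __init__(self, capacity, ttl_seconds):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # key -> (value, expires_at)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() > expires_at:
            # TTL expiry: purely time-based, disconnected from data changes.
            del self._store[key]
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUTTLCache(capacity=2, ttl_seconds=60)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touching "a" makes "b" the LRU entry
cache.set("c", 3)      # over capacity: "b" is evicted
print(cache.get("b"))  # -> None
```

Nothing in either policy knows anything about your access patterns ahead of time; both react to what already happened.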

If your application needs cache reads in the low microsecond range, neither Redis nor Valkey can deliver that. Not because they are poorly engineered — both are exceptional at what they do — but because what they do requires a network hop. The data leaves your process, crosses a socket, enters another process, gets looked up, serialized, and returned. That sequence has a physics-imposed floor. No software optimization eliminates the speed of light across a network cable or the overhead of TCP/IP processing in the kernel.
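The floor is easy to observe without Redis or Valkey installed. This sketch times a TCP round-trip over loopback, the cheapest possible network path, against an in-process dict lookup; exact numbers vary by machine, but the gap of several orders of magnitude does not:

```python
import socket
import threading
import time

def echo_server(srv):
    # Echo every request back, standing in for a cache server's reply.
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

N = 2000

# Network path: request/response over loopback TCP.
t0 = time.perf_counter()
for _ in range(N):
    cli.sendall(b"GET key\r\n")
    cli.recv(64)
net_us = (time.perf_counter() - t0) / N * 1e6

# In-process path: a plain dict lookup in the same address space.
store = {"key": b"value"}
t0 = time.perf_counter()
for _ in range(N):
    store["key"]
mem_us = (time.perf_counter() - t0) / N * 1e6

print(f"TCP loopback round-trip: ~{net_us:.1f} us/op")
print(f"in-process dict lookup:  ~{mem_us:.3f} us/op")
```

And loopback is the best case: it never even touches a NIC, a switch, or a cross-AZ link.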

The question is not "Redis or Valkey?" The question is: should the cache layer require a network hop at all?

The Third Option: L1 Predictive Layer

Cachee does not replace Redis or Valkey. It sits in front of either one as an in-process L1 memory tier that intercepts cache reads before they ever hit the network. Your existing Redis or Valkey deployment becomes L2 — the durable, shared backing store — while Cachee serves the hot path from the application process’s own memory space in 1.5 microseconds. Zero serialization. Zero TCP. Zero network hops.

This is not a theoretical architecture. Cachee speaks native RESP protocol, which means it works with every Redis and Valkey client library already in your stack. Point your CACHE_HOST at the Cachee proxy, and every GET that hits L1 returns in microseconds. Misses fall through to your existing Redis or Valkey backend transparently. No code changes. No migration. No choosing sides in the license debate.
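RESP itself is a simple text-framed protocol, which is why a proxy can sit transparently between client and backend. Per the public RESP spec, a command is an array of bulk strings; `encode_resp` below is just an illustrative helper showing the wire bytes a client library produces for a GET:

```python
def encode_resp(*parts: bytes) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(parts)]          # array header: element count
    for p in parts:
        out.append(b"$%d\r\n%s\r\n" % (len(p), p))  # bulk string: length, payload
    return b"".join(out)

print(encode_resp(b"GET", b"user:42"))
# b'*2\r\n$3\r\nGET\r\n$7\r\nuser:42\r\n'
```

Any server or proxy that speaks this framing looks identical to every existing Redis and Valkey client library.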

Where Cachee diverges from both Redis and Valkey is in cache intelligence. Both Redis and Valkey are passive: they store what you tell them to store and evict based on generic LRU/LFU heuristics. Cachee’s predictive engine learns your application’s access patterns and pre-loads data into L1 before it is requested. The result is a 99%+ hit rate on the L1 tier, which means 99%+ of your cache reads never touch the network at all. The Redis-vs-Valkey decision becomes irrelevant for the vast majority of your traffic — it only matters for the 1% of cold reads that fall through to L2.
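Cachee's actual prediction model is proprietary; the following is a deliberately crude stand-in that shows only the shape of the idea: count accesses, then pre-load the hottest keys from L2 into L1. `PrefetchingL1` and its plain-dict L2 are hypothetical, not Cachee's API.

```python
from collections import Counter

class PrefetchingL1:
    """Toy L1 tier that warms itself with the most-accessed keys from L2."""

    def __init__(self, l2_backend, hot_set_size=3):
        self.l2 = l2_backend          # dict standing in for Redis/Valkey
        self.l1 = {}                  # in-process hot tier
        self.hits = Counter()         # per-key access counts
        self.hot_set_size = hot_set_size

    def get(self, key):
        self.hits[key] += 1
        if key in self.l1:            # L1 hit: no network hop
            return self.l1[key]
        return self.l2.get(key)       # L1 miss: fall through to L2

    def warm(self):
        # Pre-load the most frequently accessed keys into L1.
        for key, _ in self.hits.most_common(self.hot_set_size):
            if key in self.l2:
                self.l1[key] = self.l2[key]

l2 = {"user:1": "alice", "user:2": "bob", "user:3": "carol"}
cache = PrefetchingL1(l2)
for _ in range(5):
    cache.get("user:1")               # hot key, initially served from L2
cache.warm()
print("user:1" in cache.l1)           # -> True
```

A real predictive engine would act on richer signals than raw frequency, but the architectural point is the same: once the hot set lives in-process, those reads never cross a socket.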

Cachee works with both Redis and Valkey as the L2 backend. Your team can migrate from Redis to Valkey (or stay on Redis) whenever it makes strategic sense. The L1 layer is backend-agnostic. The performance gain — 667× faster cache reads — comes from eliminating the network hop, not from replacing the engine. See the full comparison.

When to Choose What

The decision tree is simpler than the debate makes it seem:

Choose Valkey if you want a fully open-source, BSD-licensed cache engine with strong community governance. If you run on AWS, Valkey is now the default backend for ElastiCache — you may already be running it. Google Cloud Memorystore and Oracle Cloud have native Valkey support. If open-source licensing, vendor neutrality, and long-term community control matter to your organization, Valkey is the clear choice. Its RDMA support and improved multi-threaded I/O also make it the better option for teams pushing raw throughput at scale.

Choose Redis if you depend on Redis Stack modules — RediSearch for full-text search, RedisJSON for native JSON document handling, RedisTimeSeries for time-series data, or RedisGraph (now FalkorDB). These modules are tightly integrated into the Redis 8.x distribution and have no direct equivalents in Valkey yet. If your application architecture depends on these capabilities, Redis remains the only option. Be aware of the licensing implications for managed deployments.

Add Cachee L1 regardless of which you pick. The latency reduction from L1 caching is orthogonal to the Redis-vs-Valkey decision. Whether your L2 backend is Redis, Valkey, Memcached, or DynamoDB DAX, the in-process L1 tier eliminates the network round-trip that dominates your cache latency budget. The engine underneath matters for durability, replication, and data structure support. It does not matter for read latency — because with a properly warmed L1, your reads never reach the engine at all.

# Works with Redis
CACHEE_L2_BACKEND=redis://your-redis-cluster:6379

# Works with Valkey
CACHEE_L2_BACKEND=redis://your-valkey-cluster:6379

# Same 1.5µs L1 reads either way.
# Same RESP protocol. Same client libraries.
# The engine debate disappears at the L1 layer.
