
Valkey vs Redis 2026: What Actually Changed

April 27, 2026 | 14 min read | Engineering

In March 2024, Redis Ltd. changed the Redis license from BSD-3-Clause to a dual license combining the Redis Source Available License (RSALv2) and the Server Side Public License (SSPLv1). Within days, the Linux Foundation announced Valkey, a community fork of Redis 7.2.4 under the BSD-3-Clause license. The internet treated this as a seismic event. Two years later, it is worth asking: what actually changed for the engineers who use these systems every day?

The honest answer: less than you think. Valkey is Redis. It is the same codebase, the same architecture, the same protocol, and the same limitations. What changed is who controls the project and under what license. For most engineering teams, this is a legal and procurement question, not a technical one. But the conversation around Valkey vs Redis has obscured a more important question: whether either system is the right architecture for your caching needs in 2026.

Same core architecture. Same RESP protocol. Same performance ceiling.

What Valkey Actually Is

Valkey is a fork of Redis 7.2.4, the last version released under the BSD-3-Clause license. The fork is maintained by the Linux Foundation with contributions from AWS, Google Cloud, Oracle, Ericsson, and Snap. The project has an open governance model with a Technical Steering Committee that oversees development direction.

The codebase at fork time was identical to Redis 7.2.4. Since the fork, the Valkey community has diverged slightly: bug fixes, minor feature additions, and some internal refactoring. But the core architecture is unchanged. Valkey uses the same single-threaded event loop for command processing. It uses the same RESP (Redis Serialization Protocol) for client-server communication. It uses the same data structures internally: SDS strings, ziplists, quicklists, hash tables, skip lists. It uses the same RDB and AOF persistence mechanisms. It uses the same replication protocol for primary-replica setups. It uses the same cluster protocol for sharding.

If you run a Redis 7.2 workload against Valkey, the performance will be identical. Not "similar." Identical. The same code is executing the same instructions on the same data structures. Benchmarks confirm this repeatedly, and they will continue to confirm it until one of the projects makes a fundamental architectural change -- which neither has done as of April 2026.

What Changed: The License

The only change that matters for most teams is the license. Redis is now dual-licensed under RSALv2 and SSPLv1. Valkey remains BSD-3-Clause. Here is what each license means in practical terms.

BSD-3-Clause (Valkey): You can use, modify, and distribute Valkey for any purpose, including embedding it in proprietary products and offering it as a managed service. There are no restrictions on commercial use. This is the license Redis had from 2009 to 2024. It is the license that enabled AWS ElastiCache, Google Memorystore, Azure Cache, and dozens of other managed Redis services.

RSALv2 + SSPLv1 (Redis): You can use Redis internally for any purpose. You can modify it for your own use. You cannot offer Redis as a managed service (a hosted service whose primary purpose is providing Redis functionality) without a commercial agreement with Redis Ltd. You cannot embed Redis in a product that competes with Redis products.

Who the License Change Affects

The license change affects three categories of organizations. First, cloud providers who offer managed Redis services. AWS, Google Cloud, and Azure all offered managed Redis services under the BSD license. Under the new license, they cannot offer managed Redis without a commercial agreement. This is why they backed Valkey -- it gives them a BSD-licensed codebase they can continue offering as a managed service.

Second, companies that embed Redis in their products. If you sell a product that includes Redis as a component (an appliance, a SaaS platform that bundles Redis as its data store), the new license may restrict your use. You should consult your legal team about whether your use case falls under the "competing product" restriction.

Third, companies that offer Redis-as-a-service to their customers. If you run a managed Redis offering for your clients (even internally), the new license may apply. Again, consult your legal team.

Who the License Change Does NOT Affect

If you use Redis as a cache, session store, message broker, or data store in your own application -- and you are not offering Redis itself as a service -- the license change does not affect you. You can continue using Redis under the new license for all internal purposes. The SSPL/RSALv2 restrictions apply to offering Redis as a product, not to using Redis in your product.

What Did Not Change: The Architecture

Both Redis and Valkey share the same fundamental architecture: a single-threaded event loop that processes commands sequentially. This architecture has specific performance characteristics that are worth understanding, because they determine whether either system is the right choice for your use case.

The Single-Threaded Event Loop

Redis (and Valkey) process commands on a single thread. When a client sends a command, it is queued in the event loop and processed in order. This design provides several benefits: no lock contention, no race conditions, atomic command execution without explicit transactions, and simple reasoning about consistency. It also imposes a hard ceiling: a single Redis instance can process approximately 100,000-200,000 simple commands per second, depending on the command complexity and hardware. Complex commands (ZRANGEBYSCORE with large ranges, LRANGE over long lists, KEYS with pattern matching) can block the event loop and reduce throughput for all clients.

Redis 6 added multi-threaded I/O for reading and writing network buffers, but command processing remains single-threaded. This helps with network-bound workloads (large values, many concurrent connections) but does not change the fundamental throughput ceiling for command processing. Valkey inherits this same threading model.
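The multi-threaded I/O is opt-in. It is enabled with the io-threads directive in redis.conf (Valkey inherits the same directive in valkey.conf); the thread count below is illustrative, not a recommendation:

```conf
# Use 4 threads for writing responses to sockets (command execution
# stays on the main thread regardless of this setting)
io-threads 4

# Also use the I/O threads for reading and parsing incoming requests
io-threads-do-reads yes
```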

The Network Overhead

Every Redis/Valkey operation involves a network round-trip. The client sends a command over TCP, the server processes it, and the server sends the response back over TCP. Even on localhost, this round-trip takes 50-100 microseconds. On a local network, it takes 200-500 microseconds. Cross-region, it takes 10-100 milliseconds.

For small values (a few hundred bytes), the network overhead is a small fraction of the total latency. For large values (50KB+), serialization and deserialization dominate. A 50KB JSON blob must be serialized to bytes on the client, sent over TCP, stored in Redis, and then the reverse on retrieval. The serialization alone can take 100-200 microseconds for complex structures.

This network overhead is the same in Valkey and Redis because it is inherent to the client-server architecture, not to the server implementation. Any system that communicates over TCP will have this overhead. The only way to eliminate it is to move the cache into the application process -- which is what in-process caching does.
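The floor on that overhead is easy to observe without Redis or Valkey at all. The sketch below times round-trips against a do-nothing echo server on loopback; even with zero server-side work, each round-trip costs tens of microseconds:

```python
# Measure raw TCP round-trip cost on loopback -- an illustrative sketch,
# not a Redis benchmark. The server does nothing but echo bytes back.
import socket
import threading
import time

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

payload = b"x" * 100  # a small cache value
rounds = 1000
start = time.perf_counter()
for _ in range(rounds):
    client.sendall(payload)
    received = client.recv(4096)
elapsed = time.perf_counter() - start

print(f"avg loopback round-trip: {elapsed / rounds * 1e6:.1f} us")
client.close()
```

Run against a real Redis or Valkey instance on another host, the same loop would add network transit and command processing on top of this baseline.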

The Serialization Tax

Everything stored in Redis/Valkey must be serialized to a wire-compatible format. If your application works with structured data (JSON objects, protobuf messages, application-specific types), you pay a serialization cost on every write and a deserialization cost on every read. For a typical JSON object of 5KB, serialization takes approximately 10-20 microseconds and deserialization takes approximately 15-30 microseconds. This cost is invisible in benchmarks that test with pre-serialized strings but very visible in real applications that work with structured data.
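The tax is straightforward to measure with the standard json module. The object shape and size below are illustrative, chosen to land near the 5 KB figure above:

```python
# Measure the serialization tax on a roughly 5 KB structured value.
import json
import time

# An illustrative structured value of roughly 5 KB.
obj = {
    "user_id": 42,
    "items": [
        {"sku": f"sku-{i}", "qty": i % 7, "tags": ["a", "b"]}
        for i in range(100)
    ],
}

rounds = 1000
start = time.perf_counter()
for _ in range(rounds):
    wire = json.dumps(obj)          # paid on every cache write
serialize_us = (time.perf_counter() - start) / rounds * 1e6

start = time.perf_counter()
for _ in range(rounds):
    restored = json.loads(wire)     # paid on every cache read
deserialize_us = (time.perf_counter() - start) / rounds * 1e6

print(f"serialize: {serialize_us:.1f} us, deserialize: {deserialize_us:.1f} us")
```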

The serialization tax compounds with the network overhead. A cache read that involves deserializing a 5KB JSON object from Redis takes: 200 microseconds (network round-trip) + 15 microseconds (deserialization) = 215 microseconds. The same read from an in-process cache where the object is already in application memory takes: 31 nanoseconds (hash lookup). That is a 6,935x difference. The data structure does not need to be serialized or deserialized because it never leaves the process.

| Operation | Redis/Valkey | In-Process (Cachee) | Difference |
|---|---|---|---|
| Small value GET (100 bytes) | 120 us | 31 ns | 3,871x |
| Medium value GET (5 KB) | 215 us | 31 ns | 6,935x |
| Large value GET (50 KB) | 1,500 us | 31 ns | 48,387x |
| SET + GET round-trip | 350 us | 62 ns | 5,645x |
| Pipeline (100 commands) | 1,200 us | 3,100 ns | 387x |

The Real Question

The Valkey vs Redis debate is about licensing. The actual engineering question is different: should you be using a network cache at all for your hot-path data?

Network caches (Redis, Valkey, Memcached) solve a specific problem: sharing cached data across multiple application instances. If you have 20 application servers that all need to read the same session data, a centralized cache avoids duplicating the data 20 times. The network overhead is the cost of shared access.

But most cache reads are not shared. In a typical application, 70-80% of cache reads are for data that only one instance needs right now. The user's session, the current request's configuration, the recently computed result. These reads go over the network to a centralized cache and come back, paying 200+ microseconds of round-trip latency for data that could have been served from local memory in 31 nanoseconds.

The L1/L2 Architecture

The solution is not "replace Redis/Valkey." It is "add an L1 in front of it." The architecture mirrors CPU cache hierarchies: L1 is fast and local (in-process, 31 nanoseconds), L2 is slower and shared (Redis/Valkey, 200+ microseconds). Hot data lives in L1. Cold data lives in L2. Cache misses at L1 fall through to L2. Cache misses at L2 fall through to the database.

def get(key):
    # L1: in-process cache (31 ns)
    value = l1_cache.get(key)
    if value is not None:  # explicit None check so falsy values (0, "") count as hits
        return value

    # L2: Redis/Valkey (200 us)
    value = redis.get(key)
    if value is not None:
        l1_cache.set(key, value, ttl=60)  # promote to L1 with a short TTL
        return value

    # L3: database (15 ms)
    value = database.query(key)
    redis.set(key, value, ttl=3600)
    l1_cache.set(key, value, ttl=60)
    return value

In this architecture, the choice between Redis and Valkey for the L2 layer is genuinely unimportant. Both perform identically. Both speak RESP. Both provide the same data structures. The L1 layer handles 70-80% of reads at 31 nanoseconds. The L2 layer handles 15-25% of reads at 200 microseconds. The database handles roughly 5% of reads at 15 milliseconds. Among cache-served reads, the weighted average latency drops from 200 microseconds (every read hitting L2) to roughly 50 microseconds, now dominated by the residual L2 traffic; database misses add the same tail in either setup.
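The weighted average is simple arithmetic. A sketch using the hit rates from the paragraph above (75% L1, 20% L2, 5% database):

```python
# Weighted average read latency, before and after adding an L1 tier.
L1_NS, L2_NS, DB_NS = 31, 200_000, 15_000_000  # 31 ns, 200 us, 15 ms

# Without L1: every cache-served read pays the L2 round-trip;
# 5% of reads still miss through to the database.
before_us = (0.95 * L2_NS + 0.05 * DB_NS) / 1000

# With L1: 75% L1 hits, 20% L2 hits, 5% database misses.
after_us = (0.75 * L1_NS + 0.20 * L2_NS + 0.05 * DB_NS) / 1000

# Among cache hits alone, the drop is from 200 us to roughly 50 us.
cache_hits_after_us = (0.75 * L1_NS + 0.25 * L2_NS) / 1000

print(f"before: {before_us:.0f} us, after: {after_us:.0f} us, "
      f"cache hits only: {cache_hits_after_us:.1f} us")
```

The arithmetic also shows where the remaining time goes: once L1 absorbs the hot reads, the database-miss tail dominates the overall average, which is an argument for raising the cache hit rate, not for a faster L2.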

When to Choose Valkey Over Redis

If you are already using Redis and the license change does not affect your use case (you are using Redis as a cache in your own application, not offering it as a service), there is no technical reason to migrate to Valkey. The migration has a cost (testing, deployment changes, monitoring updates) and zero performance benefit. Stay on Redis. Spend the engineering time on something that moves the needle.

If the license change does affect you -- you are a cloud provider, you embed Redis in a product, or you offer Redis-as-a-service -- Valkey is the obvious choice. It is the same codebase under a permissive license. The migration is straightforward: Valkey is a drop-in replacement for Redis 7.2. Point your clients at the Valkey server. Everything works.

If you are starting a new project and have no existing Redis dependency, Valkey has a slight edge: it is BSD-licensed, which gives you more flexibility if your product direction changes. There is no downside to choosing Valkey over Redis for new projects. The performance is identical, the ecosystem compatibility is identical (Valkey speaks RESP, so all Redis clients work), and the license is more permissive.

When to Choose Neither

Both Redis and Valkey are network caches. They add 200+ microseconds of latency per operation. If your application needs sub-microsecond cache reads, neither Redis nor Valkey will deliver. This is not a criticism of either project. It is a statement about the physical limitations of network communication. Light through fiber takes time. TCP handshakes take time. Serialization takes time. No amount of software optimization can reduce a network round-trip to 31 nanoseconds.

If your hot-path latency budget is under 100 microseconds, you need an in-process cache. If your hot-path latency budget is under 1 microsecond, you need an in-process cache with no serialization overhead -- a native data structure in application memory, accessed by pointer dereference, with a hash-based lookup that completes in tens of nanoseconds.
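A minimal sketch of such an in-process cache -- a dict plus expiry timestamps, with no eviction policy or thread safety, and not the design of any particular library:

```python
# A minimal in-process TTL cache: native objects, no serialization,
# no network. A read is a hash lookup plus a timestamp comparison.
import time

class InProcessCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry on read
            return default
        return value  # returned by reference: no deserialization

cache = InProcessCache()
cache.set("user:42", {"name": "Ada"}, ttl=60)
print(cache.get("user:42"))
```

A production implementation additionally needs a size bound with an eviction policy (LRU or similar) and, in threaded servers, locking or a concurrent map; the lookup path itself stays a hash probe.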

This is the tier of caching that Redis and Valkey do not address. Not because they are poorly engineered, but because they are designed for a different problem. They are designed for shared, persistent, structured data storage with network access. In-process caching is designed for fast, local, ephemeral data access without network overhead. The two are complementary, not competitive.

The Migration Path

If you decide to migrate from Redis to Valkey, the process is straightforward because Valkey is protocol-compatible with Redis 7.2. Here is the practical migration checklist.

Step 1: Verify compatibility. Run your test suite against a Valkey instance. If your tests pass against Redis 7.2, they will pass against Valkey. If you use Redis modules (RedisJSON, RediSearch, RedisTimeSeries), check whether Valkey-compatible versions exist. Some modules are maintained by Redis Ltd. and may not be available for Valkey.

Step 2: Deploy Valkey alongside Redis. Run both systems in parallel. Mirror write traffic to both. Compare read results. This validates that Valkey produces identical results for your workload. Run this parallel deployment for at least one week to catch edge cases.

Step 3: Switch reads to Valkey. Point read traffic at Valkey while continuing to write to both systems. Monitor latency, error rates, and data consistency. If everything looks good after 48 hours, proceed to step 4.

Step 4: Switch writes to Valkey. Point all traffic at Valkey. Keep Redis running but idle for 72 hours as a rollback target. After 72 hours with no issues, decommission Redis.
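Steps 2 and 3 can be implemented with a thin client wrapper that mirrors writes and optionally compares reads. A sketch, assuming primary and secondary are any client objects exposing get/set (for example, two redis-py clients pointed at Redis and Valkey):

```python
# Dual-write wrapper for the parallel-run phase of a migration.
# The secondary is best-effort: its failures are logged, never
# surfaced to the caller.
import logging

log = logging.getLogger("migration")

class MirroredCache:
    def __init__(self, primary, secondary, compare_reads=False):
        self.primary = primary
        self.secondary = secondary
        self.compare_reads = compare_reads

    def set(self, key, value):
        self.primary.set(key, value)
        try:
            self.secondary.set(key, value)  # mirror the write
        except Exception:
            log.exception("secondary write failed for %r", key)

    def get(self, key):
        value = self.primary.get(key)
        if self.compare_reads:
            try:
                shadow = self.secondary.get(key)
                if shadow != value:
                    log.warning("mismatch for %r: %r != %r", key, value, shadow)
            except Exception:
                log.exception("secondary read failed for %r", key)
        return value

# Illustrative stand-ins for two real clients.
class DictClient:
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value):
        self.data[key] = value

redis_client, valkey_client = DictClient(), DictClient()
cache = MirroredCache(redis_client, valkey_client, compare_reads=True)
cache.set("k", "v")
print(cache.get("k"))
```

Flipping which client is primary implements step 3 (reads from Valkey, writes to both) without further code changes.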

The entire migration can be completed in two weeks with minimal risk because the two systems are functionally identical. The risk is not technical -- it is operational. New monitoring dashboards, updated runbooks, different package repositories, and different support channels. These are worth planning for but are not engineering challenges.

What Matters More Than the Fork

The Redis-to-Valkey fork consumed enormous amounts of attention in the infrastructure community. Blog posts, conference talks, migration guides, benchmark comparisons. The attention was disproportionate to the technical impact, which was effectively zero for most users. The fork changed the governance and licensing of a project. It did not change the architecture, the performance, or the limitations.

What deserves more attention is the question that the fork did not address: why are we sending cache reads over the network at all? Redis was created in 2009. The assumptions of 2009 -- that application servers were stateless VMs with limited memory, that centralized data stores were necessary for consistency, that network latency was acceptable for all access patterns -- are not the assumptions of 2026. Modern application servers have 64-256 GB of RAM. Container orchestration provides persistent local storage. In-process data structures provide sub-microsecond access without network overhead.

The next evolution in caching is not a better network cache. It is adding an in-process L1 tier in front of whatever network cache you already use -- Redis, Valkey, Memcached, or anything else that speaks a standard protocol. The L1 tier handles the hot reads. The L2 tier handles shared state. Each tier does what it is good at.

The Bottom Line

Valkey is Redis under a different license. Same code, same architecture, same performance, same limitations. If the Redis license change affects you (cloud provider, embedded product, managed service), switch to Valkey -- it is a drop-in replacement. If it does not affect you, stay on Redis. Either way, the bigger optimization is adding an in-process L1 cache in front of your network cache. That is where the 6,000x latency reduction lives -- not in choosing between two identical network caches.

31ns in-process reads in front of Redis or Valkey. Add L1 caching to your stack.

brew install cachee