
Redis SLOWLOG Explained: Find and Fix Your Slowest Commands

Your Redis instance is “fast.” You know this because you read it on the internet, because the dashboard says average latency is 0.2ms, and because nobody has complained yet. But averages lie. Run SLOWLOG GET 10 on any production Redis instance that has been running for more than a week and you will find commands taking 10–100ms hiding in plain sight. These are the commands dragging your P99 into territory that makes your SLA nervous. Here is how to find them, understand what they mean, fix the ones you can, and — more importantly — prevent them from mattering at all.

How to Read SLOWLOG

Redis maintains an internal log of commands that exceed a configurable execution-time threshold. This log lives entirely in memory — it does not touch disk, it does not require any external monitoring tool, and it has been available since Redis 2.2.12. Despite this, most teams never check it. The default threshold is 10,000 microseconds (10ms), and the default log length is 128 entries. Both are configurable at runtime without a restart.

# Check your current threshold (in microseconds)
CONFIG GET slowlog-log-slower-than
# Returns: "slowlog-log-slower-than" "10000" (10ms default)

# Lower it to 1ms to catch more offenders
CONFIG SET slowlog-log-slower-than 1000

# Check how many entries to keep
CONFIG GET slowlog-max-len
# Returns: "slowlog-max-len" "128"

# Increase to 256 for busier instances
CONFIG SET slowlog-max-len 256

To read the log, run SLOWLOG GET 10 to fetch the 10 most recent entries. Each entry contains four fields: a unique ID (incrementing integer), a Unix timestamp of when the command executed, the execution duration in microseconds (this is pure execution time — it does not include network latency or queue wait time), and the command with its arguments. Starting with Redis 4.0, entries also include the client IP address and name, which tells you which application instance is responsible.

SLOWLOG GET 10
# Example output:
# 1) 1) (integer) 14           <-- unique log entry ID
#    2) (integer) 1711324800   <-- Unix timestamp
#    3) (integer) 34210        <-- duration: 34.2ms
#    4) 1) "KEYS"              <-- the offending command
#       2) "session:*"
#    5) "10.0.1.42:52340"      <-- client address (Redis 4.0+)
#    6) "web-api-prod"         <-- client name
Pro tip: Set slowlog-log-slower-than to 1000 (1ms) in production. The default 10ms threshold only catches catastrophic commands. Most performance damage comes from commands in the 1–5ms range that execute thousands of times per minute. At 1ms threshold on a busy instance, you will fill 128 entries in minutes — increase slowlog-max-len to 256 or 512 to compensate.

The 5 Commands That Always Show Up

After reviewing SLOWLOG output from hundreds of production Redis instances, the same five commands appear with remarkable consistency. Each one is an O(n) operation masquerading as a simple key-value lookup, and each one has a direct, drop-in replacement.

1. KEYS pattern (O(n) full keyspace scan)

KEYS session:* looks innocent. It scans the entire keyspace to find matching keys. On an instance with 2 million keys, this takes 30–80ms and blocks all other commands for the duration because Redis is single-threaded. If you have 10 million keys, expect 150–400ms of complete blockage. Every request queued behind a KEYS command pays the penalty.

The fix: Replace KEYS with SCAN. SCAN is cursor-based and non-blocking — it returns a small batch of results per call and yields execution between batches, allowing other commands to interleave. The total time to scan the keyspace is roughly the same, but no single call blocks for more than a few hundred microseconds.

# BAD: blocks Redis for 30-80ms on 2M keys
KEYS session:*

# GOOD: cursor-based, non-blocking, returns roughly COUNT keys per call
SCAN 0 MATCH session:* COUNT 100
# Returns: cursor + batch. Repeat with the returned cursor until it is 0.
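The "repeat until the cursor is 0" loop lives in your application. Here is a pure-Python sketch of that cursor loop: `scan_page` stands in for a real client's SCAN call and is driven against an in-memory stub, not a live Redis, so the paging mechanics are visible without a server.

```python
import fnmatch

def scan_iter(scan_page, match, count=100):
    """Drive a SCAN-style cursor loop: call scan_page until the cursor returns to 0."""
    cursor = 0
    while True:
        cursor, batch = scan_page(cursor, match, count)
        yield from batch
        if cursor == 0:
            break

# In-memory stand-in for Redis SCAN: pages through a fixed key list.
KEYS = [f"session:{i}" for i in range(250)] + ["user:1", "user:2"]

def fake_scan_page(cursor, match, count):
    page = KEYS[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(KEYS):
        next_cursor = 0  # 0 signals the scan is complete
    matched = [k for k in page if fnmatch.fnmatch(k, match)]
    return next_cursor, matched

sessions = list(scan_iter(fake_scan_page, "session:*", count=100))
print(len(sessions))  # 250 -- every session key, collected in small batches
```

Each `scan_page` call returns quickly, so other commands interleave between batches; the total work is the same as KEYS, but no single call blocks the server.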

2. HGETALL on large hashes

A hash with 500 fields is a perfectly valid Redis data structure. Calling HGETALL on it returns all 500 field-value pairs in a single response — which means Redis must serialize 500 entries, and your client must deserialize them. On a hash with 1,000+ fields, this regularly hits 5–15ms. Worse, it transfers the entire payload over the network even if you only need 3 fields.

The fix: Use HMGET to fetch only the fields you need. If you genuinely need all fields in a batch operation, use HSCAN to iterate incrementally. If a hash consistently exceeds 500 fields, reconsider the data model — partition it into multiple smaller hashes by category or time window.
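The partitioning option can be as simple as a key-naming helper. This is an illustrative sketch, not a prescribed schema: the `user:{id}:events:{bucket}` pattern is a hypothetical naming scheme for splitting one giant hash into hourly sub-hashes, each small enough that even HGETALL on a single bucket stays cheap.

```python
def bucket_key(user_id: int, unix_ts: int, window_secs: int = 3600) -> str:
    """Route a field to an hourly sub-hash instead of one unbounded hash."""
    return f"user:{user_id}:events:{unix_ts // window_secs}"

# Two events an hour apart land in different, independently small hashes.
print(bucket_key(42, 1711324800))  # user:42:events:475368
print(bucket_key(42, 1711328400))  # user:42:events:475369
```

Reads then target one bucket (or a short, known range of buckets) rather than the whole history.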

3. SORT on unsorted collections

SORT is one of Redis’s most expensive commands. It copies the collection into a temporary array, sorts it (O(n log n)), and optionally fetches external keys for each element via GET patterns. On a list of 10,000 elements with external key lookups, a single SORT command can take 50–200ms. If you are using SORT with BY and GET patterns in production, you are running a mini database query inside Redis.

The fix: Move sorting to your application layer. Fetch the raw data with LRANGE (paginated) or SSCAN, sort in your application code, and cache the sorted result in a separate key if the sort is expensive and frequently requested. For leaderboards and ranked data, use sorted sets (ZADD / ZRANGE) which maintain sort order on insert at O(log n) per entry.
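Application-side sorting is usually a one-liner once the raw rows are in hand. A minimal sketch, assuming rows have already been fetched via paginated LRANGE or SSCAN and deserialized client-side (the dict shape is illustrative):

```python
# Rows as they might arrive from paginated reads, deserialized client-side.
rows = [
    {"user": "ada", "score": 1200},
    {"user": "lin", "score": 3400},
    {"user": "mei", "score": 2100},
]

# Sort in the application instead of asking Redis to SORT ... BY ... GET.
leaderboard = sorted(rows, key=lambda r: r["score"], reverse=True)
print([r["user"] for r in leaderboard])  # ['lin', 'mei', 'ada']
```

If the same sorted view is requested repeatedly, cache the result under its own key, or maintain it as a sorted set so the order is paid for incrementally at write time.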

4. LRANGE 0 -1 on long lists

LRANGE mylist 0 -1 retrieves every element from a list. On a list with 50,000 entries, this is a 20–60ms operation that also generates a massive response payload. Teams commonly use this pattern for activity feeds, job queues, and event logs — exactly the data structures that grow unbounded in production.

The fix: Paginate. LRANGE mylist 0 49 fetches the first 50 elements in under 0.1ms. If you need the full list for a batch job, use LRANGE in chunks of 100–500 with application-side aggregation. Also consider LTRIM to cap list length — most activity feeds only need the last 1,000 entries.
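For batch jobs, the chunking reduces to generating (start, stop) index pairs, remembering that LRANGE's stop index is inclusive. A small sketch of that pagination helper (the function name is illustrative):

```python
def lrange_chunks(total_len: int, chunk: int = 100):
    """Yield (start, stop) pairs for chunked LRANGE reads.

    stop is inclusive, matching LRANGE semantics."""
    for start in range(0, total_len, chunk):
        yield start, min(start + chunk - 1, total_len - 1)

# A 50,000-entry list becomes 500 sub-millisecond reads
# instead of one 20-60ms LRANGE 0 -1.
pages = list(lrange_chunks(50_000, 100))
print(pages[0], pages[-1], len(pages))  # (0, 99) (49900, 49999) 500
```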

5. SMEMBERS on large sets

SMEMBERS returns every member of a set. Like HGETALL and LRANGE 0 -1, it is an unbounded read. A set with 100,000 members produces a response that takes 15–40ms to serialize and transfer. The fix is the same pattern: use SSCAN for incremental iteration, or SRANDMEMBER if you need a sample, or SISMEMBER if you are checking membership for a specific element.

Command          Replacement
KEYS             Use SCAN instead
HGETALL          Use HMGET / HSCAN
SORT             Use ZRANGE / app-side sort
LRANGE 0 -1      Paginate with LRANGE
SMEMBERS         Use SSCAN

Why Fixing Commands Isn’t Enough

You found the KEYS calls. You replaced them with SCAN. You broke up the HGETALL calls into targeted HMGET calls. You paginated your LRANGE reads. Your SLOWLOG is clean — no entries above 1ms. You deploy, check the dashboard, and your P99 is still 3ms. What happened?

The floor is the network round-trip. Every Redis command, no matter how optimized, requires a TCP round-trip between your application and the Redis server. Same-rack, that is 0.3–0.5ms. Cross-AZ, it is 1–3ms. Through a VPC peering connection or NAT gateway, it is 2–5ms. TLS adds another 1–2ms per new connection. You cannot SLOWLOG your way past TCP overhead because SLOWLOG only measures execution time inside Redis — it does not see the network latency that dominates your end-to-end P99. A command that executes in 50 microseconds inside Redis still takes 2ms from your application’s perspective if the network round-trip is 1.95ms.

Pipelining helps — batching 10 commands into one round-trip cuts network overhead by 10x for that batch. But it does not eliminate the fundamental problem: every interaction with Redis crosses a network boundary. For latency-sensitive paths where your budget is under 1ms, no amount of Redis tuning can close the gap. The speed of light through fiber is the constraint, and your application is on the wrong side of it.
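The pipelining arithmetic is worth making concrete. This sketch computes pure network cost under the simplifying assumption that each batch costs exactly one round-trip (ignoring serialization and server time):

```python
def network_cost_ms(commands: int, rtt_ms: float, batch: int = 1) -> float:
    """Total network time for a command stream: one round-trip per batch."""
    round_trips = -(-commands // batch)  # ceiling division
    return round_trips * rtt_ms

# 100 commands over a 1ms cross-AZ link:
print(network_cost_ms(100, 1.0))            # 100.0 ms unpipelined
print(network_cost_ms(100, 1.0, batch=10))  # 10.0 ms with 10-command pipelines
```

Even with aggressive batching, the cost never drops below one round-trip, which is exactly the floor the article describes.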

The SLOWLOG blind spot: SLOWLOG measures command execution time inside Redis. It says nothing about network latency, connection pool wait time, serialization overhead, or client-side deserialization. A clean SLOWLOG does not mean fast cache reads — it means Redis is doing its part. The bottleneck has moved to the network. See our guide on reducing Redis latency for strategies to minimize the network component.

Eliminate SLOWLOG Entries Entirely

There is a simpler way to think about this problem. If 99% of your reads never reach Redis, there are no slow commands to find because there are no commands at all. An in-process L1 cache that intercepts reads before they cross the network eliminates Redis from the hot path entirely. Your application serves from a hash table in its own memory — 1.5 microseconds per lookup, no serialization, no TCP, no SLOWLOG entries.
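The read-through pattern at the heart of this is small enough to sketch. This is a minimal illustration of an in-process L1 cache, not Cachee's actual API: `fetch` stands in for a real Redis GET, and the stub records how many reads actually cross the (simulated) network.

```python
class L1Cache:
    """Minimal in-process read-through cache (illustrative, no eviction)."""

    def __init__(self, fetch):
        self._store = {}     # plain dict: microsecond lookups, no network
        self._fetch = fetch  # fallthrough to the system of record (Redis)

    def get(self, key):
        if key in self._store:        # L1 hit: never leaves the process
            return self._store[key]
        value = self._fetch(key)      # L1 miss: one real Redis read
        self._store[key] = value
        return value

calls = []
def fake_redis_get(key):
    calls.append(key)
    return f"value-for-{key}"

cache = L1Cache(fake_redis_get)
cache.get("session:42")
cache.get("session:42")
print(len(calls))  # 1 -- the second read was served in-process
```

A production L1 also needs eviction, TTLs, and invalidation on write; the point here is only that a hit never issues a Redis command at all.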

Redis remains the system of record. Cold reads, writes, and invalidation still flow through it. But the reads that account for 90–99% of your traffic — the reads that generate SLOWLOG entries, that saturate connection pools, that spike P99 during load spikes — are intercepted before they leave the process. No network hop means no network latency. No Redis command means no SLOWLOG entry. The entire debugging workflow described in this article becomes unnecessary because the conditions that create slow commands no longer exist.

This is the architecture that Cachee implements. The L1 tier uses predictive pre-warming to populate the in-process cache before requests arrive, maintaining hit rates above 99%. The result is sub-millisecond latency on every read — not because Redis got faster, but because Redis got bypassed. Your SLOWLOG stays empty not because you fixed every command, but because the commands never execute.

The math is simple: with a 99.2% L1 hit rate and a 1.5µs L1 lookup, 99.2% of your reads complete in 0.0015ms. The remaining 0.8% fall through to Redis at 1–3ms. The effective average read latency is 0.992 × 0.0015ms + 0.008 × 2ms ≈ 0.018ms, and because misses account for under 1% of reads, your P99 is effectively the L1 lookup time itself rather than the 3ms Redis-only figure. SLOWLOG entries become a rounding error because Redis only handles 1 in 125 reads.
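The blended-latency arithmetic above can be checked directly. The 2ms Redis figure is an assumed mid-range cross-AZ read, per the earlier numbers:

```python
l1_hit_rate = 0.992
l1_us = 1.5       # in-process lookup, microseconds
redis_ms = 2.0    # assumed mid-range cross-AZ Redis read

# Weighted average of the hit path and the miss path.
effective_ms = l1_hit_rate * (l1_us / 1000) + (1 - l1_hit_rate) * redis_ms
print(round(effective_ms, 4))  # 0.0175 ms average read latency
```

Note that the miss path dominates the average even at a 99.2% hit rate, which is why pushing the hit rate higher pays off disproportionately.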

The Monitoring Stack

Even with an L1 cache intercepting most reads, you should still monitor the Redis commands that do execute. The production monitoring stack has three layers that work together.

SLOWLOG is your real-time detector. Set the threshold to 1ms (slowlog-log-slower-than 1000) and poll it every 60 seconds with SLOWLOG GET 25. Alert if any entry exceeds 5ms, or if the total count of new entries exceeds 50 per minute. Reset the log after reading with SLOWLOG RESET to avoid processing the same entries twice.
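The alerting side of that polling loop is a small filter over the reply structure. A sketch, assuming entries have already been fetched and decoded into tuples mirroring the SLOWLOG GET reply shape from Redis 4.0+ (id, timestamp, duration in µs, args, client address, client name); the function name is illustrative:

```python
def slow_entries(entries, threshold_us=5000):
    """Filter raw SLOWLOG entries down to those worth alerting on (>5ms default)."""
    return [e for e in entries if e[2] > threshold_us]

# Decoded sample mirroring the SLOWLOG GET reply from Redis 4.0+.
sample = [
    (14, 1711324800, 34210, ["KEYS", "session:*"], "10.0.1.42:52340", "web-api-prod"),
    (15, 1711324801, 900,   ["GET", "user:1"],     "10.0.1.43:52341", "web-api-prod"),
]
alerts = slow_entries(sample)
print(len(alerts), alerts[0][3][0])  # 1 KEYS
```

In a real poller this runs every 60 seconds, fires the alert, then issues SLOWLOG RESET so the next poll only sees new entries.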

INFO commandstats is your aggregate view. It shows the total calls, total microseconds, and average microseconds per command type. Run INFO commandstats and look for commands with high average latency (above 500µs) or disproportionately high total time. If HGETALL accounts for 40% of total Redis CPU time, that is where your optimization effort belongs — even if individual calls do not exceed the SLOWLOG threshold.
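Turning the commandstats text into something you can rank requires a small parser. A sketch against the documented `cmdstat_<name>:k=v,k=v` line format (real Redis emits additional fields on newer versions, which this handles the same way):

```python
def parse_commandstats(info_text):
    """Parse the commandstats section of INFO into {command: {stat: float}}."""
    stats = {}
    for line in info_text.splitlines():
        if not line.startswith("cmdstat_"):
            continue
        name, _, fields = line.partition(":")
        cmd = name[len("cmdstat_"):]
        stats[cmd] = {k: float(v) for k, v in
                      (pair.split("=") for pair in fields.split(","))}
    return stats

sample = "cmdstat_hgetall:calls=52000,usec=41600000,usec_per_call=800.00"
stats = parse_commandstats(sample)
print(stats["hgetall"]["usec_per_call"])  # 800.0
```

Sorting the result by total `usec` surfaces the commands eating the most Redis CPU, independent of whether any single call crossed the SLOWLOG threshold.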

The Cachee dashboard ties it together. It shows L1 hit rate, L1 miss rate (commands that fall through to Redis), Redis command breakdown, and per-key access patterns. When a specific key starts generating SLOWLOG entries, the dashboard shows whether it is an L1 miss (cache the key) or a write-path command (expected). Combined, these three layers give you complete visibility from the L1 hot path down to individual Redis command execution.

# Quick health check script: run every 60 seconds

# 1. Check for new slow commands
SLOWLOG GET 25

# 2. Check command stats for hot commands
INFO commandstats
# Look for: cmdstat_keys, cmdstat_hgetall, cmdstat_smembers
# High calls + high usec_per_call = optimization target

# 3. Check overall latency
INFO latencystats

# 4. Reset SLOWLOG after processing
SLOWLOG RESET
