Fixing 'context deadline exceeded' in Redis Streams — and Getting a 10,000× Speed Boost
How batching Redis Streams consumers and removing timeouts improved performance from 8s to 800μs.
Wed Oct 15 2025
When working with Redis Streams in Go, I ran into a major performance bottleneck that was causing constant context deadline exceeded errors — and spamming my logs. Here’s what happened and how I fixed it.
The Problem
In the original Redis Streams package, each stream consumer was handled by spawning its own goroutine.
That meant that with 40 streams, the app would spin up 40 separate goroutines, each making its own blocking Redis call.
This design quickly became inefficient:
- Each goroutine maintained its own connection and polling loop.
- A 10-second timeout wrapped every loop.
- If no messages arrived in time, the context expired, triggering context deadline exceeded.
The result?
Log spam, unnecessary Redis traffic, and wasted CPU cycles.
The Solution
1. Stream Batching
Instead of one goroutine per stream, I introduced batched consumers.
For example, with 40 streams:
- Group them into batches of 8 streams each (configurable).
- Each batch is handled by one goroutine.
That means 5 goroutines instead of 40, an 87.5% reduction in concurrent workers.
Each goroutine reads every stream in its batch with a single Redis call, so Redis traffic scales with the number of batches, not the number of streams.
2. Remove Fixed Timeout
I removed the 10-second hard timeout completely.
Redis’s blocking read (XREAD BLOCK) already provides a built-in way to wait for new messages efficiently.
This change eliminated unnecessary cancellations and error spam.
3. Add Exponential Backoff
When an error occurs inside the loop, instead of retrying instantly (and flooding the logs), I added exponential backoff.
This ensures retries happen gradually, reducing noise and load.
The Result
The improvement was massive.
| Metric | Before Fix | After Fix |
|---|---|---|
| Event publish time | ~8 seconds | ~800 microseconds |
| Goroutines per 40 streams | 40 | 5 |
| Error logs | Flooded | Silent and stable |
That’s roughly a 10,000× performance improvement — latency dropped by 99.99%.
From seconds to microseconds — Redis Streams now feels instantaneous. 😄
Takeaways
- Avoid per-stream goroutines when you can batch work efficiently.
- Let Redis handle blocking and timeouts — don’t reinvent it in Go.
- Always add exponential backoff for resilient retry loops.