
Six Redis Patterns Every Laravel App Eventually Needs


Al Amin Ahamed

Senior Engineer

10 min read

Redis is the Swiss army knife of Laravel infrastructure. The same Redis instance can do session storage, cache, queue, rate limiting, locks, and pub/sub. Each pattern has a sweet spot — and a way to break things if used wrong.

Cache

The most common use, also the most misused.

// Tag invalidation
Cache::tags(['posts', 'sidebar'])->put('top_posts', $posts, 3600);
Cache::tags('posts')->flush(); // invalidates anything tagged 'posts'

// Atomic remember
$posts = Cache::remember('top_posts', 3600, fn () => Post::featured()->get());

The remember pattern looks atomic but isn't. Two concurrent requests can both miss the cache and both run the closure. For expensive closures, use flexible (added in Laravel 11):

Cache::flexible('top_posts', [3600, 7200], fn () => Post::featured()->get());

flexible accepts a fresh window and a stale window: serve the cached value for up to 1 hour; between 1 and 2 hours, return the stale value immediately and regenerate in the background after the response is sent. This is the Stale-While-Revalidate (SWR) pattern. Brilliant for traffic-heavy pages.
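
Before flexible existed (or where a stale window doesn't fit), a lock gives the same stampede protection. A minimal sketch using the key names from above; only the first request regenerates, the rest block briefly and re-read:

$posts = Cache::get('top_posts');

if ($posts === null) {
    // Only one request acquires the lock; others wait up to 5 seconds
    // (block throws LockTimeoutException if the wait runs out).
    $posts = Cache::lock('top_posts:lock', 10)->block(5, function () {
        // Re-check inside the lock: another request may have filled the cache.
        return Cache::remember('top_posts', 3600, fn () => Post::featured()->get());
    });
}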

Sessions

SESSION_DRIVER=redis

Redis sessions are faster than database sessions and work across multiple web servers without sticky sessions. They expire automatically, so there's no cleanup cron.

Gotcha: Redis is in-memory. A reboot loses all sessions, logging users out. Set Redis to persist (appendonly yes) for production.
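
The relevant redis.conf lines, with everysec as the usual durability/throughput middle ground (it's also the AOF default):

appendonly yes        # AOF persistence: replay writes after a restart
appendfsync everysec  # fsync the AOF once per second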

Queues

QUEUE_CONNECTION=redis

The default Redis driver is fine for ~10k jobs/minute. Above that:

  • Use multiple Redis databases for different queue names (avoids hot keys)
  • Use Redis Cluster if your queue key count exceeds ~100k
  • Consider Horizon for monitoring; the dashboard pays for itself

The most common bug: queue workers and the web app sharing the same Redis instance for cache and queue. Redis executes commands on a single thread, so a heavy cache operation (a large MGET, a stray KEYS) stalls queue reads on the same instance. Solution: separate Redis instances, or at minimum separate database numbers.

REDIS_DB=0        # default connection
REDIS_CACHE_DB=0  # cache
REDIS_QUEUE_DB=1  # queue
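
Wiring that up takes a dedicated Redis connection. A sketch of the config/database.php side, assuming the env vars above (the 'queue' connection name is illustrative; point config/queue.php's redis connection at it instead of 'default'):

// config/database.php (trimmed to the relevant keys)
'redis' => [

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
    ],

    'cache' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_CACHE_DB', 0),
    ],

    'queue' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_QUEUE_DB', 1),
    ],
],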

Rate Limiting

Laravel's RateLimiter facade uses Redis under the hood:

RateLimiter::attempt(
    'login:' . $request->ip(),
    $perMinute = 5,
    fn () => $this->doLogin($request),
    $decaySeconds = 60,
);

Per-IP login throttling: 5 attempts per minute, then locked out.

For routes:

Route::middleware('throttle:60,1')->group(/* ... */);

60 requests per minute per authenticated user (or per IP if guest).
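
The per-user-or-IP split can be made explicit with a named limiter, defined in a service provider's boot method:

use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('api', function (Request $request) {
    // Keyed by user ID when authenticated, IP otherwise.
    return Limit::perMinute(60)->by($request->user()?->id ?: $request->ip());
});

// Then: Route::middleware('throttle:api')->group(/* ... */);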

Atomic Counters

For real-time stats:

Redis::incr('post:42:views');          // atomic increment
Redis::expire('post:42:views', 86400); // 24h TTL (note: resets on every call)
$views = (int) Redis::get('post:42:views');

Flush the counters to the database asynchronously with a daily scheduled command:

Schedule::command('stats:flush')->daily();

Critical: Redis is not durable for counters. A crash loses uncommitted increments. For exact billing counters, use the database. For approximate analytics counters, Redis is fine.
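
A sketch of what the flush command's handle method might do. Assumptions: no Redis key prefix is configured, Redis 6.2+ for GETDEL, and a hypothetical view_count column on posts:

use Illuminate\Support\Facades\Redis;

// KEYS is O(N): fine for a few thousand counters, prefer SCAN beyond that.
foreach (Redis::keys('post:*:views') as $key) {
    // GETDEL reads and deletes atomically, so no increments are lost in between.
    $views = (int) Redis::getdel($key);

    if ($views > 0) {
        [, $postId] = explode(':', $key);
        Post::whereKey($postId)->increment('view_count', $views);
    }
}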

Locks

Distributed locks prevent two workers from processing the same job:

Cache::lock('process-order:' . $orderId, 60)->block(5, function () {
    // Critical section. Times out after 5 seconds of waiting.
});

The block parameter waits up to N seconds for the lock. The 60 is the lock's max hold time — if the holder dies, the lock auto-releases after 60 seconds.

Without distributed locks, race conditions are real. With them, you're trading occasional timeouts under high contention for correctness.
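
When skipping is acceptable (say, a scheduled job that will run again anyway), the non-blocking form trades the wait for an early exit. A sketch:

$lock = Cache::lock('process-order:' . $orderId, 60);

if ($lock->get()) {
    try {
        // Critical section: we hold the lock.
    } finally {
        $lock->release();
    }
}
// else: another worker holds it; skip and let the next run pick it up.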

Pub/Sub for Real-Time

For broadcasting events to WebSocket clients:

Redis::publish('order.created', json_encode($order));

// In a worker
Redis::subscribe(['order.created'], function (string $message) {
    // dispatch to ws clients
});

Laravel Echo's Redis broadcaster uses this internally. For most apps, just configure broadcasting in config/broadcasting.php and let Laravel handle the pub/sub.
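
For completeness, the high-level path looks like this. A sketch with illustrative class and channel names:

use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class OrderCreated implements ShouldBroadcast
{
    public function __construct(public array $order) {}

    public function broadcastOn(): Channel
    {
        return new Channel('orders');
    }
}

// Laravel serializes the event and publishes it over the Redis broadcaster:
broadcast(new OrderCreated($order->toArray()));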

What I Don't Use Redis For

  • Long-term storage — even with persistence, Redis is RAM-bound. Anything you need to keep, put in Postgres or S3.
  • Complex queries — Redis is key-value at heart. The fancy data structures (sorted sets, hashes) work, but for any structured query I'd rather use Postgres.
  • Transaction-heavy workloads — Redis transactions exist (MULTI/EXEC) but lack the isolation guarantees of Postgres. If you need ACID, Postgres.

Memory Tuning

Redis on a 1GB VPS will OOM-crash if you don't set maxmemory:

maxmemory 800mb
maxmemory-policy allkeys-lru

allkeys-lru evicts least-recently-used keys when full. For a cache use case, this is what you want. For a session store you can't lose, use noeviction and either size up or split workloads across multiple instances.

Monitoring

redis-cli --latency   # measure ms latency from CLI to Redis
redis-cli info        # memory, hit rate, connected clients
redis-cli monitor     # all commands in real-time (don't run on prod)

Hit rate (keyspace_hits / (keyspace_hits + keyspace_misses)) under 80% means your TTLs are too short or your cache is too small.
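
The raw numbers come from the stats section of INFO:

redis-cli info stats | grep -E 'keyspace_(hits|misses)'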

What I'd Tell My Past Self

  • One Redis instance is fine to start, but keep cache and queue on separate DB numbers (separate instances once traffic demands it)
  • Set maxmemory immediately; don't wait for the OOM
  • Turn on persistence (appendonly yes) for sessions; can skip for pure cache
  • Sample, don't trace — Redis monitoring tools that try to capture every command will crash a busy production instance