Introduction
Ask most developers what Redis is, and they'll say “a cache.” While that’s true, it’s like calling Kubernetes just a container runner. Redis is an in-memory data structure store that can solve some of the most challenging problems in distributed systems.
If you're building production applications that need to scale, handle real-time features, or manage distributed state, Redis can help with all of them. Yet many teams end up using only a small subset of its features.
What you'll learn:
- How to build stateless, scalable authentication systems with Redis sessions
- Implementing rate limiting to protect your APIs from abuse
- Real-time messaging patterns with Pub/Sub and Streams
- Handling race conditions with distributed locks
- Building background job queues and real-time leaderboards
- Tracking live analytics and metrics at scale
What Makes Redis Special?
Before diving into use cases, let's understand why Redis is so powerful:
In-Memory Performance: All data lives in RAM, giving you microsecond latency. This is significantly faster than disk-backed databases for many workloads.
Rich Data Structures: Unlike simple key-value stores, Redis provides Strings, Hashes, Lists, Sets, Sorted Sets, Streams, and more. Each structure is optimized for specific use cases.
Atomic Operations: Redis operations are atomic by default, making it perfect for distributed systems where consistency matters.
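For example, INCR performs its read-modify-write inside Redis as a single command, so concurrent clients can never lose updates (a minimal sketch):

import { createClient } from "redis";

const redis = createClient();
await redis.connect();

// Safe under concurrency: two servers incrementing at once can never
// clobber each other, because the increment happens inside Redis
const views = await redis.incr("page:home:views");
console.log(`Page viewed ${views} times`);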
Persistence Options: Despite being in-memory, Redis can persist data to disk (RDB snapshots, AOF logs, or both), letting you balance performance against durability.
Now let's explore eight production use cases that go far beyond simple caching.
Use Case 1: Redis as a Session Store
The Problem: Traditional session storage doesn't scale. When you store sessions in server memory, users get logged out if they hit a different server behind a load balancer. Database-backed sessions are slow and create bottlenecks.
The Solution: Redis as a centralized session store enables true stateless architecture. Any server can validate any user's session instantly.
import express from "express";
import session from "express-session";
import RedisStore from "connect-redis";
import { createClient } from "redis";
// Initialize Redis client
const redisClient = createClient({
url: "redis://localhost:6379",
});
await redisClient.connect();
const app = express();
// Configure session middleware with Redis
app.use(
session({
store: new RedisStore({ client: redisClient }),
secret: "your-secret-key",
resave: false,
saveUninitialized: false,
cookie: {
secure: true, // Use in production with HTTPS
httpOnly: true,
maxAge: 1000 * 60 * 60 * 24, // 24 hours
},
})
);
// Login endpoint
app.post("/login", async (req, res) => {
const { username, password } = req.body;
// Validate credentials (simplified)
if (await validateUser(username, password)) {
req.session.userId = username;
req.session.loginTime = Date.now();
res.json({ success: true });
} else {
res.status(401).json({ error: "Invalid credentials" });
}
});
// Protected route
app.get("/dashboard", (req, res) => {
if (!req.session.userId) {
return res.status(401).json({ error: "Not authenticated" });
}
res.json({
user: req.session.userId,
message: "Welcome to your dashboard",
});
});
Explanation: Sessions are stored in Redis with automatic expiration. All your application servers share the same session store, enabling horizontal scaling without sticky sessions.
Production tip: Use Redis Cluster for high availability and set appropriate TTLs to automatically clean up old sessions.
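By default connect-redis derives each session's TTL from the cookie's expiration; you can also pin an explicit fallback (a sketch, assuming connect-redis v7, with the ttl value chosen to match the cookie maxAge above):

// Explicit fallback TTL (in seconds) for sessions without a cookie expiration
const store = new RedisStore({
  client: redisClient,
  ttl: 60 * 60 * 24, // 24 hours
});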
Use Case 2: Rate Limiting with Redis
The Problem: APIs need protection from abuse, DDoS attacks, and runaway client bugs. Rate limiting prevents any single user from overwhelming your system.
The Solution: Redis counters with expiration provide precise, distributed rate limiting across all your servers.
import { createClient } from 'redis';
const redis = createClient();
await redis.connect();
async function rateLimiter(userId, maxRequests = 10, windowSeconds = 60) {
const key = `rate_limit:${userId}`;
// Use Redis pipeline for atomic operations
const multi = redis.multi();
// Increment counter
multi.incr(key);
// Set expiration only if the key has no TTL yet (the NX flag needs Redis 7+)
multi.expire(key, windowSeconds, 'NX');
const results = await multi.exec();
const currentCount = results[0];
return {
allowed: currentCount <= maxRequests,
remaining: Math.max(0, maxRequests - currentCount),
resetIn: windowSeconds // upper bound; TTL(key) returns the exact remaining time
};
}
// Express middleware for rate limiting
async function rateLimitMiddleware(req, res, next) {
const userId = req.session?.userId || req.ip;
const limit = await rateLimiter(userId, 100, 60); // 100 requests per minute
// Set rate limit headers
res.set({
'X-RateLimit-Limit': 100,
'X-RateLimit-Remaining': limit.remaining,
'X-RateLimit-Reset': Date.now() + (limit.resetIn * 1000)
});
if (!limit.allowed) {
return res.status(429).json({
error: 'Too many requests',
retryAfter: limit.resetIn
});
}
next();
}
// Apply to routes
app.use('/api', rateLimitMiddleware);
Advanced pattern: For more sophisticated rate limiting, use the Sliding Window Log pattern with Redis Sorted Sets:
async function slidingWindowRateLimit(
userId,
maxRequests = 100,
windowMs = 60000
) {
const key = `rate_limit:sliding:${userId}`;
const now = Date.now();
const windowStart = now - windowMs;
const multi = redis.multi();
// Remove old entries outside the window
multi.zRemRangeByScore(key, 0, windowStart);
// Count requests in current window
multi.zCard(key);
// Add current request
multi.zAdd(key, { score: now, value: `${now}-${Math.random()}` });
// Set expiration
multi.expire(key, Math.ceil(windowMs / 1000));
const results = await multi.exec();
const requestCount = results[1];
return {
allowed: requestCount < maxRequests,
remaining: Math.max(0, maxRequests - requestCount),
};
}
Why this matters: The sliding window approach prevents burst traffic at window boundaries and provides more accurate rate limiting.
Use Case 3: Redis Pub/Sub for Real-Time Features
The Problem: You need to push updates to connected clients in real-time—chat messages, notifications, live updates. Polling is inefficient, and WebSockets alone don't scale across multiple servers.
The Solution: Redis Pub/Sub provides a lightweight message bus that connects all your servers, enabling real-time broadcast to any connected client.
import { createClient } from "redis";
import { WebSocketServer, WebSocket } from "ws";
// Publisher service (your API server)
const publisher = createClient();
await publisher.connect();
app.post("/chat/message", async (req, res) => {
const { roomId, userId, message } = req.body;
const chatMessage = {
id: Date.now(),
roomId,
userId,
message,
timestamp: new Date().toISOString(),
};
// Publish message to Redis channel
await publisher.publish(`chat:room:${roomId}`, JSON.stringify(chatMessage));
res.json({ success: true, message: chatMessage });
});
// Subscriber service (your WebSocket server)
const subscriber = createClient();
await subscriber.connect();
const wss = new WebSocketServer({ port: 8080 });
const clients = new Map(); // roomId -> Set of WebSocket clients
wss.on("connection", (ws) => {
ws.on("message", async (data) => {
const { action, roomId, userId } = JSON.parse(data);
if (action === "join") {
// Add client to room
if (!clients.has(roomId)) {
clients.set(roomId, new Set());
// Subscribe to this room's channel
await subscriber.subscribe(`chat:room:${roomId}`, (message) => {
// Broadcast to all clients in this room
const roomClients = clients.get(roomId);
if (roomClients) {
roomClients.forEach((client) => {
if (client.readyState === WebSocket.OPEN) {
client.send(message);
}
});
}
});
}
clients.get(roomId).add(ws);
ws.roomId = roomId;
}
});
ws.on("close", () => {
// Clean up on disconnect
if (ws.roomId && clients.has(ws.roomId)) {
clients.get(ws.roomId).delete(ws);
}
});
});
Real-world applications:
- Live chat applications
- Real-time notifications
- Collaborative editing tools
- Live sports scores or stock tickers
- Server-to-server communication
Important limitation: Pub/Sub messages are fire-and-forget. If a subscriber is offline, it misses the message. For guaranteed delivery, use Redis Streams (next section).
Use Case 4: Redis Streams for Event-Driven Architecture
The Problem: Pub/Sub doesn't persist messages. If a service crashes or is temporarily offline, it loses events. You need guaranteed message delivery and the ability to replay events.
The Solution: Redis Streams provide an append-only log similar to Kafka, but simpler to operate and suitable for many event-driven scenarios.
import { createClient } from "redis";
const redis = createClient();
await redis.connect();
// Producer: Add events to stream
async function publishEvent(streamKey, eventData) {
const eventId = await redis.xAdd(
streamKey,
"*", // Auto-generate ID
eventData,
{
  // Keep roughly the last 10k events (optional); node-redis expresses
  // MAXLEN trimming via TRIM, and "~" trims approximately (cheaper)
  TRIM: { strategy: "MAXLEN", strategyModifier: "~", threshold: 10000 },
}
);
return eventId;
}
// Example: Order processing system
await publishEvent("orders:stream", {
orderId: "12345",
userId: "user_789",
amount: "99.99",
status: "created",
timestamp: Date.now().toString(),
});
// Consumer: Process events from stream
async function consumeEvents(streamKey, consumerGroup, consumerName) {
try {
// Create consumer group (only needed once)
await redis.xGroupCreate(streamKey, consumerGroup, "0", {
MKSTREAM: true,
});
} catch (err) {
// Group already exists, ignore
}
while (true) {
// Read new messages
const messages = await redis.xReadGroup(
consumerGroup,
consumerName,
[{ key: streamKey, id: ">" }], // '>' means undelivered messages
{
COUNT: 10,
BLOCK: 5000, // Block for 5 seconds waiting for messages
}
);
if (!messages || messages.length === 0) continue;
for (const { name: streamName, messages: entries } of messages) {
for (const { id, message } of entries) {
try {
// Process the event
await processOrder(message);
// Acknowledge successful processing
await redis.xAck(streamKey, consumerGroup, id);
} catch (error) {
console.error(`Failed to process event ${id}:`, error);
// Event remains unacknowledged and can be retried
}
}
}
}
}
// Start multiple consumers for parallel processing
consumeEvents("orders:stream", "order-processors", "worker-1");
consumeEvents("orders:stream", "order-processors", "worker-2");
// Process the order
async function processOrder(orderData) {
  // Ignore anything that isn't a fresh order so we never reprocess
  // our own follow-up events
  if (orderData.status !== "created") return;
  console.log("Processing order:", orderData.orderId);
  // Business logic here
  // Update inventory, charge payment, send confirmation, etc.
  // Publish the result to a separate stream to avoid a feedback loop
  await publishEvent("orders:processed", {
    orderId: orderData.orderId,
    status: "processed",
    timestamp: Date.now().toString(),
  });
}
Key advantages:
- Guaranteed delivery: Messages are persisted and acknowledged
- Consumer groups: Multiple workers process messages in parallel
- Replay capability: Can reprocess historical events
- Dead letter queue: Handle failed messages with retry logic
- Consumer recovery: Monitor the Pending Entries List (XPENDING) to detect messages that were delivered but never acknowledged; a recovery sketch follows this list.
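A recovery loop might look like this (a sketch assuming node-redis v4 and Redis 6.2+ for XAUTOCLAIM; the stream and group names reuse the example above):

// Claim entries another consumer took but never acknowledged
async function reclaimStuckMessages(streamKey, group, consumer) {
  const { messages } = await redis.xAutoClaim(
    streamKey,
    group,
    consumer,
    60000, // only entries idle for 60+ seconds
    "0" // scan the Pending Entries List from the start
  );
  for (const entry of messages) {
    if (!entry) continue; // entries trimmed from the stream come back as null
    await processOrder(entry.message); // retry under our own consumer name
    await redis.xAck(streamKey, group, entry.id);
  }
}

// Run periodically alongside the normal consumers
setInterval(() => {
  reclaimStuckMessages("orders:stream", "order-processors", "worker-1");
}, 30000);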
When to use Streams vs Pub/Sub:
- Use Pub/Sub for real-time notifications where message loss is acceptable
- Use Streams for critical events requiring at-least-once delivery with acknowledgment tracking
Use Case 5: Distributed Locks with Redis
The Problem: In distributed systems, multiple servers might try to perform the same operation simultaneously—processing the same job twice, double-charging a customer, or corrupting shared state.
The Solution: Redis-based distributed locks ensure only one process can execute a critical section at a time across your entire infrastructure.
import { createClient } from "redis";
import { v4 as uuidv4 } from "uuid";
const redis = createClient();
await redis.connect();
class RedisLock {
constructor(redis, lockKey, ttlSeconds = 10) {
this.redis = redis;
this.lockKey = `lock:${lockKey}`;
this.ttlSeconds = ttlSeconds;
this.lockValue = uuidv4(); // Unique identifier for this lock
}
async acquire() {
// SET with NX (only if not exists) and EX (expiration)
const result = await this.redis.set(this.lockKey, this.lockValue, {
NX: true,
EX: this.ttlSeconds,
});
return result === "OK";
}
async release() {
// Only release if we own the lock (prevent releasing someone else's lock)
const script = `
if redis.call("get", KEYS[1]) == ARGV[1] then
return redis.call("del", KEYS[1])
else
return 0
end
`;
await this.redis.eval(script, {
keys: [this.lockKey],
arguments: [this.lockValue],
});
}
async extend(additionalSeconds) {
// Extend lock expiration if we still own it
const script = `
if redis.call("get", KEYS[1]) == ARGV[1] then
return redis.call("expire", KEYS[1], ARGV[2])
else
return 0
end
`;
return await this.redis.eval(script, {
keys: [this.lockKey],
arguments: [this.lockValue, additionalSeconds.toString()],
});
}
}
// Example: Ensure only one server processes a payment
async function processPayment(orderId, amount) {
const lock = new RedisLock(redis, `payment:${orderId}`, 30);
const acquired = await lock.acquire();
if (!acquired) {
throw new Error("Payment already being processed");
}
try {
// Critical section - only one server executes this
console.log(`Processing payment for order ${orderId}`);
// Check if already processed
const status = await redis.get(`payment:status:${orderId}`);
if (status === "completed") {
return { success: true, message: "Already processed" };
}
// Process payment with external API
await chargeCustomer(amount);
// Mark as completed
await redis.set(`payment:status:${orderId}`, "completed", { EX: 86400 });
return { success: true, message: "Payment processed" };
} finally {
// Always release the lock
await lock.release();
}
}
// Example: Prevent duplicate job execution
async function processScheduledJob(jobId) {
const lock = new RedisLock(redis, `job:${jobId}`, 300); // 5 minute lock
if (await lock.acquire()) {
try {
console.log(`Executing job ${jobId}`);
// Long-running job
for (let i = 0; i < 10; i++) {
await performJobStep(i);
// Extend lock if job is taking longer than expected
if (i % 3 === 0) {
await lock.extend(60);
}
}
console.log(`Job ${jobId} completed`);
} finally {
await lock.release();
}
} else {
console.log(`Job ${jobId} is already running on another server`);
}
}
Critical considerations:
- Always set TTL: Prevents deadlocks if your process crashes
- Use unique values: Ensures you only release your own lock
- Extend if needed: For long-running operations
- Handle failures: Always release locks in finally blocks
Production-ready library: For production use, consider a library such as node-redlock, which implements the Redlock algorithm across multiple independent Redis instances.
Note: For multi-node critical systems (e.g., payments or financial operations), prefer Redlock or database-backed idempotency guarantees rather than relying on a single Redis instance for locking.
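With the node-redlock package, usage looks roughly like this (a sketch based on node-redlock v5, which expects ioredis clients; check your version's docs for the exact API):

import Redis from "ioredis";
import Redlock from "redlock";

// One client per independent Redis node (a single node shown for brevity)
const redlock = new Redlock([new Redis({ host: "localhost" })], {
  retryCount: 3,
  retryDelay: 200, // ms between attempts
});

const lock = await redlock.acquire(["locks:payment:12345"], 30000); // 30s TTL
try {
  // critical section: same payment logic as above
} finally {
  await lock.release();
}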
Use Case 6: Redis Job Queues for Background Processing
The Problem: You can't process time-consuming tasks like sending emails, generating reports, or processing images during HTTP requests. You need reliable background job processing.
The Solution: Redis Lists provide a simple, reliable job queue (a bare-bones sketch follows). For production workloads, BullMQ builds a feature-rich queue system on top of Redis, and the rest of this section uses it.
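For context, the bare-Lists version fits in a few commands (a sketch using LPUSH/BRPOP; you get FIFO delivery but no retries, scheduling, or visibility):

// Producer: push a job onto the list
await redis.lPush("queue:emails", JSON.stringify({ to: "user@example.com" }));

// Consumer: block until a job arrives, then process it
while (true) {
  const entry = await redis.brPop("queue:emails", 5); // 5-second timeout
  if (!entry) continue; // timed out; poll again
  const job = JSON.parse(entry.element);
  await sendEmail(job); // if this throws, the job is simply lost
}

With that baseline in mind, here is the BullMQ version: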
import { Queue, Worker } from "bullmq";
import { createClient } from "redis";
const connection = {
host: "localhost",
port: 6379,
};
// Create queues for different job types
const emailQueue = new Queue("emails", { connection });
const imageQueue = new Queue("image-processing", { connection });
// Producer: Add jobs to queue
app.post("/register", async (req, res) => {
const { email, username } = req.body;
// Create user in database
const user = await createUser({ email, username });
// Queue welcome email (don't wait for it)
await emailQueue.add(
"welcome-email",
{
to: email,
username: username,
userId: user.id,
},
{
attempts: 3, // Retry up to 3 times on failure
backoff: {
type: "exponential",
delay: 2000, // Start with 2 second delay
},
}
);
res.json({ success: true, userId: user.id });
});
// Consumer: Process email jobs
const emailWorker = new Worker(
"emails",
async (job) => {
const { to, username, userId } = job.data;
console.log(`Sending email to ${to}...`);
// Update progress
await job.updateProgress(25);
// Send email via external service
await sendEmail({
to,
subject: `Welcome ${username}!`,
body: generateWelcomeEmail(username),
});
await job.updateProgress(75);
// Log in database
await logEmailSent(userId, "welcome");
await job.updateProgress(100);
return { sent: true, timestamp: Date.now() };
},
{ connection, concurrency: 5 } // process up to 5 emails concurrently
);
// Handle job events
emailWorker.on("completed", (job) => {
console.log(`Job ${job.id} completed successfully`);
});
emailWorker.on("failed", (job, err) => {
console.error(`Job ${job.id} failed:`, err);
// Send alert to monitoring system
});
// Image processing example with priority
app.post("/upload-avatar", async (req, res) => {
const { userId, imageUrl } = req.body;
// High priority job for avatar updates
await imageQueue.add(
"resize-image",
{
userId,
imageUrl,
sizes: [50, 150, 500],
type: "avatar",
},
{
priority: 1, // Higher priority = processed first
attempts: 5,
}
);
res.json({ success: true, status: "processing" });
});
// Scheduled jobs (cron-like)
const reportQueue = new Queue("reports", { connection });
// Add recurring job
await reportQueue.add(
"daily-report",
{ reportType: "sales" },
{
repeat: {
pattern: "0 9 * * *", // Every day at 9 AM
},
}
);
// Get queue metrics
app.get("/admin/queue-stats", async (req, res) => {
const [waiting, active, completed, failed] = await Promise.all([
emailQueue.getWaitingCount(),
emailQueue.getActiveCount(),
emailQueue.getCompletedCount(),
emailQueue.getFailedCount(),
]);
res.json({ waiting, active, completed, failed });
});
Advanced patterns:
// Delayed jobs
await emailQueue.add(
"reminder-email",
{ userId: "123", type: "trial-ending" },
{ delay: 7 * 24 * 60 * 60 * 1000 } // 7 days
);
// Job dependencies (wait for parent job to complete)
const parentJob = await emailQueue.add("send-invoice", { orderId: "456" });
await emailQueue.add(
"send-receipt",
{ orderId: "456" },
// The parent queue name must carry the BullMQ prefix (default "bull")
{ parent: { id: parentJob.id, queue: "bull:emails" } }
);
// Rate limiting jobs (in BullMQ, the limiter is a Worker option, not a Queue option)
const apiWorker = new Worker(
  "api-calls",
  async (job) => callExternalApi(job.data), // hypothetical API call
  {
    connection,
    limiter: {
      max: 10, // 10 jobs
      duration: 1000, // per second
    },
  }
);
Why BullMQ over simple lists:
- Job prioritization and scheduling
- Automatic retries with exponential backoff
- Job dependencies and workflows
- Rate limiting built-in
- Progress tracking and metrics
- Dead letter queue for failed jobs
Use Case 7: Leaderboards with Sorted Sets
The Problem: Building real-time leaderboards efficiently is surprisingly hard. You need fast insertions, updates, and queries for rankings—all while handling millions of scores.
The Solution: Redis Sorted Sets are specifically designed for leaderboards. They maintain sorted order automatically with O(log N) operations.
import { createClient } from "redis";
const redis = createClient();
await redis.connect();
// Add or update a player's score
async function updateScore(gameId, playerId, score) {
const key = `leaderboard:${gameId}`;
// ZADD updates if exists, adds if new
await redis.zAdd(key, { score, value: playerId });
// Optional: Store player metadata separately
await redis.hSet(`player:${playerId}`, {
lastScore: score,
lastUpdated: Date.now(),
});
}
// Get top N players
async function getTopPlayers(gameId, limit = 10) {
const key = `leaderboard:${gameId}`;
// Get top players with scores (descending order)
const results = await redis.zRangeWithScores(key, 0, limit - 1, {
REV: true, // Reverse for highest scores first
});
// Enrich with player data
const players = await Promise.all(
results.map(async ({ value: playerId, score }, index) => {
const playerData = await redis.hGetAll(`player:${playerId}`);
return {
rank: index + 1,
playerId,
score,
...playerData,
};
})
);
return players;
}
// Get player's rank and score
async function getPlayerRank(gameId, playerId) {
const key = `leaderboard:${gameId}`;
// Get rank (0-indexed, so add 1)
const rank = await redis.zRevRank(key, playerId);
if (rank === null) {
return { found: false };
}
// Get score
const score = await redis.zScore(key, playerId);
// Get total players
const totalPlayers = await redis.zCard(key);
return {
found: true,
rank: rank + 1,
score,
totalPlayers,
percentile: (((totalPlayers - rank) / totalPlayers) * 100).toFixed(2),
};
}
// Get players around a specific player (context)
async function getPlayersAround(gameId, playerId, range = 5) {
const key = `leaderboard:${gameId}`;
const rank = await redis.zRevRank(key, playerId);
if (rank === null) return [];
// Get players above and below
const start = Math.max(0, rank - range);
const end = rank + range;
const results = await redis.zRangeWithScores(key, start, end, {
REV: true,
});
return results.map(({ value: id, score }, index) => ({
rank: start + index + 1,
playerId: id,
score,
isCurrentPlayer: id === playerId,
}));
}
// Increment score (useful for tracking metrics)
async function incrementScore(gameId, playerId, points) {
const key = `leaderboard:${gameId}`;
const newScore = await redis.zIncrBy(key, points, playerId);
return newScore;
}
// Time-based leaderboards (weekly, daily)
async function getDailyTopPlayers(gameId, date, limit = 10) {
const key = `leaderboard:${gameId}:daily:${date}`;
const results = await redis.zRangeWithScores(key, 0, limit - 1, {
REV: true,
});
// Refresh the 30-day TTL (better: set it once when the day's first score is written)
await redis.expire(key, 30 * 24 * 60 * 60);
return results;
}
// Example: API endpoints
app.post("/game/:gameId/score", async (req, res) => {
const { gameId } = req.params;
const { playerId, score } = req.body;
await updateScore(gameId, playerId, score);
const playerRank = await getPlayerRank(gameId, playerId);
res.json({
success: true,
...playerRank,
});
});
app.get("/game/:gameId/leaderboard", async (req, res) => {
const { gameId } = req.params;
const { limit = 100 } = req.query;
const topPlayers = await getTopPlayers(gameId, parseInt(limit));
res.json({ leaderboard: topPlayers });
});
app.get("/game/:gameId/player/:playerId/rank", async (req, res) => {
const { gameId, playerId } = req.params;
const rankData = await getPlayerRank(gameId, playerId);
const context = await getPlayersAround(gameId, playerId, 3);
res.json({
...rankData,
nearby: context,
});
});
Advanced patterns:
// Multiple leaderboards (global, regional, friends)
async function updateAllLeaderboards(playerId, score, region, friendIds) {
const multi = redis.multi();
multi.zAdd("leaderboard:global", { score, value: playerId });
multi.zAdd(`leaderboard:region:${region}`, { score, value: playerId });
friendIds.forEach((friendId) => {
multi.zAdd(`leaderboard:friends:${friendId}`, { score, value: playerId });
});
await multi.exec();
}
// Percentile-based ranking
async function getScorePercentile(gameId, score) {
const key = `leaderboard:${gameId}`;
// Count players at or below this score (use `(${score}` for strictly below)
const belowCount = await redis.zCount(key, "-inf", score);
const totalCount = await redis.zCard(key);
const percentile = ((belowCount / totalCount) * 100).toFixed(2);
return { percentile, betterThan: belowCount };
}
Performance characteristics:
- Add/Update: O(log N)
- Get Rank: O(log N)
- Get Top N: O(log N + N)
- Range Query: O(log N + M) where M is the result count
Perfect for games, social apps, trending content, and any scenario requiring real-time rankings.
Use Case 8: Real-Time Analytics and Counters
The Problem: Tracking live metrics like page views, active users, likes, and engagement requires fast writes and instant reads. Traditional databases create bottlenecks at scale.
The Solution: Redis's atomic operations and data structures make real-time analytics straightforward to implement, and a single instance can sustain very high write rates.
import { createClient } from "redis";
const redis = createClient();
await redis.connect();
// Simple counter with atomic increment
async function trackPageView(pageId, userId) {
const multi = redis.multi();
// Total views for page
multi.incr(`page:${pageId}:views`);
// Unique viewers using HyperLogLog
multi.pfAdd(`page:${pageId}:unique`, userId);
// Views today
const today = new Date().toISOString().split("T")[0];
multi.incr(`page:${pageId}:views:${today}`);
await multi.exec();
}
// Get analytics data
async function getPageAnalytics(pageId) {
const [totalViews, uniqueViewers] = await Promise.all([
redis.get(`page:${pageId}:views`),
redis.pfCount(`page:${pageId}:unique`),
]);
return {
totalViews: parseInt(totalViews) || 0,
uniqueViewers,
};
}
// Track active users in real-time
async function trackActiveUser(userId) {
const now = Date.now();
const activeKey = "users:active";
// Add user to sorted set with current timestamp
await redis.zAdd(activeKey, { score: now, value: userId });
// Remove users inactive for more than 5 minutes
const fiveMinutesAgo = now - 5 * 60 * 1000;
await redis.zRemRangeByScore(activeKey, "-inf", fiveMinutesAgo);
}
async function getActiveUsersCount() {
return await redis.zCard("users:active");
}
// Like/reaction system with Set operations
async function toggleLike(postId, userId) {
const key = `post:${postId}:likes`;
// Check if user already liked
const alreadyLiked = await redis.sIsMember(key, userId);
if (alreadyLiked) {
// Unlike
await redis.sRem(key, userId);
return { liked: false, action: "unliked" };
} else {
// Like
await redis.sAdd(key, userId);
return { liked: true, action: "liked" };
}
}
async function getLikeStats(postId) {
const key = `post:${postId}:likes`;
const [count, users] = await Promise.all([
redis.sCard(key),
redis.sMembers(key), // fine for small sets; avoid on very large ones
]);
return {
likeCount: count,
likedBy: users.slice(0, 10), // a sample of 10 (Sets are unordered)
};
}
async function hasUserLiked(postId, userId) {
return await redis.sIsMember(`post:${postId}:likes`, userId);
}
// Time-series data with Sorted Sets
async function trackMetric(metricName, value, timestamp = Date.now()) {
const key = `metrics:${metricName}`;
await redis.zAdd(key, {
score: timestamp,
value: `${timestamp}:${value}`,
});
// Keep only last 7 days
const sevenDaysAgo = timestamp - 7 * 24 * 60 * 60 * 1000;
await redis.zRemRangeByScore(key, "-inf", sevenDaysAgo);
}
async function getMetricHistory(metricName, startTime, endTime) {
const key = `metrics:${metricName}`;
const results = await redis.zRangeByScore(key, startTime, endTime);
return results.map((entry) => {
const [timestamp, value] = entry.split(":");
return {
timestamp: parseInt(timestamp),
value: parseFloat(value),
};
});
}
// Trending content algorithm
async function updateTrendingScore(contentId, engagement) {
const now = Date.now();
const ageInHours = (now - engagement.createdAt) / (1000 * 60 * 60);
// Reddit-style hot score formula
const score =
(engagement.upvotes - engagement.downvotes) / Math.pow(ageInHours + 2, 1.5);
await redis.zAdd("trending:content", {
score,
value: contentId,
});
// Prune to the top 1,000 entries so the set can't grow unbounded
// (a fixed score floor rarely fires, since hot scores decay toward zero)
await redis.zRemRangeByRank("trending:content", 0, -1001);
}
async function getTrendingContent(limit = 20) {
const contentIds = await redis.zRange("trending:content", 0, limit - 1, {
REV: true,
});
// Fetch full content details
const content = await Promise.all(contentIds.map((id) => getContentById(id)));
return content;
}
// Real-time counters dashboard
async function getDashboardMetrics() {
const multi = redis.multi();
// Get multiple metrics in one round trip
multi.get("metrics:total:users");
multi.get("metrics:total:posts");
multi.zCard("users:active");
multi.get("metrics:revenue:today");
multi.zCard("trending:content");
const results = await multi.exec();
return {
totalUsers: parseInt(results[0]) || 0,
totalPosts: parseInt(results[1]) || 0,
activeUsers: results[2] || 0,
revenueToday: parseFloat(results[3]) || 0,
trendingItems: results[4] || 0,
};
}
// Sliding window rate counter (e.g., requests per minute)
async function trackRequest(endpoint) {
const now = Date.now();
const key = `requests:${endpoint}`;
const windowMs = 60000; // 1 minute
const multi = redis.multi();
// Add current request
multi.zAdd(key, { score: now, value: `${now}:${Math.random()}` });
// Remove old requests outside window
multi.zRemRangeByScore(key, "-inf", now - windowMs);
// Count requests in window
multi.zCard(key);
// Set expiration
multi.expire(key, 120); // 2 minutes
const results = await multi.exec();
const requestsPerMinute = results[2];
return requestsPerMinute;
}
// Batch counter updates (efficient for high-volume events)
class CounterBatcher {
constructor(redis, flushIntervalMs = 1000) {
this.redis = redis;
this.counters = new Map();
this.flushIntervalMs = flushIntervalMs;
// Auto-flush periodically
this.flushInterval = setInterval(() => this.flush(), flushIntervalMs);
}
increment(key, amount = 1) {
const current = this.counters.get(key) || 0;
this.counters.set(key, current + amount);
}
async flush() {
if (this.counters.size === 0) return;
const multi = this.redis.multi();
for (const [key, amount] of this.counters.entries()) {
multi.incrBy(key, amount);
}
await multi.exec();
this.counters.clear();
}
async close() {
clearInterval(this.flushInterval);
await this.flush();
}
}
// Usage example
const batcher = new CounterBatcher(redis, 5000); // Flush every 5 seconds
app.post("/api/content/:id/view", async (req, res) => {
const { id } = req.params;
// Batch the increment instead of writing immediately
batcher.increment(`content:${id}:views`);
batcher.increment("global:total:views");
res.json({ success: true });
});
// Example: Real-time analytics API
app.get("/analytics/dashboard", async (req, res) => {
const metrics = await getDashboardMetrics();
const trending = await getTrendingContent(10);
const activeUsers = await getActiveUsersCount();
res.json({
metrics: {
...metrics,
activeNow: activeUsers,
},
trending,
});
});
app.get("/analytics/post/:postId", async (req, res) => {
const { postId } = req.params;
const [analytics, likes] = await Promise.all([
getPageAnalytics(postId),
getLikeStats(postId),
]);
res.json({
postId,
views: analytics.totalViews,
uniqueViewers: analytics.uniqueViewers,
...likes,
});
});
Performance tips:
- Use pipelines/transactions: Batch multiple operations to reduce network round trips
- HyperLogLog for cardinality: Track unique counts with minimal memory (0.81% error rate)
- Expire old data: Set TTLs on time-series data to prevent memory bloat
- Batch high-frequency writes: Reduce Redis load by batching counter updates
Production considerations:
- Use Redis Cluster for horizontal scaling beyond single-instance limits
- Monitor memory usage with the INFO memory command
- Configure maxmemory policies (e.g., allkeys-lru) for automatic eviction
- Use Redis persistence (RDB + AOF) for durability
Cache Invalidation Strategy
Caching is only useful if the data remains correct. Stale cache entries can be more harmful than slow queries.
Common invalidation approaches:
- Time-based expiration (TTL): Automatically expire entries after a fixed duration.
- Write-through invalidation: Update or delete cache entries immediately when the underlying data changes.
- Event-based invalidation: Publish cache invalidation events (e.g., via Redis Pub/Sub or Streams) so other services can invalidate related keys.
- Versioned keys: Include a version or timestamp in cache keys to force refreshes after schema or logic changes.
Relying on TTL alone is often insufficient for correctness in systems where data consistency matters.
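As an illustration, write-through invalidation and versioned keys might look like this (a sketch; db.updateUser and the key layout are placeholders):

// Write-through: update the source of truth, then drop the stale entry
async function updateUser(userId, fields) {
  await db.updateUser(userId, fields); // hypothetical database call
  await redis.del(`user:${userId}`); // the next read repopulates the cache
}

// Versioned keys: bump a namespace version to invalidate a whole group
async function invalidateAllUserCache() {
  await redis.incr("cache:users:version");
}

async function userCacheKey(userId) {
  const version = (await redis.get("cache:users:version")) || "0";
  return `user:v${version}:${userId}`; // old-version keys age out via TTL
}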
Best Practices Across All Use Cases
1. Connection Management
Always use connection pooling and handle reconnections gracefully:
import { createClient } from "redis";
const redis = createClient({
socket: {
reconnectStrategy: (retries) => {
if (retries > 10) {
return new Error("Max retries reached");
}
return Math.min(retries * 100, 3000);
},
},
});
redis.on("error", (err) => console.error("Redis error:", err));
redis.on("connect", () => console.log("Redis connected"));
redis.on("reconnecting", () => console.log("Redis reconnecting"));
await redis.connect();
2. Use Transactions for Atomic Operations
When multiple operations must succeed or fail together:
const multi = redis.multi();
multi.decrBy("inventory:item:123", quantity);
multi.incrBy("user:456:cart", quantity);
multi.set(`order:${orderId}`, JSON.stringify(orderData));
const results = await multi.exec();
// Commands run as one atomic block (no interleaving with other clients);
// note that Redis does not roll back if an individual command errors
3. Set Appropriate Expiration Times
Always expire temporary data to prevent memory leaks:
// Session with 24-hour expiration
await redis.set("session:abc123", sessionData, { EX: 86400 });
// Rate limit with a 1-minute window (MULTI keeps incr + expire together)
const multi = redis.multi();
multi.incr("rate:user:123");
multi.expire("rate:user:123", 60, "NX"); // NX: only set if no TTL exists
await multi.exec();
4. Monitor and Alert
Track key Redis metrics (a quick way to pull them is sketched after this list):
- Memory usage and fragmentation
- Hit/miss ratio for cached data
- Latency percentiles (p50, p95, p99)
- Evicted keys count
- Connection count
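A lightweight way to read these numbers is the INFO command (a sketch; the field names follow the standard INFO output):

// INFO returns a "key:value" text report, per section
const memory = await redis.info("memory");
const stats = await redis.info("stats");

// Parse "key:value" lines into an object
const parse = (report) =>
  Object.fromEntries(
    report
      .split("\r\n")
      .filter((line) => line.includes(":"))
      .map((line) => line.split(":"))
  );

const { used_memory_human, mem_fragmentation_ratio } = parse(memory);
const { keyspace_hits, keyspace_misses, evicted_keys } = parse(stats);
console.log({ used_memory_human, mem_fragmentation_ratio });
console.log({ keyspace_hits, keyspace_misses, evicted_keys });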
5. Plan for Failure
Redis should enhance your system, not be a single point of failure:
async function getCachedData(key) {
try {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
} catch (error) {
console.error("Redis error, falling back to DB:", error);
// Degrade gracefully
}
// Fallback to database
return await database.query(key);
}
Security Considerations
Redis should never be exposed directly to the public internet.
Minimum production security practices:
- Enable authentication and use Redis ACLs to restrict command access.
- Bind Redis to private network interfaces only.
- Use TLS for connections, especially with managed services or cross-network deployments.
- Rotate credentials regularly and avoid embedding secrets in source code.
- Disable dangerous commands (FLUSHALL, CONFIG, etc.) for non-admin users.
Treat Redis as part of your internal infrastructure, not as a public-facing service.
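For example, an application user can be locked down to its own key prefix and safe command categories (a sketch; the user name, password, and key pattern are placeholders):

// Create a least-privilege user via the ACL command
await redis.sendCommand([
  "ACL", "SETUSER", "app-user",
  "on", // enable the user
  ">change-me-password", // set a password
  "~app:*", // keys limited to the app: prefix
  "+@read", "+@write", // allow basic data commands
  "-@admin", "-@dangerous", // block FLUSHALL, CONFIG, SHUTDOWN, etc.
]);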
Common Pitfalls to Avoid
Pitfall 1: Storing Large Values
- Redis is optimized for small values (under 1MB)
- For large data, store in S3/database and keep references in Redis
- Use compression for larger JSON objects
Pitfall 2: Blocking Operations in Production
- Avoid KEYS * in production (use SCAN instead; see the sketch after this list)
- Don't use blocking commands without timeouts
- Use pipelining for bulk operations
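With node-redis, the scan iterator keeps key iteration non-blocking (a sketch):

// Walk keys in small batches instead of freezing Redis with KEYS *
for await (const key of redis.scanIterator({ MATCH: "session:*", COUNT: 100 })) {
  console.log(key); // inspect or delete in manageable chunks
}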
Pitfall 3: Not Using Appropriate Data Structures
- Don't serialize complex data into strings when Hashes work better
- Use Sorted Sets for rankings, not sorted application-side arrays
- Use Sets for unique collections, not arrays in strings
Pitfall 4: Ignoring Memory Management
- Set maxmemory and eviction policies
- Monitor memory fragmentation
- Use TTLs liberally to auto-expire data
Pitfall 5: Over-Engineering
- Don't use Redis for everything
- Keep complex business logic in application code
- Use Redis for what it's good at: fast, simple operations
Choosing the Right Redis Deployment
Single Instance:
- Development and small applications
- Simple to manage
- No built-in high availability
Redis Sentinel:
- Automatic failover
- Monitoring and notifications
- Good for most production apps
Redis Cluster:
- Horizontal scaling across nodes
- Handles terabytes of data
- Required for multi-region deployments
Managed Services (AWS ElastiCache, Redis Cloud):
- Automatic backups and patching
- Built-in monitoring and alerting
- Less operational overhead
Conclusion
Redis provides far more functionality than simple key-value caching. It's a versatile toolkit that solves some of the hardest problems in distributed systems: maintaining shared state, coordinating work across servers, delivering real-time updates, and processing high-volume events.
Key Takeaways:
- Use Redis sessions for stateless, scalable authentication
- Implement rate limiting to protect your APIs from abuse
- Leverage Pub/Sub for real-time features and Streams for reliable messaging
- Prevent race conditions with distributed locks
- Build robust background job processing with queues
- Create real-time leaderboards and analytics with Redis data structures
- Always profile, monitor, and set appropriate TTLs
These patterns are commonly used in real-world systems. Start with the use cases that solve your immediate problems, and gradually explore more advanced patterns as your application scales.
Remember: Redis is a powerful tool, but like any tool, it should be used thoughtfully. Profile your application, understand your bottlenecks, and apply Redis where it provides clear value. Used well, it can strip real complexity out of a distributed system.
Further Resources
- Redis Official Documentation
- Redis University (Free Courses)
- BullMQ Documentation
- Redis Query Engine Best Practices
- Redlock Algorithm
Want to discuss Redis architecture for your specific use case? Let's connect - I love talking about system design and scalability challenges!

