How to scale your web socket server (WSS) using Redis Cache

TL;DR Check out the sample code on GitHub.

I was recently asked if Azure PaaS (Web Apps) and Azure Load Balancers supported WebSockets, and I said yes, because, well, I read that they did. But I had never tried it, so I figured I'd follow the Chat App tutorial to try it for myself. Easy enough: I created the local app, added it to a git repo, then deployed it to Azure Web Apps, and everything worked as expected. (How to use git to deploy Node to Azure? Read: Continuous deployment using GIT in Azure App Service.)

So to further my investigation I scaled out my web app to 2 instances (which are automatically load balanced by Azure), and at first things seemed to go smoothly, until I realized that not all chat messages were being delivered to all chat clients. Sandrino Di Mattia answers this really well on Stack Overflow, so kudos to him on that! In short, your socket server only knows about the clients connected to it, so it cannot relay messages to clients connected to other WebSocket servers (i.e., the other instances of your web app).

Say you have 2 instances of your server (2 instances of your web app running on Azure). When you connect to your app you are actually connecting to the load balancer, which in turn passes each incoming connection to an available instance of your app in a round-robin fashion. That means each instance of your web app is only aware of the socket connections (sessions) that are connected to it, and so it cannot relay messages to all connected sessions.


  • You are running 2 instances of your web app
  • Client A is load-balanced to Instance 1
  • Client B is load-balanced to Instance 2
  • Client C is load-balanced to Instance 1

So in the example of our chat app, if Client A sends a message through Instance 1, only Client A and Client C will see the message. Why? Because Client B is connected to Instance 2, which did not receive the message.
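The failure mode can be sketched without any real sockets. This is a toy model, not the chat app's actual code: `makeInstance` and its helpers are hypothetical, but the message flow matches the scenario above:

```javascript
// Toy model of the problem: each "instance" only knows about its own clients.
// No real sockets here; makeInstance and its methods are illustrative only.
function makeInstance() {
  var clients = {}; // clientId -> messages received so far
  return {
    connect: function (id) { clients[id] = []; },
    inbox: function (id) { return clients[id]; },
    // Like io.emit: broadcasts only to sockets connected to *this* instance
    emit: function (msg) {
      Object.keys(clients).forEach(function (id) { clients[id].push(msg); });
    }
  };
}

var instance1 = makeInstance();
var instance2 = makeInstance();

// Round-robin load balancing: A -> 1, B -> 2, C -> 1
instance1.connect('A');
instance2.connect('B');
instance1.connect('C');

// Client A sends a chat message through Instance 1
instance1.emit('hello from A');

console.log(instance1.inbox('A')); // A sees the message
console.log(instance1.inbox('C')); // C sees the message
console.log(instance2.inbox('B')); // empty - B never receives it
```

Run it with `node` and you can see that Instance 2's broadcast list is simply never touched.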

Queues to the rescue

Luckily the solution is pretty simple: just make sure that whenever an instance of your app receives a message, it relays it to all other instances, which in turn relay it to their active clients. Sounds easy enough, right? Well, yes and no. If you are thinking about configuring the endpoints (IP addresses) of all the other instances and making requests to them directly … just stop. This is a bad idea! Why? What if an instance is down? What if you add more instances? How are you going to ensure all instances know about each other and that their locations are configured correctly, etc.? This is what we call "tightly coupled architecture", which is bad m'kay. The easiest way to do this is to use a pub/sub model with Redis Cache.


The Publish-Subscribe (Pub/Sub) pattern is a core messaging pattern whereby you subscribe to a message channel and receive messages from that channel whenever one is published to it (regardless of who published it). This gives you a decoupled architecture (instances don't know about, or connect to, other instances), which simplifies everything, and that's a good thing. Each instance just needs to know where to subscribe for messages. Luckily Redis Cache supports Pub/Sub and runs on Azure (as do many other messaging/queue systems). To use this pattern, we will need to:

  • Configure Redis
  • Have each instance register as a subscriber to our Redis message channel
  • Have each instance publish their messages to that channel
  • Have each instance relay messages received from the subscribed Redis message channel to their respective clients
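The steps above can be sketched with a toy in-memory stand-in for the Pub/Sub broker. In production the `subscribe`/`publish` calls go to Redis (as the code later in this post does); this sketch only shows the message flow, and all of the names in it are made up:

```javascript
// Toy in-memory stand-in for Redis Pub/Sub, just to illustrate the flow.
// In production, subscribe() and publish() would talk to Redis instead.
var channels = {}; // channel name -> subscriber callbacks

function subscribe(channel, handler) {
  (channels[channel] = channels[channel] || []).push(handler);
}

function publish(channel, message) {
  (channels[channel] || []).forEach(function (h) { h(message); });
}

function makeInstance() {
  var clients = {}; // clientId -> messages received so far
  // Each instance registers as a subscriber to the channel...
  subscribe('chat', function (msg) {
    // ...and relays every received message to its own clients
    Object.keys(clients).forEach(function (id) { clients[id].push(msg); });
  });
  return {
    connect: function (id) { clients[id] = []; },
    inbox: function (id) { return clients[id]; },
    // Incoming chat messages are published to the channel, not emitted locally
    send: function (msg) { publish('chat', msg); }
  };
}

var instance1 = makeInstance();
var instance2 = makeInstance();
instance1.connect('A');
instance2.connect('B');
instance1.connect('C');

instance1.send('hello from A');
// Every client now receives the message, whichever instance it is connected to
```

The key difference from the broken version: an instance never broadcasts directly; it publishes, and every instance (including itself) relays from the channel to its own clients.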

Setting up Redis Cache on Azure

Azure has a solid, quick and easy Getting Started document that shows you how to set up Redis Cache as an Azure PaaS service (see Create your first Redis cache). PaaS, or Platform-as-a-Service, means you don't have to go mucking about installing servers or virtual machines, configuring operating systems, and installing/patching/configuring software. You just say "hey Azure, I need this" and Azure goes "you got it!" and takes care of the rest. To learn more about PaaS check out this document.

OK, so go ahead and create your Redis Cache; I'll wait (hint: do this).


  • Be sure to create your Redis Cache in the same region where you have deployed (or will deploy) your web app
  • To make things easier, deploy all related resources (web app, database, caches, etc.) into the same resource group

Here’s a great article on How to use Azure Redis Cache with Node.js.

Updating the chat app to use Redis

If you used the chat app tutorial, you will have to make a few changes:

  1. Rename index.js to server.js, as Azure Web Apps expects a server.js file for Node apps
  2. Install the redis package using npm by running the following in the root folder of your application (this will automatically update your package.json)

npm install --save redis

  3. Add the following to the top of your server.js file

var redis = require("redis");

// REDIS_PORT, REDIS_URI and REDIS_KEY are placeholders for your cache's
// port, host name and access key (you'll find the values in the Azure portal)
// Redis requires one client for Subscribe commands, and another for Publish commands
var subClient = redis.createClient(REDIS_PORT, REDIS_URI, { auth_pass: REDIS_KEY });
var pubClient = redis.createClient(REDIS_PORT, REDIS_URI, { auth_pass: REDIS_KEY });

  4. Update the socket.on('chat message', function(msg) { ... }) handler to publish the message to Redis instead of relaying it directly to the socket clients

// Send message to Redis
pubClient.publish('chat', JSON.stringify(msg));

  5. Subscribe to the Redis 'chat' channel and relay all received messages to the active socket clients

// Subscribe to Redis channel 'chat' for messages
subClient.subscribe('chat');

// Handle messages received from the subscribed Redis channel
subClient.on('message', function(channel, message) {
  // Relay message from Redis to all active clients (sockets)
  io.emit('chat message', JSON.parse(message));
});

That’s it!

Fingers crossed, it should work; well, at least it did on my machine ;)

Over on my GitHub account you will find a repo containing a fully functional sample application that uses Azure Redis & Azure Web Apps.

