
Highly Available Redis Cluster

I’ve done some work on a design for a Redis cluster lately. There’s a lot of information on the subject, but it is scattered, so I am going to try to provide a complete document here for one of the ways to do this.

Tools

  • sentinel: Redis’s own monitoring and availability tool. We will use it to monitor our master/slave nodes; sentinel will promote a slave to master when an issue arises.
  • haproxy: a TCP load balancer (and one of my favorite open source tools, ever). haproxy can test whether a Redis node is a master or a slave; we will use it as the front end that clients connect to. haproxy will detect which node is the master and make sure traffic flows to the correct node.
  • keepalived: a network-level failover tool (VRRP). We will use keepalived to publish a virtual IP and manage failover between our haproxy nodes.

Layout

  • haproxy1 – master haproxy node
  • haproxy2 – slave haproxy node
  • redis1 – master redis node
  • redis2 – slave redis node
  • sentinel – a sentinel quorum node

It will look something like this: http://www.gliffy.com/go/publish/6038964

redis

How do we reach high availability using the above setup?

Redis replication – Redis has built-in replication; we will set up redis2 as a slave, which will make sure both redis nodes have the same RDB data.

Redis failure – If our Redis master ( redis1 ) fails, both the sentinel node and the slave redis node ( redis2 ) will detect the failure. We use a dedicated sentinel box to make sure we have a quorum, which prevents a false-positive failover in case of a network issue between the two redis nodes. Basically, two separate systems monitor the master and both have to agree that it has failed. If both redis2 and sentinel agree, the redis-sentinel process running on the slave node ( redis2 ) will promote that node to master. haproxy monitors the master and slave nodes ( redis1/2 ) at all times and makes sure traffic is directed to whichever node is currently the master.
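The check haproxy performs boils down to asking each node for its replication role. Sketched below against a sample of what INFO replication returns on a master (the grep/cut parsing is illustrative, not haproxy's actual mechanism; against a live node you would feed it the output of redis-cli -h 10.1.1.100 info replication):

```shell
# Sample 'INFO replication' output, as a master node would return it
info="role:master
connected_slaves:1"

# Pass the check only when the role line reads 'master' --
# the same string the haproxy tcp-check rules look for
role=$(printf '%s\n' "$info" | grep '^role:' | cut -d: -f2)
if [ "$role" = "master" ]; then
  echo "UP"
else
  echo "DOWN"
fi
```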

haproxy failure – keepalived monitors the haproxy process running on its own node, and it also monitors its peer node for network connectivity. In the event of an haproxy failure or a hardware failure, keepalived will move the virtual IP to the slave haproxy node.

Let’s get to work!

I am using Ubuntu ( 14.04 LTS ) for this tutorial because all the packages are available without needing to add external sources; however, I tested this on Oracle Linux 7 and CentOS 6.5 without issues.

HAproxy nodes

sudo apt-get install keepalived haproxy

Tweak sysctl to allow haproxy to bind to the virtual IP, even if it is not currently assigned to the node it’s running on.

echo "net.ipv4.ip_nonlocal_bind=1" | sudo tee -a /etc/sysctl.conf 
sudo sysctl -p

Edit the haproxy config file ( /etc/haproxy/haproxy.cfg ), add the redis frontend and backend below, and reload haproxy.

frontend redis
 #haproxy should listen on the virtual ip
 bind 10.1.1.10:6379 name redis
 default_backend redis_backend

backend redis_backend
 option tcp-check
 #haproxy will look for the following strings to determine the master
 tcp-check send PING\r\n
 tcp-check expect string +PONG
 tcp-check send info\ replication\r\n
 tcp-check expect string role:master
 tcp-check send QUIT\r\n
 tcp-check expect string +OK
 #these are the ip's of the two redis nodes
 server redis1 10.1.1.100:6379 check inter 1s
 server redis2 10.1.1.101:6379 check inter 1s
sudo /etc/init.d/haproxy reload

Edit keepalived’s configuration file ( /etc/keepalived/keepalived.conf ) and reload it.

vrrp_script chk_haproxy {
 script "killall -0 haproxy" # verify the pid exists
 interval 2 # check every 2 seconds
 weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
 interface eth0 # interface to monitor
 state MASTER
 virtual_router_id 51 # assign one ID for this route
 priority 101 # 101 on master, 100 on backup
 virtual_ipaddress {
 10.1.1.10 # the virtual IP
 }
 track_script {
 chk_haproxy
 }
}
sudo /etc/init.d/keepalived reload
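The configuration above is for the master haproxy node; on the backup node ( haproxy2 ) the same file is nearly identical, differing only in state and priority (a sketch, following the comments above):

```
vrrp_script chk_haproxy {
 script "killall -0 haproxy"
 interval 2
 weight 2
}

vrrp_instance VI_1 {
 interface eth0
 state BACKUP # backup instead of MASTER
 virtual_router_id 51 # must match the master node
 priority 100 # lower than the master's 101
 virtual_ipaddress {
 10.1.1.10 # the same virtual IP
 }
 track_script {
 chk_haproxy
 }
}
```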

On the redis and sentinel boxes

sudo apt-get install redis-server

Modify redis to listen on its LAN IP as well (by default it listens on 127.0.0.1 only).
Find this line in /etc/redis/redis.conf

bind 127.0.0.1

And change it, on redis1, to

bind 127.0.0.1 10.1.1.100

And on redis2

bind 127.0.0.1 10.1.1.101

Restart redis server on both nodes

sudo /etc/init.d/redis-server restart

Confirm redis is listening on the correct IPs using:

moti@redis1:~# sudo netstat -tnlp  
 Active Internet connections (only servers)  
 Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name  
 tcp 0 0 10.1.1.100:6379 0.0.0.0:* LISTEN 1787/redis-server 1  
 tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 1787/redis-server 1

Set up redis2 as the slave node (run this on redis2)

redis-cli slaveof 10.1.1.100 6379
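Note that a slaveof issued from redis-cli does not survive a restart of the redis-server process; to make the role persist across reboots, the same directive can also be added to /etc/redis/redis.conf on redis2 (sentinel will rewrite this during a failover):

```
slaveof 10.1.1.100 6379
```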

Make sure you have a working master/slave setup before configuring sentinel; you can confirm the roles by running this on each node:

redis-cli info | grep ^role

Your slave should come back with something similar to this

role:slave
master_host:10.1.1.100
master_port:6379

Set up the sentinel config files ( /etc/redis/sentinel.conf ); this common part goes on all nodes:

port 26379
daemonize yes
pidfile "/var/run/redis/redis-sentinel.pid"
loglevel notice
syslog-enabled yes

Master setup – the trailing 2 on the monitor line is the quorum: two sentinels must agree that the master is down before a failover starts.

sentinel monitor mymaster 10.1.1.100 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 900000
sentinel config-epoch mymaster 21

Slave setup

sentinel known-slave mymaster 10.1.1.101 6379
sentinel known-sentinel mymaster 10.1.1.101 26379 d0e835d7a3c263764df51dccddfc184897967995
sentinel known-sentinel mymaster 10.1.1.102 26379 60459b991198710c271490fb6903f79c48a41584
sentinel monitor mymaster 10.1.1.100 6379 2

On the sentinel node, prevent redis-server from starting on boot, and make sure it’s not running

sudo update-rc.d redis-server disable  
sudo /etc/init.d/redis-server stop

Start redis-sentinel on all nodes

sudo /usr/bin/redis-server /etc/redis/sentinel.conf --sentinel

At this point, you should have a redis cluster with master/slave replication and a sentinel quorum.

Here is a sample log entry showing a master-down situation

Aug 13 14:02:04 redis2 redis[1852]: +odown master mymaster 10.1.1.100 6379 #quorum 3/2
Aug 13 14:02:28 redis2 redis[1852]: +new-epoch 28
Aug 13 14:02:28 redis2 redis[1852]: +vote-for-leader 100540f4bb8e1ee5292af3d1a25371c6943485de 28
Aug 13 14:02:29 redis2 redis[1852]: +sdown master resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +odown master resque 10.1.1.100 6379 #quorum 3/2
Aug 13 14:02:29 redis2 redis[1852]: +new-epoch 29
Aug 13 14:02:29 redis2 redis[1852]: +try-failover master resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +vote-for-leader 8b183893db09b6b2eb9be506358d60352276c767 29
Aug 13 14:02:29 redis2 redis[1852]: 10.1.1.102:26379 voted for 8b183893db09b6b2eb9be506358d60352276c767 29
Aug 13 14:02:29 redis2 redis[1852]: 10.1.1.100:26379 voted for 8b183893db09b6b2eb9be506358d60352276c767 29
Aug 13 14:02:29 redis2 redis[1852]: +elected-leader master resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +failover-state-select-slave master resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +selected-slave slave 10.1.1.101:6379 10.1.1.101 6379 @ resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +failover-state-send-slaveof-noone slave 10.1.1.101:6379 10.1.1.101 6379 @ resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +vote-for-leader 100540f4bb8e1ee5292af3d1a25371c6943485de 29

At which point redis2 is promoted to master, as all nodes agree that redis1 is down:

Aug 13 14:02:29 redis2 redis[1852]: +failover-state-wait-promotion slave 10.1.1.101:6379 10.1.1.101 6379 @ resque 10.1.1.100 6379
Aug 13 14:02:29 redis2 redis[1852]: +switch-master resque 10.1.1.100 6379 10.1.1.101 6379
Aug 13 14:02:29 redis2 redis[1852]: +slave slave 10.1.1.100:6379 10.1.1.100 6379 @ resque 10.1.1.101 6379

And redis-cli agrees

moti@redis2:~# redis-cli info | grep role  
 role:master

TO DOs:

– there’s no init script for redis-sentinel to start it on boot; you need to write one or use the one from opentodo.net ( see link below ).
– if using iptables ( you should! ), make sure the redis boxes can talk to each other ( open ports 6379/tcp and 26379/tcp between all 3 nodes ).
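For the iptables point, rules along these lines (iptables-save format, a sketch assuming the 10.1.1.0/24 addresses used throughout) would open the needed ports on each node:

```
# redis itself (needed on the redis nodes, for replication and haproxy checks)
-A INPUT -s 10.1.1.0/24 -p tcp --dport 6379 -j ACCEPT
# sentinel traffic (needed on all 3 nodes)
-A INPUT -s 10.1.1.0/24 -p tcp --dport 26379 -j ACCEPT
```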

Resources used: