We are heavy users of Redis; in fact, most of our applications are built on top of it. Last month we decided it was time to upgrade one of our main (though not biggest) clusters, which stores Resque jobs, from Redis 2.6.x to Redis 2.8.8, and to start using the latest Redis Sentinel version for automatic failover.
Redis Sentinel is a useful system designed to perform tasks over your Redis instances, commonly used for monitoring and for automatic failover when your Redis master stops working the way it is supposed to. During this process we discovered that while Redis Sentinel is performing a failover, there is a period of time, from a few milliseconds up to a few seconds depending on the size of your Redis database, during which your instances are not able to handle any requests at all.
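The usual way to live through that window on the client side is to wrap Redis calls in a retry with exponential backoff. A minimal Ruby sketch of the idea (the helper name, defaults, and error classes here are illustrative, not our production code):

```ruby
# Retry a block with exponential backoff when transient errors occur,
# e.g. while Sentinel is mid-failover and no master is reachable.
# `errors` defaults to a generic class here; with the redis gem you
# would typically pass Redis::CannotConnectError and Redis::TimeoutError.
def with_retries(attempts: 5, base_delay: 0.1, errors: [StandardError])
  tries = 0
  begin
    yield
  rescue *errors => e
    tries += 1
    raise e if tries >= attempts
    sleep(base_delay * (2**tries)) # back off: 0.2s, 0.4s, 0.8s, ...
    retry
  end
end

# Usage (assuming a configured `redis` client):
#   with_retries { redis.lpush("resque:queue:default", payload) }
```

If the failover takes longer than the total backoff, the last error is re-raised, so the caller still has to decide whether to drop or re-enqueue the work.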
Depending on how your application uses Redis, this can mean unavailability: something you can accept during an outage, but not during planned maintenance periods such as Redis version upgrades, scheduled Redis maintenance, or whenever Amazon declares your instance in maintenance or pending reboot.
While using Redis Sentinel we discovered one nasty bug, which we will explain in more depth in our next post. Luckily for us, Antirez fixed the issue right away and released Redis 2.8.10.
Because of this new release, we had to find a way to upgrade Redis from version 2.8.8 to version 2.8.10, and we found that there was no easy way to do it transparently, with no downtime. Even though our code had safety measures to handle retries and timeouts, we still hit timeouts from our cluster, so we had to find a workaround to make this work.
Imagine a scenario where you have a FrontEnd and a backend with Resque workers for asynchronous tasks, like the one described below, working with Sentinel. Your application talks to Sentinel locally so it knows the status of your cluster of instances.
To upgrade your master Redis instance with no downtime, we found two approaches. I will start with the easier one:
- Create 3 new Redis instances.
- Create a new cluster in Sentinel with a new name.
- Since we have 2 ResqueJob instances, deploy one ResqueJob instance pointing to the new Sentinel cluster.
- Deploy all FrontEnds pointing to the new Sentinel cluster.
- Wait until there are no pending jobs in the old Sentinel cluster, then deploy the remaining ResqueJob instance pointing to the new Sentinel cluster.
- Remove the old cluster from Sentinel’s configuration.
- Kill old Redis instances.
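In Sentinel configuration terms, this first approach boils down to monitoring the new master alongside the old one until the old queues drain. A sketch of the relevant `sentinel.conf` lines, where cluster names, addresses, and quorum are placeholders of our own:

```
# sentinel.conf: keep watching the old master while its jobs drain
sentinel monitor resque-old 10.0.0.1 6379 2
# ...and register the new cluster alongside it
sentinel monitor resque-new 10.0.1.1 6379 2
```

The same can be done at runtime with `SENTINEL MONITOR` and `SENTINEL REMOVE` instead of editing the file and restarting each Sentinel.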
Assuming that you have 3 Redis instances like this:
- Redis01 => Master.
- Redis02 => Slave of Redis01.
- Redis03 => Slave of Redis01.
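In `redis.conf` terms (hostnames and port are placeholders), that starting topology is simply:

```
# redis01.conf: master, no slaveof directive
# redis02.conf and redis03.conf:
slaveof redis01 6379
```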
You should follow these steps:
- Upgrade your Redis slaves (Redis02 and Redis03) one by one.
- Create a new Redis instance called Redis04.
- Attach this Redis04 as a slave of Redis02.
- Create a new cluster in Sentinel called `resquemaintenance`, with a quorum bigger than the number of Sentinels in your infrastructure, pointing to Redis02. Since that quorum can never be reached, Sentinel will not trigger a failover during the maintenance.
- Set `slave-read-only no` on Redis02.
- Deploy your ResqueJob instances pointing to the Sentinel cluster `resquemaintenance`.
- Deploy your FrontEnd instances pointing to the Sentinel cluster `resquemaintenance`.
- Remove the old cluster from Sentinel’s configuration.
- Run `slaveof no one` on Redis02.
- Set `slave-read-only yes` on Redis02.
- Attach Redis03 as a slave of Redis02.
- Delete Redis01.
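The steps above can be sketched as a sequence of `redis-cli` commands. Hostnames, the Sentinel port, the old cluster name (`resque` here), and the Sentinel count are assumptions for illustration; with e.g. 3 Sentinels, a quorum of 4 can never be met:

```shell
# Attach the new instance as a slave of Redis02
redis-cli -h redis04 slaveof redis02 6379

# Register the maintenance cluster with an unreachable quorum (4 > 3 Sentinels)
redis-cli -h sentinel01 -p 26379 sentinel monitor resquemaintenance redis02 6379 4

# Let Redis02 accept writes while it is still formally a slave
redis-cli -h redis02 config set slave-read-only no

# ...deploy ResqueJob and FrontEnd instances against resquemaintenance...

# Drop the old cluster from Sentinel and promote Redis02
redis-cli -h sentinel01 -p 26379 sentinel remove resque
redis-cli -h redis02 slaveof no one
redis-cli -h redis02 config set slave-read-only yes

# Re-attach Redis03 under the new master
redis-cli -h redis03 slaveof redis02 6379
```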
We ended up using the second approach, with some tooling built with Capistrano to automate the tasks for all Redis and Redis Sentinel configurations.
Before implementing it we knew about the good parts of Redis Sentinel, but we were not aware of all the corner cases we could face when maintaining Redis under this topology. This will help us evaluate our next Redis migration more carefully.
During this process we experienced, once more, the importance of task automation, which makes everything much easier and less error-prone.